Quantum probability assignment limited by relativistic causality.
Han, Yeong Deok; Choi, Taeseung
2016-03-14
Quantum theory exhibits nonlocal correlations, which troubled Einstein but were found to satisfy relativistic causality. In the standard quantum framework, the correlation for a shared quantum state manifests itself through joint probability distributions obtained by applying state reduction and the probability assignment known as the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can change if a different probability assignment rule is applied; as a result, the amount of nonlocality in the quantum correlation changes as well. The issue is whether changing the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule for quantum measurement can be derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through the quantum probability assignment.
The Impact of Exceeding TANF Time Limits on the Access to Healthcare of Low-Income Mothers.
Narain, Kimberly; Ettner, Susan
2017-01-01
The objective of this article is to estimate the relationship between exceeding Temporary Assistance for Needy Families (TANF) time limits and health insurance, healthcare, and health outcomes. The authors use Heckman selection models that exploit variability in state time-limit duration and in the timing of policy implementation as identifying exclusion restrictions, adjusting the effect estimates of exceeding time limits for possible correlations between the probability of exceeding time limits and unobservable factors influencing the outcomes. The authors find that exceeding time limits decreases the predicted probability of Medicaid coverage, increases the predicted probability of being uninsured, and decreases the predicted probability of annual medical provider contact.
The reduced transition probabilities for excited states of rare-earths and actinide even-even nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghumman, S. S.
The theoretical B(E2) ratios have been calculated using the DF, DR, and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from the second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities in recent data, are found to compare better with those calculated using the Krutov model and the SU(3) limit of the IBA than with the DR and DF models.
Stochastic entrainment of a stochastic oscillator.
Wang, Guanyu; Peskin, Charles S
2015-01-01
In this work, we consider a stochastic oscillator described by a discrete-state continuous-time Markov chain, in which the states are arranged in a circle, and there is a constant probability per unit time of jumping from one state to the next in a specified direction around the circle. At each of a sequence of equally spaced times, the oscillator has a specified probability of being reset to a particular state. The focus of this work is the entrainment of the oscillator by this periodic but stochastic stimulus. We consider a distinguished limit, in which (i) the number of states of the oscillator approaches infinity, as does the probability per unit time of jumping from one state to the next, so that the natural mean period of the oscillator remains constant, (ii) the resetting probability approaches zero, and (iii) the period of the resetting signal approaches a multiple, by a ratio of small integers, of the natural mean period of the oscillator. In this distinguished limit, we use analytic and numerical methods to study the extent to which entrainment occurs.
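The setup above can be explored numerically. The following is a minimal sketch of the circular jump process with periodic stochastic resetting; all parameter values (K states, jump rate K/T, reset probability per stimulus) are assumptions of this sketch rather than values taken from the paper:

```python
import random

def simulate(K=100, T=1.0, reset_period=1.0, p_reset=0.05,
             n_resets=2000, seed=0):
    """Jump process on K states arranged in a circle.

    Jumps occur at rate K / T, so the mean period of one full
    revolution is T.  At each multiple of reset_period the
    oscillator is reset to state 0 with probability p_reset.
    Returns the phase (state / K) observed just before each reset
    time.
    """
    rng = random.Random(seed)
    lam = K / T                       # jump rate per unit time
    state, t = 0, 0.0
    t_next_reset = reset_period
    phases = []
    while len(phases) < n_resets:
        dt = rng.expovariate(lam)     # time to the next jump
        if t + dt >= t_next_reset:    # a reset time arrives first
            phases.append(state / K)  # phase in [0, 1)
            if rng.random() < p_reset:
                state = 0             # stochastic resetting
            t = t_next_reset
            t_next_reset += reset_period
        else:                         # the jump arrives first
            t += dt
            state = (state + 1) % K
    return phases

# With the reset period equal to the natural mean period, resets
# pull the phase distribution toward 0 (entrainment); with
# p_reset = 0 the phase diffuses freely around the circle.
phases = simulate(p_reset=0.2)
print(min(phases), max(phases))
```

By the memorylessness of the exponential clock, a jump scheduled past the next reset time can simply be discarded and redrawn, which is what the branch above does.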
Convergence of Transition Probability Matrix in CLV-Markov Models
NASA Astrophysics Data System (ADS)
Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.
2018-04-01
A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its long-run behavior, which is derived from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n goes to infinity. Mathematically, this means finding the limit of the n-th power of the transition matrix as n approaches infinity. This limit is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probabilities of transitions between states far in the future. The method usually used to find this limit is the limiting-distribution process. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept from linear algebra: diagonalizing the matrix. This method has a higher level of complexity because it requires diagonalizing the matrix, but it has the advantage of yielding a general closed form for the n-th power of the transition probability matrix, which makes it possible to inspect the transition matrix before it reaches stationarity. Example cases are taken from a CLV model using an MCM, called the CLV-Markov model, and several transition probability matrices are examined to find their convergent forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with the convergence obtained by the commonly used limiting-distribution method.
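The diagonalization approach can be sketched in a few lines of linear algebra. The 3-state transition matrix below is an illustrative example, not one of the paper's CLV matrices: writing P = V D V^(-1) gives P^n = V D^n V^(-1) in closed form for any n.

```python
import numpy as np

# An illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Diagonalize: P = V D V^{-1}, so P^n = V D^n V^{-1} in closed form.
eigvals, V = np.linalg.eig(P)
Vinv = np.linalg.inv(V)

def P_power(n):
    """n-th power of P from the diagonalization: V @ diag(d**n) @ Vinv."""
    return (V * eigvals**n) @ Vinv

# All eigenvalues except the Perron eigenvalue 1 have modulus < 1,
# so P^n converges to a matrix with identical rows: each row is the
# stationary distribution.
limit = P_power(50).real
print(limit)
```

The closed form lets one inspect P^n for any finite n before stationarity, which is exactly the advantage the abstract describes.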
Almost Perfect Teleportation Using 4-Partite Entangled States
NASA Astrophysics Data System (ADS)
Prakash, H.; Chandra, N.; Prakash, R.; Shivani
In a recent paper, N. Ba An (Phys. Rev. A 68, 022321 (2003)) proposed a scheme to teleport a single-particle state that is a superposition of the coherent states |α> and |-α>, using a 4-partite state, a beam splitter, and phase shifters, and concluded that the probability of successful teleportation is only 1/4 in the limit |α| → 0 and 1/2 in the limit |α| → ∞. In this paper, we modify this scheme and find that almost perfect success can be achieved if |α|2 is appreciable. For example, for |α|2 = 5, the minimum average fidelity of teleportation, i.e., the minimum over arbitrarily chosen information states of the sum of the products of the probability of occurrence of each case and the corresponding fidelity, is 0.9999.
Probability distributions for Markov chain based quantum walks
NASA Astrophysics Data System (ADS)
Balu, Radhakrishnan; Liu, Chaobin; Venegas-Andraca, Salvador E.
2018-01-01
We analyze the probability distributions of the quantum walks induced from Markov chains by Szegedy (2004). The first part of this paper is devoted to the quantum walks induced from finite-state Markov chains. It is shown that the probability distribution on the states of the underlying Markov chain is always convergent in the Cesàro sense. In particular, we deduce that the limiting distribution is uniform if the transition matrix is symmetric. In the case of a non-symmetric Markov chain, we exemplify that the limiting distribution of the quantum walk is not necessarily identical with the stationary distribution of the underlying irreducible Markov chain. The Szegedy scheme can be extended to infinite-state Markov chains (random walks). In the second part, we formulate the quantum walk induced from a lazy random walk on the line. We then obtain the weak limit of the quantum walk. It is noted that this quantum walk appears to spread faster than its counterpart, the quantum walk on the line driven by the Grover coin discussed in the literature. The paper closes with an outlook on possible future directions.
Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi
2003-01-13
We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. We then present what is, to our knowledge, the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of the twin beams, 0.62+/-0.02, is less than the standard quantum limit of unity, indicating inference with a precision better than that achievable with separable states. The measured photocurrent variance exhibits a quantum correlation of as much as -4.9+/-0.2 dB between the signal and the idler.
Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States
NASA Astrophysics Data System (ADS)
de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M.
2018-03-01
Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
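The Gaussian long-time limit on the infinite line is easy to check with a minimal simulation. The parameter values below are illustrative choices for this sketch; the paper's exact distributions are not reproduced here:

```python
import random

def rtp_position(t_final, v=1.0, gamma=1.0, rng=random):
    """Position at time t_final of a free run-and-tumble particle
    that moves at speed v and reverses direction at rate gamma."""
    x, t = 0.0, 0.0
    sigma = rng.choice((-1.0, 1.0))   # initial direction
    while True:
        dt = rng.expovariate(gamma)   # time to the next tumble
        if t + dt >= t_final:
            return x + sigma * v * (t_final - t)
        x += sigma * v * dt
        t += dt
        sigma = -sigma                # tumble: reverse direction

random.seed(1)
t = 20.0
samples = [rtp_position(t) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# For gamma * t >> 1 the distribution is close to a Gaussian with
# variance ~ v**2 * t / gamma (effective diffusivity v**2 / (2 * gamma)).
print(mean, var)
```

At short times (gamma * t of order 1) the same sampler instead shows the ballistic peaks at x = ±v t that underlie the multi-modal forms mentioned above.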
A methodology for stochastic analysis of share prices as Markov chains with finite states.
Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey
2014-01-01
Price volatilities make stock investments risky, leaving investors in a critical position when decisions must be made under uncertainty. To improve investors' confidence in evaluating exchange markets without resorting to time-series methodology, we specify equity price changes as a stochastic process assumed to possess Markov dependency, with state transition probability matrices following the identified state space (i.e. decrease, stable, or increase). We establish that the identified states communicate and that the chains are aperiodic and ergodic, and thus possess limiting distributions. We develop a methodology for determining the expected mean return time for stock price increases and establish criteria for improving investment decisions based on the highest transition probabilities, the lowest mean return time, and the highest limiting distributions. We further develop an R algorithm implementing the methodology. The methodology is applied to selected equities from Ghana Stock Exchange weekly trading data.
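The pipeline described above (estimate a three-state transition matrix, find the limiting distribution, compute mean return times) can be sketched as follows. The state sequence is invented toy data, not Ghana Stock Exchange data, and the paper's own implementation is in R:

```python
import numpy as np

# Toy sequence of weekly price-change states: 0 = decrease,
# 1 = stable, 2 = increase (invented data, not real prices).
states = [0, 2, 2, 1, 0, 2, 1, 2, 2, 0, 1,
          2, 0, 0, 2, 1, 2, 2, 1, 0, 2, 2]

# Maximum-likelihood estimate of the transition probability matrix.
counts = np.zeros((3, 3))
for a, b in zip(states, states[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Limiting distribution: the left eigenvector of P for eigenvalue 1.
eigvals, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

# Expected mean return (recurrence) time of state i is 1 / pi[i].
mean_return = 1.0 / pi
print("limiting distribution:", pi)
print("mean return times:", mean_return)
```

The decision criteria in the abstract then amount to comparing rows of P, the entries of pi, and the mean return times across equities.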
Competing of Sznajd and Voter Dynamics in the Watts-Strogatz Network
NASA Astrophysics Data System (ADS)
Rybak, M.; Kułakowski, K.
We investigate the Watts-Strogatz network, whose clustering coefficient C depends on the rewiring probability. The network is the arena of two opposing contact processes, in which nodes can be in two states, S or D. One of the processes is governed by the Sznajd dynamics: if two connected nodes are both in state D, all their neighbors become D with probability p. For the opposite process it is sufficient to have only one neighbor in state S; this transition occurs with probability 1. The concentration of S-nodes changes abruptly at a given value of the probability p. The result is that for small p, the activation of S-nodes prevails in clusterized networks. This result is explained by comparing two limiting cases: the Watts-Strogatz network without rewiring, where C=0.5, and the Bethe lattice, where C=0.
THE MATHEMATICAL ANALYSIS OF A SIMPLE DUEL
The principles and techniques of simple Markov processes are used to analyze a simple duel and to determine the limiting state probabilities (i.e., the probabilities of occurrence of the various possible outcomes of the duel). The duel is one in which A fires at B at a rate of r_A shots per minute
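Under the simplest reading of such a duel, in which every shot is lethal, the limiting state probabilities reduce to a race between two exponential clocks. The sketch below makes that lethality assumption explicit; the original report may well model per-shot hit probabilities differently:

```python
from fractions import Fraction

def duel_outcome_probabilities(r_a, r_b):
    """Limiting state probabilities of the duel, assuming every
    shot is lethal: A fires at Poisson rate r_a, B at rate r_b,
    and whoever fires first wins.  The race between the two
    exponential clocks gives the absorption probabilities."""
    total = r_a + r_b
    return {"A wins": Fraction(r_a, total), "B wins": Fraction(r_b, total)}

print(duel_outcome_probabilities(3, 1))
```

With per-shot hit probabilities p_A and p_B, the same race argument applies to the effective lethal-hit rates r_A * p_A and r_B * p_B.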
Control and instanton trajectories for random transitions in turbulent flows
NASA Astrophysics Data System (ADS)
Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg
2011-12-01
Many turbulent systems exhibit random switches between qualitatively different attractors. The transition between these bistable states is often an extremely rare event that cannot be computed through direct numerical simulation (DNS) due to complexity limitations. We present results for the calculation of instanton trajectories (a control problem) between non-equilibrium stationary states (attractors) in the 2D stochastic Navier-Stokes equations. By representing the transition probability between two states using a path-integral formulation, we can compute the most probable trajectory (instanton) joining two non-equilibrium stationary states. Technically, this is equivalent to the minimization of an action, which can be related to a fluid-mechanics control problem.
Dynamics of a Landau-Zener non-dissipative system with fluctuating energy levels
NASA Astrophysics Data System (ADS)
Fai, L. C.; Diffo, J. T.; Ateuafack, M. E.; Tchoffo, M.; Fouokeng, G. C.
2014-12-01
This paper considers a Landau-Zener (two-level) system influenced by three-dimensional Gaussian and non-Gaussian coloured noise and finds a general form of the time-dependent diabatic quantum bit (qubit) flip transition probabilities in the fast, intermediate, and slow noise limits. The qubit flip probability is observed to mimic, for low-frequency noise, that of the standard LZ problem, and is also observed to be a measure of the quantum coherence of states. The transition probability is observed to be tailored by non-Gaussian low-frequency noise and otherwise by Gaussian low-frequency coloured noise. The intermediate and fast noise limits are observed to alter the memory of the system in time and are found to improve and control quantum information processing.
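As a noise-free baseline for the low-frequency regime mentioned above, the standard Landau-Zener flip probability can be computed directly. One common convention for the prefactor is assumed here, and the paper's noisy generalization is not reproduced:

```python
import math

def lz_flip_probability(coupling, sweep_rate, hbar=1.0):
    """Noise-free Landau-Zener diabatic transition probability,
    P = exp(-2 * pi * coupling**2 / (hbar * sweep_rate)), where
    `coupling` is the off-diagonal matrix element and `sweep_rate`
    is the rate of change of the diabatic energy gap (one common
    convention; prefactors vary between texts)."""
    return math.exp(-2.0 * math.pi * coupling ** 2 / (hbar * sweep_rate))

# Fast sweep: the system stays on the diabatic level (P -> 1);
# slow sweep: adiabatic following (P -> 0).
print(lz_flip_probability(0.1, 100.0))
print(lz_flip_probability(0.1, 0.01))
```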
NASA Astrophysics Data System (ADS)
Romeu, João Gabriel Farias; Belinassi, Antonio Ricardo; Ornellas, Fernando R.
2018-05-01
A manifold of electronic states of ScS was investigated with special emphasis on the low-lying states X 2Σ+, A′ 2Δ, A 2Π, and B 2Σ+. For all states, potential energy curves were constructed covering internuclear distances from the equilibrium region through the dissociation limit. For these states, besides providing the most accurate set of theoretical spectroscopic parameters to date, we have also computed dipole moment functions, transition dipole moment functions, the associated radiative transition probabilities, and radiative lifetimes. For the experimentally known states, X 2Σ+, A 2Π, and B 2Σ+, our results significantly expand present knowledge of the energetic profile of these states, providing a new perspective for understanding the limited spectral data known so far for this species. For the new, as yet experimentally unobserved state, A′ 2Δ, our results are sufficiently reliable and accurate to guide spectroscopists in further studies of this species.
The probability and severity of decompression sickness
Hada, Ethan A.; Vann, Richard D.; Denoble, Petar J.
2017-01-01
Decompression sickness (DCS), which is caused by inert gas bubbles in tissues, is an injury of concern for scuba divers, compressed air workers, astronauts, and aviators. Case reports for 3322 air and N2-O2 dives, resulting in 190 DCS events, were retrospectively analyzed and the outcomes were scored as (1) serious neurological, (2) cardiopulmonary, (3) mild neurological, (4) pain, (5) lymphatic or skin, and (6) constitutional or nonspecific manifestations. Following standard U.S. Navy medical definitions, the data were grouped into mild (Type I: manifestations 4-6) and serious (Type II: manifestations 1-3). Additionally, we considered an alternative grouping of mild (Type A: manifestations 3-6) and serious (Type B: manifestations 1 and 2). The current U.S. Navy guidance allows for a 2% probability of mild DCS and a 0.1% probability of serious DCS. We developed a hierarchical trinomial (3-state) probabilistic DCS model that simultaneously predicts the probability of mild and serious DCS given a dive exposure. Both the Type I/II and Type A/B discriminations of mild and serious DCS resulted in a highly significant (p << 0.01) improvement in trinomial model fit over the binomial (2-state) model. With the Type I/II definition, we found that the predicted probability of mild DCS resulted in a longer allowable bottom time for the same 2% limit. However, for the 0.1% serious-DCS limit, we found a vastly decreased allowable bottom time for all dive depths. If the Type A/B scoring was assigned to outcome severity, the no-decompression limits (NDLs) for air dives were still controlled by the acceptable serious-DCS risk limit rather than the acceptable mild-DCS risk limit; however, in this case, longer NDLs were allowed than with the Type I/II scoring. The trinomial model's mild and serious probabilities agree reasonably well with the current air NDLs only with the Type A/B scoring and when a 0.2% risk of serious DCS is allowed. PMID:28296928
NASA Astrophysics Data System (ADS)
Chung, Hye Won; Guha, Saikat; Zheng, Lizhong
2017-07-01
We study the problem of designing optical receivers to discriminate between multiple coherent states using coherent processing receivers—i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection—which was shown by Dolinar to achieve the minimum error probability in discriminating any two coherent states. We first derive and reinterpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate M coherent states, each of which could now be a codeword, i.e., a sequence of N coherent states, each drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon number regime and compare it with the capacity achievable with direct detection and the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We show compelling evidence that despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (either in error probability or mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely long codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and direct detection, with no feedback within the receiver.
Multiple-copy state discrimination: Thinking globally, acting locally
NASA Astrophysics Data System (ADS)
Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.
2011-05-01
We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
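For intuition about the collective-measurement benchmark against which the local schemes are compared, the Helstrom bound for two equal-prior pure states with N copies has a simple closed form. The paper's mixed-state analysis is more involved, so this is only an illustrative special case:

```python
import math

def helstrom_error(overlap, n_copies):
    """Minimum error probability (Helstrom bound) for equal-prior
    discrimination of two pure states whose inner product has
    modulus `overlap`, measuring all n_copies copies collectively."""
    fidelity = overlap ** (2 * n_copies)   # squared overlap of the N-copy states
    return 0.5 * (1.0 - math.sqrt(1.0 - fidelity))

for n in (1, 2, 5, 10):
    print(n, helstrom_error(0.9, n))
```

The error decays exponentially in the number of copies, which is the asymptotic scaling against which the fixed and adaptive local schemes are judged.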
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation in which the underlying computer models are extremely expensive; in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
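A toy version of the idea, a cheap Gaussian process surrogate of an expensive limit-state function used to estimate a failure probability, can be sketched with a hand-rolled zero-mean GP. The limit state g(x) = 3 - x, the RBF kernel, and all parameter values are assumptions of this sketch, not the paper's examples or its design strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                        # the expensive "high-fidelity" limit state
    return 3.0 - x               # failure event: g(x) < 0

def rbf(a, b, ls=1.5):           # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Fit a zero-mean GP surrogate on a handful of "simulations".
X = np.linspace(-5.0, 5.0, 11)
y = g(X)
K = rbf(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(x):                # GP posterior mean
    return rbf(np.asarray(x), X) @ alpha

# Estimate the failure probability with the cheap surrogate.
samples = rng.standard_normal(200_000)   # x ~ N(0, 1)
pf = np.mean(surrogate(samples) < 0.0)
print(pf)   # close to the exact value Phi(-3) ~ 1.35e-3
```

The paper's contribution is in choosing the design points X adaptively near the limit state rather than on a fixed grid as done here.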
NASA Astrophysics Data System (ADS)
Sheidaii, Mohammad Reza; TahamouliRoudsari, Mehrzad; Gordini, Mehrdad
2016-06-01
In knee-braced frames, the braces are attached to a knee element rather than to the intersection of beams and columns. This bracing system is widely used and preferred over other commonly used systems for reasons such as providing lateral stiffness together with adequate ductility, concentrating damage in secondary (knee) elements, and the convenience of repairing and replacing these elements after an earthquake. The lateral stiffness of this system is supplied by the bracing member, while the ductility of the frame is supplied through bending or shear yielding of the knee member. In this paper, the nonlinear seismic behavior of knee-braced frame systems is investigated using incremental dynamic analysis (IDA), and the effects of the number of stories in a building and of the length and moment of inertia of the knee member on the seismic behavior, elastic stiffness, ductility, and probability of failure of these systems are determined. In the incremental dynamic analysis, after plotting the IDA diagrams for the accelerograms, the collapse diagrams at the limit states are determined. These diagrams show that, for a constant knee length, reducing the moment of inertia increases the probability of collapse at the limit states, and likewise, for a constant knee moment of inertia, increasing the length increases the probability of collapse at the limit states.
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluating the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes
NASA Astrophysics Data System (ADS)
Zhai, Y.; Liu, J.; Liu, L.
2018-04-01
Aiming at the lack of scientific and reasonable judgment of vehicle delivery scale and the insufficient optimization of scheduling decisions, and based on features of bike-sharing usage, this paper analyses the applicability of a discrete-time, discrete-state Markov chain and proves the chain to be irreducible, aperiodic, and positive recurrent. From this analysis, the paper concludes that the limiting (steady-state) probability distribution of the bike-sharing Markov chain exists and is independent of the initial probability distribution. The paper then analyses the difficulty of estimating the transition probability matrix parameters and of solving the system of linear equations in the traditional solution algorithm for the bike-sharing Markov chain. To improve feasibility, this paper proposes a "virtual two-node vehicle scale solution" algorithm, which treats all nodes other than the node to be solved as a single virtual node, and derives the transition probability matrix, the steady-state linear equations, and the computational methods for the steady-state scale, steady-state arrival time, and scheduling decision of the node to be solved. Finally, the paper evaluates the rationality and accuracy of the steady-state probabilities of the proposed algorithm by comparison with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible because it lowers the computational difficulty and reduces the number of statistics required, which will help bike-sharing companies optimize the scale and scheduling of nodes.
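The lumping idea behind a "virtual two-node" reduction can be illustrated with the closed-form steady state of a two-state chain. The transition probabilities below are invented for illustration, and the correspondence to the paper's construction is only loose:

```python
from fractions import Fraction

def two_node_steady_state(p_out, p_in):
    """Steady state of the lumped 2-state chain (node, virtual node).

    p_out: one-step probability that a bike at the node leaves it;
    p_in:  one-step probability that a bike at the aggregated
           "virtual" node moves to the node.
    """
    denom = p_out + p_in
    return p_in / denom, p_out / denom

pi_node, pi_virtual = two_node_steady_state(Fraction(1, 5), Fraction(1, 10))
print(pi_node, pi_virtual)   # 1/3 and 2/3
```

Reducing each node's problem to a 2x2 chain is what removes the need to estimate and solve the full transition matrix over all rental nodes at once.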
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Two approximations arise in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, the limit state is again approximated.
The accuracy and efficiency of the approximations make the search process quite practical for analysis-intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different approximations, including the higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of the structural reliability analysis and design. Broadly, it consists of mathematical foundations, methods and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard text books. Probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for a general application in engineering analysis. Various forms of function representations and the latest developments in nonlinear adaptive approximations are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3. First, the definitions of the safety index and the most probable point of failure are introduced. Efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the probability of failure prediction is presented using first-order, second-order and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes for improving the structural reliability. The report also contains several appendices on probability parameters.
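The FORM idea described above (safety index beta, failure probability Phi(-beta)) can be sketched for a linear limit state in standard normal space, where FORM is exact. The coefficients below are made up for the example; it is a sketch of the concept, not the report's method.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Illustrative linear limit state g(u) = b + a.u, with u standard normal;
# failure is the event g(u) < 0.
a = np.array([-1.0, -2.0])
b = 6.0

beta = b / np.linalg.norm(a)   # Hasofer-Lind safety index (distance to the MPP)
pf_form = norm_cdf(-beta)      # first-order failure probability, exact here

# Crude Monte Carlo check of P[g(U) < 0]
rng = np.random.default_rng(0)
u = rng.standard_normal((200_000, 2))
pf_mc = np.mean(b + u @ a < 0.0)
print(beta, pf_form, pf_mc)
```

For nonlinear limit states the design point must be found iteratively and FORM/SORM are only local approximations, which is exactly the repetitive expense the report's function approximations aim to reduce.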
28 CFR 33.23 - Limitations on fund use.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Assistance, Bureau of Justice Statistics, state or local agencies, and other public or private organizations, have been demonstrated to offer a low probability of improving the functioning of the criminal justice...
Tunnel ionization of highly excited atoms in a noncoherent laser radiation field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krainov, V.P.; Todirashku, S.S.
1982-10-01
A theory is developed of the ionization of highly excited atomic states by a low-frequency field of noncoherent laser radiation with a large number of modes. Analytic formulas are obtained for the probability of tunnel ionization in such a field. An analysis is made of the case of the hydrogen atom, when the parabolic quantum numbers are sufficiently good in the low-frequency limit, as well as of the case of highly excited states of complex atoms, when these states are characterized by a definite orbital momentum and parity. It is concluded that the statistical factor representing the ratio of the probability in a stochastic field to the probability in a monochromatic field decreases, compared with the case of a short-range potential, if the "Coulomb tail" is included. It is shown that at a given field intensity the statistical factor decreases with increasing principal quantum number of the state being ionized.
Entropy from State Probabilities: Hydration Entropy of Cations
2013-01-01
Entropy is an important energetic quantity determining the progression of chemical processes. We propose a new approach to obtain hydration entropy directly from probability density functions in state space. We demonstrate the validity of our approach for a series of cations in aqueous solution. Extensive validation of simulation results was performed. Our approach does not make prior assumptions about the shape of the potential energy landscape and is capable of calculating accurate hydration entropy values. Sampling times in the low nanosecond range are sufficient for the investigated ionic systems. Although the presented strategy is at the moment limited to systems for which a scalar order parameter can be derived, this is not a principal limitation of the method. The strategy presented is applicable to any chemical system where sufficient sampling of conformational space is accessible, for example, by computer simulations. PMID:23651109
Resource-efficient generation of linear cluster states by linear optics with postselection
Uskov, D. B.; Alsing, P. M.; Fanto, M. L.; ...
2015-01-30
Here we report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field, as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons, with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^(n-1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^(m-1).
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
Energy Distributions in Small Populations: Pascal versus Boltzmann
ERIC Educational Resources Information Center
Kugel, Roger W.; Weiner, Paul A.
2010-01-01
The theoretical distributions of a limited amount of energy among small numbers of particles with discrete, evenly-spaced quantum levels are examined systematically. The average populations of energy states reveal the pattern of Pascal's triangle. An exact formula for the probability that a particle will be in any given energy state is derived.…
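The counting exercise the abstract describes can be reproduced by brute force: enumerate every way of distributing q quanta among n particles, weight each microstate equally, and read off the probability that a given particle holds k quanta. The closed form below is a standard combinatorial identity consistent with the enumeration, not a formula quoted from the paper.

```python
from itertools import product
from math import comb
from collections import Counter

def energy_state_probs(n, q):
    """P(particle 1 holds k quanta), enumerating all equally likely
    microstates that distribute q quanta among n particles."""
    counts = Counter()
    total = 0
    for occ in product(range(q + 1), repeat=n):
        if sum(occ) == q:
            counts[occ[0]] += 1
            total += 1
    return {k: counts[k] / total for k in range(q + 1)}

def p_closed(n, q, k):
    """Closed form: P(k) = C(q-k+n-2, n-2) / C(q+n-1, n-1)."""
    return comb(q - k + n - 2, n - 2) / comb(q + n - 1, n - 1)

probs = energy_state_probs(4, 6)
print(probs)
print([p_closed(4, 6, k) for k in range(7)])
```

The microstate counts for successive k are binomial coefficients read off a diagonal of Pascal's triangle, which is the pattern the abstract refers to; as n and q grow, the distribution approaches the Boltzmann exponential.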
Absorbing multicultural states in the Axelrod model
NASA Astrophysics Data System (ADS)
Vazquez, Federico; Redner, Sidney
2005-03-01
We determine the ultimate fate of a limit of the Axelrod model that consists of a population of leftists, centrists, and rightists. In an elemental interaction between agents, a centrist and a leftist can both become centrists or both become leftists with equal rates (similarly for a centrist and a rightist), but leftists and rightists do not interact. This interaction is applied repeatedly until the system can no longer evolve. The constraint between extremists can lead to a frustrated final state where the system consists of only leftists and rightists. In the mean field limit, we can view the evolution of the system as the motion of a random walk in the 3-dimensional space whose coordinates correspond to the density of each species. We find the exact final state probabilities and the time to reach consensus by solving for the first-passage probability of the random walk to the corresponding absorbing boundaries. The extension to a larger number of states will be discussed. This approach is a first step towards the analytic solution of Axelrod-like models.
Astrophysics: quark matter in compact stars?
Alford, M; Blaschke, D; Drago, A; Klähn, T; Pagliara, G; Schaffner-Bielich, J
2007-01-18
In a theoretical interpretation of observational data from the neutron star EXO 0748-676, Ozel concludes that quark matter probably does not exist in the centre of neutron stars. However, this conclusion is based on a limited set of possible equations of state for quark matter. Here we compare Ozel's observational limits with predictions based on a more comprehensive set of proposed quark-matter equations of state from the literature, and conclude that the presence of quark matter in EXO 0748-676 is not ruled out.
Insurance premiums and insurance coverage of near-poor children.
Hadley, Jack; Reschovsky, James D; Cunningham, Peter; Kenney, Genevieve; Dubay, Lisa
States increasingly are using premiums for near-poor children in their public insurance programs (Medicaid/SCHIP) to limit private insurance crowd-out and constrain program costs. Using national data from four rounds of the Community Tracking Study Household Surveys spanning the seven years from 1996 to 2003, this study estimates a multinomial logistic regression model examining how public and private insurance premiums affect insurance coverage outcomes (Medicaid/SCHIP coverage, private coverage, and no coverage). Higher public premiums are significantly associated with a lower probability of public coverage and higher probabilities of private coverage and uninsurance; higher private premiums are significantly related to a lower probability of private coverage and higher probabilities of public coverage and uninsurance. The results imply that uninsurance rates will rise if both public and private premiums increase, and suggest that states that impose or increase public insurance premiums for near-poor children will succeed in discouraging crowd-out of private insurance, but at the expense of higher rates of uninsurance. Sustained increases in private insurance premiums will continue to create enrollment pressures on state insurance programs for children.
NASA Astrophysics Data System (ADS)
Beheshti Aval, Seyed Bahram; Kouhestani, Hamed Sadegh; Mottaghi, Lida
2017-07-01
This study investigates the efficiency of two types of rehabilitation methods based on economic justification that can lead to logical decision making between the retrofitting schemes. Among various rehabilitation methods, concentric chevron bracing (CCB) and cylindrical friction dampers (CFD) were selected. The performance assessment procedure for the frames is divided into two distinct phases. First, the limit state probabilities of the structures before and after rehabilitation are investigated. In the second phase, the seismic risk of the structures in terms of life safety and financial losses (decision variables) is evaluated using the recently published FEMA P58 methodology. The results show that the proposed retrofitting methods improve the serviceability and life safety performance levels of steel and RC structures at different rates when subjected to earthquake loads. Moreover, these procedures reveal that financial losses are greatly decreased, and this reduction was more pronounced with CFD than with CCB. Although both retrofitting methods reduced damage state probabilities, incorporation of a site-specific seismic hazard curve to evaluate the mean annual occurrence frequency at the collapse prevention limit state produced unexpected results. Contrary to CFD, the collapse probability of the structures retrofitted with CCB increased when compared with the primary structures.
Most probable mixing state of aerosols in Delhi NCR, northern India
NASA Astrophysics Data System (ADS)
Srivastava, Parul; Dey, Sagnik; Srivastava, Atul Kumar; Singh, Sachchidanand; Tiwari, Suresh
2018-02-01
Unknown mixing state is one of the major sources of uncertainty in estimating aerosol direct radiative forcing (DRF). Aerosol DRF in India is usually reported for external mixing and any deviation from this would lead to high bias and error. Limited information on aerosol composition hinders in resolving this issue in India. Here we use two years of aerosol chemical composition data measured at megacity Delhi to examine the most probable aerosol mixing state by comparing the simulated clear-sky downward surface flux with the measured flux. We consider external, internal, and four combinations of core-shell (black carbon, BC over dust; water-soluble, WS over dust; WS over water-insoluble, WINS and BC over WINS) mixing. Our analysis reveals that choice of external mixing (usually considered in satellite retrievals and climate models) seems reasonable in Delhi only in the pre-monsoon (Mar-Jun) season. During the winter (Dec-Feb) and monsoon (Jul-Sep) seasons, 'WS coating over dust' externally mixed with BC and WINS appears to be the most probable mixing state; while 'WS coating over WINS' externally mixed with BC and dust seems to be the most probable mixing state in the post-monsoon (Oct-Nov) season. Mean seasonal TOA (surface) aerosol DRF for the most probable mixing states are 4.4 ± 3.9 (-25.9 ± 3.9), -16.3 ± 5.7 (-42.4 ± 10.5), 13.6 ± 11.4 (-76.6 ± 16.6) and -5.4 ± 7.7 (-80.0 ± 7.2) W m^-2, respectively, in the pre-monsoon, monsoon, post-monsoon and winter seasons. Our results highlight the importance of realistic mixing state treatment in estimating aerosol DRF to aid in policy making to combat climate change.
Markov chains: computing limit existence and approximations with DNA.
Cardona, M; Colomer, M A; Conde, J; Miret, J M; Miró, J; Zaragoza, A
2005-09-01
We present two algorithms to perform computations over Markov chains. The first one determines whether the sequence of powers of the transition matrix of a Markov chain converges or not to a limit matrix. If it does converge, the second algorithm enables us to estimate this limit. The combination of these algorithms allows the computation of a limit using DNA computing. In this sense, we have encoded the states and the transition probabilities using strands of DNA for generating paths of the Markov chain.
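The two computations the abstract describes (deciding whether the powers of a transition matrix converge, and estimating the limit when they do) have a straightforward silicon counterpart, sketched below; the DNA encoding itself is not reproduced, and the two example chains are made up for illustration.

```python
import numpy as np

def limit_of_powers(P, tol=1e-12, max_iter=10_000):
    """Return (converges, Q) where Q approximates lim P^n if the
    sequence of powers converges, by comparing successive powers."""
    Q = P.copy()
    for _ in range(max_iter):
        Q_next = Q @ P
        if np.max(np.abs(Q_next - Q)) < tol:
            return True, Q_next
        Q = Q_next
    return False, Q

# An aperiodic chain whose powers converge...
P_good = np.array([[0.9, 0.1],
                   [0.4, 0.6]])
# ...and a periodic chain whose powers oscillate and never converge.
P_bad = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

print(limit_of_powers(P_good))    # every row of the limit is the stationary law
print(limit_of_powers(P_bad)[0])  # False
```

For an irreducible aperiodic chain the limit matrix has identical rows equal to the stationary distribution, which is why generating enough sample paths (as in the DNA encoding) suffices to estimate it.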
Disordered configurations of the Glauber model in two-dimensional networks
NASA Astrophysics Data System (ADS)
Bačić, Iva; Franović, Igor; Perc, Matjaž
2017-12-01
We analyze the ordering efficiency and the structure of disordered configurations for the zero-temperature Glauber model on Watts-Strogatz networks obtained by rewiring 2D regular square lattices. In the small-world regime, the dynamics fails to reach the ordered state in the thermodynamic limit. Due to the interplay of the perturbed regular topology and the energy neutral stochastic state transitions, the stationary state consists of two intertwined domains, manifested as multiclustered states on the original lattice. Moreover, for intermediate rewiring probabilities, one finds an additional source of disorder due to the low connectivity degree, which gives rise to small isolated droplets of spins. We also examine the ordering process in paradigmatic two-layer networks with heterogeneous rewiring probabilities. Comparing the cases of a multiplex network and the corresponding network with random inter-layer connectivity, we demonstrate that the character of the final state qualitatively depends on the type of inter-layer connections.
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with the best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
Estimating transition probabilities in unmarked populations --entropy revisited
Cooch, E.G.; Link, W.A.
1999-01-01
The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
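The least-squares idea mentioned in the abstract, recovering a stationary transition matrix from aggregate state frequencies alone, can be sketched for a two-state (e.g. breeding/non-breeding) system. The chain and data below are synthetic and noise-free, so the recovery is exact; with real, noisy aggregate data this would only be an estimate.

```python
import numpy as np

# Synthetic truth: a stationary 2-state chain, observed only through the
# fraction of the population in each state at each occasion.
P_true = np.array([[0.7, 0.3],
                   [0.2, 0.8]])
x = np.array([0.9, 0.1])          # initial aggregate frequencies
X = [x]
for _ in range(6):                # seven occasions in total
    x = x @ P_true
    X.append(x)
X = np.array(X)

# Least-squares recovery from the aggregate flow equations:
# x_{t+1}[0] = x_t[0] p00 + x_t[1] p10, with p01 = 1 - p00, p11 = 1 - p10.
A = X[:-1]                        # rows (x_t[0], x_t[1])
b = X[1:, 0]
(p00, p10), *_ = np.linalg.lstsq(A, b, rcond=None)
P_hat = np.array([[p00, 1 - p00],
                  [p10, 1 - p10]])
print(P_hat)                      # matches P_true on noise-free data
```

With only two occasions the system is underdetermined, which is the regime where the entropy-maximization approach discussed in the paper has been proposed.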
Distribution of G concurrence of random pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappellini, Valerio; Sommers, Hans-Juergen; Zyczkowski, Karol
2006-12-15
The average entanglement of random pure states of an N×N composite system is analyzed. We compute the average value of the determinant D of the reduced state, which forms an entanglement monotone. Calculating higher moments of the determinant, we characterize the probability distribution P(D). Similar results are obtained for the rescaled Nth root of the determinant, called the G concurrence. We show that in the limit N → ∞ this quantity becomes concentrated at a single point G* = 1/e. The position of the concentration point changes if one considers an arbitrary N×K bipartite system, in the joint limit N, K → ∞ with K/N fixed.
Quantum displacement receiver for M-ary phase-shift-keyed coherent states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izumi, Shuro; Takeoka, Masahiro; Fujiwara, Mikio
2014-12-04
We propose quantum receivers for 3- and 4-ary phase-shift-keyed (PSK) coherent state signals to overcome the standard quantum limit (SQL). Our receiver, consisting of a displacement operation and on-off detectors with or without feedforward, provides an error probability performance beyond the SQL. We show feedforward operations can tolerate the requirement for the detector specifications.
Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model
NASA Astrophysics Data System (ADS)
Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.
2018-04-01
The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
Unambiguous quantum-state filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; CREST, Japan Science and Technology Corporation, Tokyo,
2003-07-01
In this paper, we consider a generalized measurement where one particular quantum signal is unambiguously extracted from a set of noncommutative quantum signals and the other signals are filtered out. Simple expressions for the maximum detection probability and its positive operator valued measure are derived. We apply such unambiguous quantum state filtering to evaluation of the sensing of decoherence channels. The bounds of the precision limit for a given quantum state of probes and possible device implementations are discussed.
The United States should forego a damage-limitation capability against China
NASA Astrophysics Data System (ADS)
Glaser, Charles L.
2017-11-01
Bottom Lines • THE KEY STRATEGIC NUCLEAR CHOICE. Whether to attempt to preserve its damage-limitation capability against China is the key strategic nuclear choice facing the United States. The answer is much less clear-cut than when the United States faced the Soviet Union during the Cold War. • FEASIBILITY OF DAMAGE LIMITATION. Although technology has advanced significantly over the past three decades, future military competition between the U.S. and Chinese forces will favor large-scale nuclear retaliation over significant damage limitation. • BENEFITS AND RISKS OF A DAMAGE-LIMITATION CAPABILITY. The benefits provided by a modest damage-limitation capability would be small, because the United States can meet its most important regional deterrent requirements without one. In comparison, the risks, which include an increased probability of accidental and unauthorized Chinese attacks, as well as strained U.S.—China relations, would be large. • FOREGO DAMAGE LIMITATION. These twin findings—the poor prospects for prevailing in the military competition, and the small benefits and likely overall decrease in U.S. security—call for a U.S. policy that foregoes efforts to preserve or enhance its damage-limitation capability.
NASA Astrophysics Data System (ADS)
Lukashov, S. S.; Poretsky, S. A.; Pravilov, A. M.; Khadikova, E. I.; Shevchenko, E. V.
2010-10-01
The first results of measurements and analysis of excitation spectra of the λ_lum = 3250 Å luminescence, corresponding to the I2(D0u+ → X0g+) transition, as well as of the luminescence at λ_lum = 3400 Å, where the I2(D'2g → A'2u and/or β1g → A1u) transitions occur, are presented. The spectra are observed after three-step, λ1 + λf + λ1 (λ1 = 5321-5508.2 Å, λf = 10644.0 Å) laser excitation of pure iodine vapour and I2 + Xe mixtures at room temperature via MI2 van der Waals complexes (M = I2, Xe) of the I2(0g+, 1u(bb)) valence states correlating with the third dissociation limit, I(2P1/2) + I(2P1/2) (I2(bb)). Luminescence spectra in the λ_lum = 2200-3500 Å spectral range are also analyzed. Strong luminescence from the I2(D) and, probably, I2(D' and β) states is observed. We discuss three alternative mechanisms of optical population of the ion-pair (IP) state. In our opinion, the mechanism involving the MI2 complexes is the most probable.
Enhanced Reverse Saturable Absorption and Optical Limiting in Heavy-Atom Substituted Phthalocyanines
NASA Technical Reports Server (NTRS)
Perry, J. W.; Mansour, K.; Marder, S. R.; Alvarez, D., Jr.; Perry, K. J.; Choong, I.
1994-01-01
The reverse saturable absorption and optical limiting response of metal phthalocyanines can be enhanced by using the heavy-atom effect. Phthalocyanines containing heavy metal atoms, such as In, Sn, and Pb, show nearly a factor of two enhancement in the ratio of effective excited-state to ground-state absorption cross sections compared to those containing lighter atoms, such as Al and Si. In an f/8 optical geometry, homogeneous solutions of heavy metal phthalocyanines at 30% linear transmission limit 8-ns, 532-nm laser pulses to ≤3 μJ (the energy for 50% probability of eye damage) for incident pulses up to 800 μJ.
Quantum return probability of a system of N non-interacting lattice fermions
NASA Astrophysics Data System (ADS)
Krapivsky, P. L.; Luck, J. M.; Mallick, K.
2018-02-01
We consider N non-interacting fermions performing continuous-time quantum walks on a one-dimensional lattice. The system is launched from a most compact configuration where the fermions occupy neighboring sites. We calculate exactly the quantum return probability (sometimes referred to as the Loschmidt echo) of observing the very same compact state at a later time t. Remarkably, this probability depends on the parity of the fermion number: it decays as a power of time for even N, while for odd N it exhibits periodic oscillations modulated by a decaying power law. The exponent also depends slightly on the parity of N, and is roughly half of what it would be in the continuum limit. We also consider the same problem, and obtain similar results, in the presence of an impenetrable wall at the origin constraining the particles to remain on the positive half-line. We derive closed-form expressions for the amplitudes of the power-law decay of the return probability in all cases. The key point in the derivation is the use of Mehta integrals, which are limiting cases of the Selberg integral.
Just War Theory Applied to US Policy in Pakistan and Yemen
2014-05-12
cause, proper authority, right intention, last resort, probability of success, proportionality, discrimination.
… theater of conflict opened up. Whether or not the United States made wise decisions in handicapping itself by limiting offensive operations to airstrikes …
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
NASA Astrophysics Data System (ADS)
Xing, Wei; Shi, Deheng; Zhang, Jicai; Sun, Jinfeng; Zhu, Zunlue
2018-05-01
This paper calculates the potential energy curves of 21 Λ-S and 42 Ω states, which arise from the first two dissociation asymptotes of the CO+ cation. The calculations are conducted using the complete active space self-consistent field method, which is followed by the valence internally contracted multireference configuration interaction approach with the Davidson correction. To improve the reliability and accuracy of the potential energy curves, core-valence correlation and scalar relativistic corrections, as well as the extrapolation of potential energies to the complete basis set limit are taken into account. The spectroscopic parameters and vibrational levels are determined. The spin-orbit coupling effect on the spectroscopic parameters and vibrational levels is evaluated. To better study the transition probabilities, the transition dipole moments are computed. The Franck-Condon factors and Einstein coefficients of some emissions are calculated. The radiative lifetimes are determined for a number of vibrational levels of several states. The transitions between different Λ-S states are evaluated. Spectroscopic routines for observing these states are proposed. The spectroscopic parameters, vibrational levels, transition dipole moments, and transition probabilities reported in this paper can be considered to be very reliable and can be used as guidelines for detecting these states in an appropriate spectroscopy experiment, especially for the states that were very difficult to observe or were not detected in previous experiments.
Coherent state amplification using frequency conversion and a single photon source
NASA Astrophysics Data System (ADS)
Kasture, Sachin
2017-11-01
Quantum state discrimination lies at the heart of quantum communication and quantum cryptography protocols. Quantum Key Distribution (QKD) using coherent states and homodyne detection has been shown to be a feasible method for quantum communication over long distances. However, this method is still limited by optical losses. Noiseless coherent state amplification has been proposed as a way to overcome this. Photon addition using stimulated spontaneous parametric down-conversion (SPDC) followed by photon subtraction has been used as a way to implement amplification. However, this process occurs with very low probability, which makes it very difficult to implement cascaded stages of amplification due to the dark count probability in the single photon detectors used to herald the addition and subtraction of single photons. We discuss a scheme using the χ^(2) and χ^(3) optical non-linearities and frequency conversion (sum and difference frequency generation) along with a single photon source to implement photon addition. Unlike the photon addition scheme using SPDC, this scheme allows us to tune the success probability at the cost of reduced amplification. The photon statistics of the converted field can be controlled using the power of the pump field and the interaction time.
Quantum interference of position and momentum: A particle propagation paradox
NASA Astrophysics Data System (ADS)
Hofmann, Holger F.
2017-08-01
Optimal simultaneous control of position and momentum can be achieved by maximizing the probabilities of finding their experimentally observed values within two well-defined intervals. The assumption that particles move along straight lines in free space can then be tested by deriving a lower limit for the probability of finding the particle in a corresponding spatial interval at any intermediate time t . Here, it is shown that this lower limit can be violated by quantum superpositions of states confined within the respective position and momentum intervals. These violations of the particle propagation inequality show that quantum mechanics changes the laws of motion at a fundamental level, providing a different perspective on causality relations and time evolution in quantum mechanics.
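The classical assumption being tested can be put in symbols (a sketch in our own notation, using the Fréchet inequality; the paper's exact intervals and bound may differ): straight-line motion maps the position and momentum intervals into an intermediate position interval, whose probability is then bounded below.

```latex
% free straight-line motion: x(t) = x(0) + p t / m
x(0)\in[x_1,x_2],\quad p\in[p_1,p_2]
\;\Longrightarrow\;
x(t)\in I_t=\Bigl[x_1+\frac{p_1 t}{m},\; x_2+\frac{p_2 t}{m}\Bigr],
% Fréchet bound on the joint probability:
P\bigl(x(t)\in I_t\bigr)\;\ge\;P\bigl(x(0)\in[x_1,x_2]\bigr)+P\bigl(p\in[p_1,p_2]\bigr)-1.
```

Quantum superpositions confined to the two intervals can violate this lower bound, which is the paradox the paper analyzes.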
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition and its appropriate modeling remain puzzling research issues. Cognition depends on how the brain behaves at a particular instant and on how it identifies and responds to a signal among the myriad noises present in the surroundings (external noise) as well as in the neurons themselves (internal noise). Thus it is not surprising to assume that this functionality involves various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway involving both types of uncertainty plays a major role in human cognition. For more than 200 years, mathematicians and philosophers have used probability theory to describe human cognition. Recently, in several experiments with human subjects, violations of traditional probability theory have been clearly revealed in many cases. The literature clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem a promising candidate for this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover, the stochasticity in the model arises from the unknown path or trajectory (a definite state of mind at each time point) that a person is following. To this end, a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment.
Thus a superposition state appears better able to represent the uncertainty, ambiguity, or conflict experienced by a person at any moment, suggesting that mental states follow quantum mechanics during perception and cognition of ambiguous figures.
NASA Astrophysics Data System (ADS)
Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.
2018-02-01
Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.
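As a toy illustration of the lower and upper failure-probability bounds that random set theory produces (a minimal sketch with invented numbers, using plain Monte Carlo rather than the paper's subset simulation): each sample is a focal element, here an interval, and an interval counts toward the lower bound only if it lies entirely in the failure domain, and toward the upper bound if it merely touches it.

```python
import random

def failure_bounds(n=100_000, threshold=2.0, half_width=0.3, seed=1):
    """Monte Carlo bounds on P(failure) when each sample is an interval
    (focal element) rather than a point -- a toy random-set analysis."""
    rng = random.Random(seed)
    n_all = n_some = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)          # sampled "center" of the focal element
        lo, hi = x - half_width, x + half_width
        # limit state: failure when x >= threshold
        if lo >= threshold:              # whole interval fails -> lower bound
            n_all += 1
        if hi >= threshold:              # interval touches failure -> upper bound
            n_some += 1
    return n_all / n, n_some / n

pf_lo, pf_hi = failure_bounds()
print(pf_lo, pf_hi)   # the point-valued P(X >= 2) ~ 0.0228 lies in between
```

The gap between the two bounds reflects the imprecision of the interval-valued inputs; with point-valued inputs (half_width = 0) the bounds coincide with the ordinary failure probability.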
Metric on the space of quantum states from relative entropy. Tomographic reconstruction
NASA Astrophysics Data System (ADS)
Man'ko, Vladimir I.; Marmo, Giuseppe; Ventriglia, Franco; Vitale, Patrizia
2017-08-01
In the framework of quantum information geometry, we derive, from the quantum relative Tsallis entropy, a family of quantum metrics on the space of full-rank, N-level quantum states, by means of a suitably defined coordinate-free differential calculus. The cases N=2 and N=3 are discussed in detail and notable limits are analyzed. A radial limit procedure is used to recover quantum metrics for lower-rank states, such as pure states. Using the tomographic picture of quantum mechanics, we obtain the Fisher-Rao metric for the space of quantum tomograms and derive a reconstruction formula for the quantum metric of density states from the tomographic one. A new inequality obtained for the probabilities of three spin-1/2 projections in three perpendicular directions is proposed to be checked in experiments with superconducting circuits.
NASA Astrophysics Data System (ADS)
Denning, Emil V.; Iles-Smith, Jake; McCutcheon, Dara P. S.; Mork, Jesper
2017-12-01
Multiphoton entangled states are a crucial resource for many applications in quantum information science. Semiconductor quantum dots offer a promising route to generate such states by mediating photon-photon correlations via a confined electron spin, but dephasing caused by the host nuclear spin environment typically limits coherence (and hence entanglement) between photons to the spin T2* time of a few nanoseconds. We propose a protocol for the deterministic generation of multiphoton entangled states that is inherently robust against the dominant slow fluctuations of the nuclear spin environment, meaning that coherence and entanglement are instead limited only by the much longer spin T2 time of microseconds. Unlike previous protocols, the present scheme allows the generation of polarization-encoded three-photon GHZ states and larger entangled states with very low error probability, without the need for spin echo or nuclear spin calming techniques.
Statistical analysis of general aviation VG-VGH data
NASA Technical Reports Server (NTRS)
Clay, L. E.; Dickey, R. L.; Moran, M. S.; Payauys, K. W.; Severyn, T. P.
1974-01-01
To represent the loads spectra of general aviation aircraft operating in the Continental United States, VG and VGH data collected since 1963 in eight operational categories were processed and analyzed. Adequacy of data sample and current operational categories, and parameter distributions required for valid data extrapolation were studied along with envelopes of equal probability of exceeding the normal load factor (n sub z) versus airspeed for gust and maneuver loads and the probability of exceeding current design maneuver, gust, and landing impact n sub z limits. The significant findings are included.
NASA Astrophysics Data System (ADS)
Datta, Nilanjana; Pautrat, Yan; Rouzé, Cambyse
2016-06-01
Quantum Stein's lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ⊗n or σ⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability αn of erroneously inferring the state to be σ, the probability βn of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second-order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and by Li. We extend this result to settings beyond i.i.d. Examples include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
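In symbols, the statements paraphrased above take the following standard forms from the literature (the paper's notation may differ; V denotes the quantum information variance and Φ the standard normal CDF):

```latex
% Stein exponent: for any fixed bound \alpha_n \le \varepsilon,
\lim_{n\to\infty} -\tfrac{1}{n}\log\beta_n
  = D(\rho\|\sigma)
  = \operatorname{Tr}\,\rho\,(\log\rho - \log\sigma).
% Second-order refinement (Tomamichel--Hayashi, Li):
-\log\beta_n = n\,D(\rho\|\sigma)
  + \sqrt{n\,V(\rho\|\sigma)}\;\Phi^{-1}(\varepsilon) + O(\log n).
```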
Can we expect to predict climate if we cannot shadow weather?
NASA Astrophysics Data System (ADS)
Smith, Leonard
2010-05-01
What limits our ability to predict (or project) useful statistics of future climate? And how might we quantify those limits? In the early 1960s, Ed Lorenz illustrated one constraint on point forecasts of the weather (chaos) while noting another (model imperfections). In the mid-sixties he went on to discuss climate prediction, noting that chaos, per se, need not limit accurate forecasts of averages and the distributions that define climate. In short, chaos might place draconian limits on what we can say about a particular summer day in 2010 (or 2040), but it need not limit our ability to make accurate and informative statements about the weather over this summer as a whole, or climate distributions of the 2040s. If not chaos, what limits our ability to produce decision-relevant probability distribution functions (PDFs)? Is this just a question of technology (raw computer power) and uncertain boundary conditions (emission scenarios)? Arguably, current model simulations of the Earth's climate are limited by model inadequacy: not that the initial or boundary conditions are unknown, but that state-of-the-art models would not yield decision-relevant probability distributions even if they were known. To place this statement in an empirically falsifiable format: in 2100, when the boundary conditions are known and computer power is (hopefully) sufficient to allow exhaustive exploration of today's state-of-the-art models, we will find that today's models do not admit a trajectory consistent with our knowledge of the state of the earth in 2009 that would prove of decision-support relevance at, say, 25 km, hourly resolution. In short: today's models cannot shadow the weather of this century even after the fact.
Restating this conjecture in a more positive frame: a 2100 historian of science will be able to determine the largest space and time scales on which 2009 models could have (i) produced trajectories plausibly consistent with the (by then) observed twenty-first century and (ii) produced probability distributions useful as such for decision support. As it will be some time until such conjectures can be refuted, how might we best advise decision makers of the detail (specifically, the space and time resolution of a quantity of interest as a function of lead time) at which it is rational to interpret model-based PDFs as decision-relevant probability distributions? Given the nonlinearities already incorporated in our models, how far into the future can one expect a simulation to get the temperature "right" given that the simulation has precipitation badly "wrong"? When can local temperature biases that melt model ice no longer be dismissed, or neglected by presenting model anomalies? At what lead times will feedbacks due to model inadequacies cause the 2007 model simulations to drift away from what today's basic science (and 2100 computer power) would suggest? How might one justify quantitative claims regarding "extreme events" (or NUMB weather)? Models are unlikely to forecast things they cannot shadow, or at least track. There is no constraint on rational scientists to take model distributions as their subjective probabilities, unless they believe the model is empirically adequate. How then are we to use today's simulations to inform today's decisions? Two approaches are considered. The first augments the model-based PDF with an explicit subjective probability of a "Big Surprise". The second is to look not for a PDF but, following Solvency II, to consider the risk from any event that cannot be ruled out at, say, the one-in-200 level.
The fact that neither approach provides the simplicity and apparent confidence of interpreting model-based PDFs as if they were objective probabilities does not contradict the claim that either might lead to better decision-making.
Ward, M H; Prince, J R; Stewart, P A; Zahm, S H
2001-11-01
Migrant and seasonal farmworkers are exposed to pesticides through their work with crops and livestock. Because workers are usually unaware of the pesticides applied, specific pesticide exposures cannot be determined by interviews. We conducted a study to determine the feasibility of identifying probable pesticide exposures based on work histories. The study included 162 farm workers in seven states. Interviewers obtained a lifetime work history including the crops, tasks, months, and locations worked. We investigated the availability of survey data on pesticide use for crops and livestock in the seven pilot states. Probabilities of use for pesticide types (herbicides, insecticides, fungicides, etc.) and specific chemicals were calculated from the available data for two farm workers. The work histories were chosen to illustrate how the quality of the pesticide use information varied across crops, states, and years. For most vegetable and fruit crops there were regional pesticide use data in the late 1970s, no data in the 1980s, and state-specific data every other year in the 1990s. Annual use surveys for cotton and potatoes began in the late 1980s. For a few crops, including asparagus, broccoli, lettuce, strawberries, plums, and Christmas trees, there were no federal data or data from the seven states before the 1990s. We conclude that identifying probable pesticide exposures is feasible in some locations. However, the lack of pesticide use data before the 1990s for many crops will limit the quality of historic exposure assessment for most workers.
Distortion outage minimization in Nakagami fading using limited feedback
NASA Astrophysics Data System (ADS)
Wang, Chih-Hong; Dey, Subhrakanti
2011-12-01
We focus on a decentralized estimation problem via a clustered wireless sensor network measuring a random Gaussian source where the clusterheads amplify and forward their received signals (from the intra-cluster sensors) over orthogonal independent stationary Nakagami fading channels to a remote fusion center that reconstructs an estimate of the original source. The objective of this paper is to design clusterhead transmit power allocation policies to minimize the distortion outage probability at the fusion center, subject to an expected sum transmit power constraint. In the case when full channel state information (CSI) is available at the clusterhead transmitters, the optimization problem can be shown to be convex and is solved exactly. When only rate-limited channel feedback is available, we design a number of computationally efficient sub-optimal power allocation algorithms to solve the associated non-convex optimization problem. We also derive an approximation for the diversity order of the distortion outage probability in the limit when the average transmission power goes to infinity. Numerical results illustrate that the sub-optimal power allocation algorithms perform very well and can close the outage probability gap between the constant power allocation (no CSI) and full CSI-based optimal power allocation with only 3-4 bits of channel feedback.
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
NASA Astrophysics Data System (ADS)
Gouissem, A.; Hamila, R.; Al-Dhahir, N.; Foufou, S.
2016-12-01
In this paper, we propose and investigate two novel techniques to perform multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm and outperform state-of-the-art techniques in terms of outage probability and computation complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feedback their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte Carlo simulations.
Coherent manipulation of a solid-state artificial atom with few photons.
Giesz, V; Somaschi, N; Hornecker, G; Grange, T; Reznychenko, B; De Santis, L; Demory, J; Gomez, C; Sagnes, I; Lemaître, A; Krebs, O; Lanzillotti-Kimura, N D; Lanco, L; Auffeves, A; Senellart, P
2016-06-17
In a quantum network based on atoms and photons, a single atom should control the photon state and, reciprocally, a single photon should allow the coherent manipulation of the atom. Both operations require controlling the atom's environment and developing efficient atom-photon interfaces, for instance by coupling the natural or artificial atom to cavities. So far, much attention has been drawn to manipulating the light field with atomic transitions, recently at the few-photon limit. Here we report on the reciprocal operation and demonstrate the coherent manipulation of an artificial atom by few photons. We study a quantum dot-cavity system with a record cooperativity of 13. Incident photons interact with the atom with probability 0.95, and the atom radiates back into the cavity mode with probability 0.96. Inversion of the atomic transition is achieved for 3.8 photons on average, showing that our artificial atom performs as if fully isolated from the solid-state environment.
Application of confocal laser microscopy for monitoring mesh implants in herniology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakharov, V P; Belokonev, V I; Bratchenko, I A
2011-04-30
The state of the surface of mesh implants and their encapsulation region in herniology is investigated by laser confocal microscopy. A correlation between the probability of developing relapses and the size and density of implant microdefects is shown experimentally. The applicability limits of differential reverse scattering for monitoring the post-operation state of the implant and adjacent tissues are established on the basis of model numerical experiments.
The impact of land ownership, firefighting, and reserve status on fire probability in California
NASA Astrophysics Data System (ADS)
Starrs, Carlin Frances; Butsic, Van; Stephens, Connor; Stewart, William
2018-03-01
The extent of wildfires in the western United States is increasing, but how land ownership, firefighting, and reserve status influence fire probability is unclear. California serves as a unique natural experiment to estimate the impact of these factors, as ownership is split equally between federal and non-federal landowners; there is a relatively large proportion of reserved lands where extractive uses are prohibited and fire suppression is limited; and land ownership and firefighting responsibility are purposefully not always aligned. Panel Poisson regression techniques and pre-regression matching were used to model changes in annual fire probability from 1950-2015 on reserve and non-reserve lands on federal and non-federal ownerships across four vegetation types: forests, rangelands, shrublands, and forests without commercial species. Fire probability was found to have increased over time across all 32 categories. A marginal effects analysis showed that federal ownership and firefighting was associated with increased fire probability, and that the difference in fire probability on federal versus non-federal lands is increasing over time. Ownership, firefighting, and reserve status, played roughly equal roles in determining fire probability, and were found to have much greater influence than average maximum temperature (°C) during summer months (June, July, August), average annual precipitation (cm), and average annual topsoil moisture content by volume, demonstrating the critical role these factors play in western fire regimes and the importance of including them in future analysis focused on understanding and predicting wildfire in the Western United States.
Gravity and count probabilities in an expanding universe
NASA Technical Reports Server (NTRS)
Bouchet, Francois R.; Hernquist, Lars
1992-01-01
The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
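The void probability mentioned above has a simple analytic benchmark: for an unclustered (Poisson) point process, the count in a cell of volume V is Poisson with mean n̄V, so the void probability is P0 = exp(-n̄V). A minimal stdlib sketch (our own illustration, not the paper's N-body pipeline) checks a Monte Carlo estimate against this:

```python
import math
import random

def poisson_sample(mu, rng):
    """Knuth's inversion sampler for Poisson(mu); fine for small mu."""
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def void_probability_mc(nbar_V=1.0, cells=200_000, seed=7):
    """Fraction of cells containing zero points of a Poisson process
    with mean count nbar_V per cell."""
    rng = random.Random(seed)
    empty = sum(1 for _ in range(cells) if poisson_sample(nbar_V, rng) == 0)
    return empty / cells

p0 = void_probability_mc()
print(p0, math.exp(-1.0))   # Monte Carlo estimate vs analytic void probability
```

Gravitational clustering raises the void probability above this Poisson baseline at fixed mean density, which is one of the signatures the paper tracks.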
Transition Dipole Moments and Transition Probabilities of the CN Radical
NASA Astrophysics Data System (ADS)
Yin, Yuan; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue
2018-04-01
This paper studies the transition probabilities of electric dipole transitions between 10 low-lying states of the CN radical: X2Σ+, A2Π, B2Σ+, a4Σ+, b4Π, 14Σ‑, 24Π, 14Δ, 16Σ+, and 16Π. The potential energy curves are calculated using the CASSCF method, followed by the icMRCI approach with the Davidson correction. The transition dipole moments between different states are calculated. To improve the accuracy of the potential energy curves, core–valence correlation and scalar relativistic corrections, as well as the extrapolation of potential energies to the complete basis set limit, are included. The Franck–Condon factors and Einstein coefficients of emissions are calculated. The radiative lifetimes are determined for the vibrational levels of the A2Π, B2Σ+, b4Π, 14Σ‑, 24Π, 14Δ, and 16Π states. According to the transition probabilities and radiative lifetimes, some guidelines for detecting these states spectroscopically are proposed. The spin–orbit coupling effect on the spectroscopic and vibrational properties is evaluated. The splitting energy in the A2Π state is determined to be 50.99 cm‑1, which compares well with experimental values. The potential energy curves, transition dipole moments, spectroscopic parameters, and transition probabilities reported in this paper should be highly reliable. The results obtained here can be used as guidelines for detecting these transitions, in particular those that have not been measured in previous experiments or have not been observed in the Sun, comets, stellar atmospheres, dark interstellar clouds, and diffuse interstellar clouds.
The Specific Features of design and process engineering in branch of industrial enterprise
NASA Astrophysics Data System (ADS)
Sosedko, V. V.; Yanishevskaya, A. G.
2017-06-01
At an industrial enterprise, production is organized as a set of well-established working mechanisms at each stage of the product life cycle, from initial design documentation through manufacturing to disposal. This article presents a mathematical model of the design and process engineering system in a branch of an industrial enterprise, statistical processing of the estimated results of implementing the model in the branch, and a demonstration of the advantages of applying it at this enterprise. To build the model, the flows of information, orders, parts, and modules among groups of divisions in the branch were classified. Based on an analysis of the divisions' activities, data flows, parts, and documents, a state graph of design and process engineering was constructed, the transitions were described, and coefficients were assigned. For each state of the constructed graph, the corresponding limiting state probabilities were defined and Kolmogorov's equations were written. Integrating the system of Kolmogorov equations gives the probability that the specified divisions and production are active as a function of time at each instant. On the basis of the developed mathematical model of a unified system of design, process engineering, and manufacture, and of the state graph, the authors carried out statistical processing of the results of applying the model and demonstrated the advantages of its application at this enterprise. Studies were conducted on the loading probability of the branch's services and third-party contractors (orders received from the branch within a month). The developed mathematical model can be applied to determine the probability that divisions and manufacture are active as a function of time at each instant, which makes it possible to track the workload across the branches of the enterprise.
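The limiting state probabilities referred to above are the stationary solution of Kolmogorov's forward equations, dp/dt = pQ, for a continuous-time Markov chain with generator Q. A minimal sketch with an invented 3-state generator (the article's actual graph and rates are not reproduced here):

```python
# Generator (transition-rate) matrix Q of a toy 3-state continuous-time
# Markov chain; rows sum to zero. The rates are invented for illustration.
Q = [[-0.5,  0.3,  0.2],
     [ 0.4, -0.6,  0.2],
     [ 0.1,  0.5, -0.6]]

def limiting_probabilities(Q, dt=0.01, steps=100_000):
    """Integrate the forward Kolmogorov equation dp/dt = p Q by Euler
    steps until the distribution settles at its limiting values."""
    n = len(Q)
    p = [1.0 / n] * n                     # arbitrary initial distribution
    for _ in range(steps):
        dp = [sum(p[i] * Q[i][j] for i in range(n)) for j in range(n)]
        p = [p[j] + dt * dp[j] for j in range(n)]
    return p

pi = limiting_probabilities(Q)
print([round(v, 4) for v in pi])          # limiting state probabilities
```

At stationarity pi satisfies pi·Q = 0 with components summing to 1, which is exactly the balance-equation form of the limiting probabilities used in the article.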
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xin-Ping, E-mail: xuxp@mail.ihep.ac.cn; Ide, Yusuke
In the literature, there are numerous studies of one-dimensional discrete-time quantum walks (DTQWs) using a moving shift operator. However, there is no exact solution for the limiting probability distributions of DTQWs on cycles using a general coin or swapping shift operator. In this paper, we derive exact solutions for the limiting probability distribution of quantum walks using a general coin and swapping shift operator on cycles for the first time. Based on the exact solutions, we show how to generate symmetric quantum walks and determine the condition under which a symmetric quantum walk appears. Our results suggest that choosing various coin and initial state parameters can achieve a symmetric quantum walk. By defining a quantity to measure the variation of symmetry, the deviation and mixing time of symmetric quantum walks are also investigated.
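A minimal numerical sketch of a DTQW on a cycle (using a Hadamard coin and a moving shift, not the general coin and swapping shift operator the paper solves analytically): time-averaging the position distribution over many steps approximates the limiting distribution the paper treats in closed form.

```python
def dtqw_cycle(N=4, steps=400):
    """Hadamard-coin discrete-time quantum walk on an N-cycle; returns the
    time-averaged position distribution (a stand-in for the limiting one)."""
    s = 2 ** -0.5
    # amplitudes amp[x][c]: position x, coin state c (0 = step left, 1 = right)
    amp = [[0j, 0j] for _ in range(N)]
    amp[0][0] = 1 + 0j                    # walker starts localized at x = 0
    avg = [0.0] * N
    for _ in range(steps):
        # coin flip: Hadamard acting on the internal coin space
        coined = [[s * (a + b), s * (a - b)] for a, b in amp]
        # conditional shift around the cycle
        new = [[0j, 0j] for _ in range(N)]
        for x in range(N):
            new[(x - 1) % N][0] += coined[x][0]
            new[(x + 1) % N][1] += coined[x][1]
        amp = new
        for x in range(N):
            avg[x] += abs(amp[x][0]) ** 2 + abs(amp[x][1]) ** 2
    return [v / steps for v in avg]

dist = dtqw_cycle()
print(dist)
```

Because the evolution is unitary, the instantaneous distribution never converges; it is the Cesàro (time) average that has a well-defined limit, which is why the paper works with limiting distributions in this averaged sense.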
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
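The spatial-versus-temporal trade-off can be made concrete with the standard occupancy identity: with per-survey detection probability p, the chance of at least one detection at an occupied site in K surveys is p* = 1 − (1 − p)^K. A minimal sketch with invented numbers (program GENPRES explores the same trade-off far more thoroughly):

```python
def p_star(p, K):
    """P(detect the species at least once in K independent surveys
    of an occupied site, each with per-survey detection probability p)."""
    return 1.0 - (1.0 - p) ** K

# A fixed budget of 120 total surveys, split between sites and repeat visits:
for sites, K in [(60, 2), (40, 3), (24, 5)]:
    print(f"{sites} sites x {K} visits: p* = {p_star(0.4, K):.3f}")
```

More visits per site raise p* (better separation of "absent" from "present but undetected") at the cost of fewer sites, hence less precision in the occupancy estimate itself; that tension is the design question the paper addresses.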
Dynamics in atomic signaling games.
Fox, Michael J; Touri, Behrouz; Shamma, Jeff S
2015-07-07
We study an atomic signaling game under stochastic evolutionary dynamics. There are a finite number of players who repeatedly update from a finite number of available languages/signaling strategies. Players imitate the most fit agents with high probability or mutate with low probability. We analyze the long-run distribution of states and show that, for sufficiently small mutation probability, its support is limited to efficient communication systems. We find that this behavior is insensitive to the particular choice of evolutionary dynamic, a property that is due to the game having a potential structure with a potential function corresponding to average fitness. Consequently, the model supports conclusions similar to those found in the literature on language competition. That is, we show that efficient languages eventually come to predominate in the society while reproducing the empirical phenomenon of linguistic drift. The emergence of efficiency in the atomic case can be contrasted with results for non-atomic signaling games that establish the non-negligible possibility of convergence, under replicator dynamics, to states of unbounded efficiency loss.
Extinction time of a stochastic predator-prey model by the generalized cell mapping method
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao
2018-03-01
The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Mey, Antonia S. J. S.; Noé, Frank
2014-12-07
We propose a discrete transition-based reweighting analysis method (dTRAM) for analyzing configuration-space-discretized simulation trajectories produced at different thermodynamic states (temperatures, Hamiltonians, etc.). dTRAM provides maximum-likelihood estimates of stationary quantities (probabilities, free energies, expectation values) at any thermodynamic state. In contrast to the weighted histogram analysis method (WHAM), dTRAM does not require data to be sampled from global equilibrium, and can thus produce superior estimates for enhanced sampling data such as parallel/simulated tempering, replica exchange, umbrella sampling, or metadynamics. In addition, dTRAM provides optimal estimates of Markov state models (MSMs) from the discretized state-space trajectories at all thermodynamic states. Under suitable conditions, these MSMs can be used to calculate kinetic quantities (e.g., rates, timescales). In the limit of a single thermodynamic state, dTRAM estimates a maximum likelihood reversible MSM, while in the limit of uncorrelated sampling data, dTRAM is identical to WHAM. dTRAM is thus a generalization of both estimators.
NASA Astrophysics Data System (ADS)
Yin, Yuan; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue
2018-03-01
This work calculates the potential energy curves (PECs) of 9 Λ-S and 28 Ω states of the NCl+ cation. The technique employed is the complete active space self-consistent field method, followed by the internally contracted multireference configuration interaction approach with the Davidson correction. The Λ-S states are X2Π, 12Σ+, 14Π, 14Σ+, 14Σ-, 24Π, 14Δ, 16Σ+, and 16Π, which arise from the first two dissociation channels of the NCl+ cation. The Ω states are generated from these Λ-S states. The 14Π, 14Δ, 16Σ+, and 16Π states are inverted when the spin-orbit coupling effect is included. The 14Σ+, 16Σ+, and 16Π states are very weakly bound, with well depths of only a few hundred cm-1. One avoided crossing of the PECs occurs between the 12Σ+ and 22Σ+ states. To improve the quality of the PECs, core-valence correlation and scalar relativistic corrections are included, and the potential energies are extrapolated to the complete basis set limit. The spectroscopic parameters and vibrational levels are calculated, and the transition dipole moments are computed. The Franck-Condon factors, Einstein coefficients, and radiative lifetimes of many transitions are determined. Spectroscopic approaches are proposed for observing these states according to the transition probabilities. The spin-orbit coupling effect on the spectroscopic and vibrational properties is evaluated. The spectroscopic parameters, vibrational levels, transition dipole moments, and transition probabilities reported in this paper should be highly reliable.
Interrelated structure of high altitude atmospheric profiles
NASA Technical Reports Server (NTRS)
Engler, N. A.; Goldschmidt, M. A.
1972-01-01
A preliminary development of a mathematical model to compute probabilities of thermodynamic profiles is presented. The model assumes an exponential expression for pressure and utilizes the hydrostatic law and equation of state in the determination of density and temperature. It is shown that each thermodynamic variable can be factored into the product of steady-state and perturbation functions. The steady-state functions have profiles similar to those of the 1962 standard atmosphere, while the perturbation functions oscillate about 1. Limitations of the model and recommendations for future work are presented.
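A minimal sketch of the profile construction described above, assuming a constant scale height H (an illustrative simplification; the actual model lets the steady-state profiles vary with altitude): pressure is exponential, density follows from the hydrostatic law, and temperature from the equation of state.

```python
import math

g = 9.80665   # m/s^2
R = 287.05    # J/(kg K), specific gas constant for dry air
p0 = 101325.0 # Pa, surface pressure
H = 8000.0    # m, assumed constant scale height (illustrative)

def pressure(z):
    """Exponential pressure profile p(z) = p0 * exp(-z/H)."""
    return p0 * math.exp(-z / H)

def density(z):
    """Hydrostatic law dp/dz = -rho*g  =>  rho(z) = p(z)/(g*H)."""
    return pressure(z) / (g * H)

def temperature(z):
    """Equation of state p = rho*R*T  =>  T = g*H/R (isothermal for constant H)."""
    return pressure(z) / (density(z) * R)
```

With a constant H the profile comes out isothermal, which is why the real model needs altitude-dependent steady-state functions to match the 1962 standard atmosphere.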
Seaton, Sarah E; Manktelow, Bradley N
2012-07-16
Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
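The effect described above can be reproduced with a short calculation. The sketch below (illustrative, assuming an in-control provider's observed event count O ~ Poisson(E) and the upper Wald limit 1 + z*sqrt(1/E) for the SMR = O/E) computes the true probability of falling above the nominal 2.5% upper limit:

```python
import math

def poisson_sf(k, mu):
    """P(X > k) for X ~ Poisson(mu), via the complement of the CDF."""
    cdf = sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def prob_above_wald_limit(expected, z=1.96):
    """True probability that an 'in-control' provider (O ~ Poisson(E),
    SMR = O/E) falls above the upper Wald limit 1 + z*sqrt(1/E)."""
    threshold = expected + z * math.sqrt(expected)  # O/E > limit  <=>  O > threshold
    return poisson_sf(math.floor(threshold), expected)

p = prob_above_wald_limit(20)  # differs from the nominal 0.025
```

Because O is a discrete count, the true exceedance probability is not 0.025, which is exactly the phenomenon the paper quantifies.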
Enforcing positivity in intrusive PC-UQ methods for reactive ODE systems
Najm, Habib N.; Valorani, Mauro
2014-04-12
We explore the relation between the development of a non-negligible probability of negative states and the instability of numerical integration of the intrusive Galerkin ordinary differential equation system describing uncertain chemical ignition. To prevent this instability without resorting to either multi-element local polynomial chaos (PC) methods or increasing the order of the PC representation in time, we propose a procedure aimed at modifying the amplitude of the PC modes to bring the probability of negative state values below a user-defined threshold. This modification can be effectively described as a filtering procedure of the spectral PC coefficients, which is applied on the fly during the numerical integration when the current value of the probability of negative states exceeds the prescribed threshold. We demonstrate the filtering procedure using a simple model of an ignition process in a batch reactor. This is carried out by comparing different observables and error measures as obtained by non-intrusive Monte Carlo and Gauss-quadrature integration and the filtered intrusive procedure. Lastly, the filtering procedure has been shown to effectively stabilize divergent intrusive solutions, and also to improve the accuracy of stable intrusive solutions which are close to the stability limits.
NASA Astrophysics Data System (ADS)
Ovchinnikov, Igor V.; Schwartz, Robert N.; Wang, Kang L.
2016-03-01
The concept of deterministic dynamical chaos has a long history and is well established by now. Nevertheless, its field-theoretic essence and its stochastic generalization have been revealed only very recently. Within the newly found supersymmetric theory of stochastics (STS), all stochastic differential equations (SDEs) possess topological or de Rham supersymmetry, and stochastic chaos is the phenomenon of its spontaneous breakdown. Even though the STS is free of approximations and thus is technically solid, it is still missing a firm interpretational basis in order to be physically sound. Here, we make a few important steps toward the construction of the interpretational foundation for the STS. In particular, we argue that one way to understand why the ground states of chaotic SDEs are conditional (not total) probability distributions is that some of the variables have infinite memory of initial conditions and thus are not “thermalized”, i.e., cannot be described by initial-conditions-independent probability distributions. As a result, the definitive assumption of physical statistics that the ground state is a steady-state total probability distribution is not valid for chaotic SDEs.
Applications of conformal field theory to problems in 2D percolation
NASA Astrophysics Data System (ADS)
Simmons, Jacob Joseph Harris
This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ1,2, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ1,2 operators. Finally, by identifying the derivative of φ1,2 with the operator φ1,4, we derive several new quantities called first crossing densities.
These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions to a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.
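For context, the first of these quantities, the horizontal crossing probability, has a well-known closed form (Cardy's formula), with η the conformal cross-ratio fixed by the rectangle's aspect ratio:

```latex
\pi_h(\eta) \;=\; \frac{3\,\Gamma(2/3)}{\Gamma(1/3)^{2}}\;
\eta^{1/3}\,{}_2F_1\!\left(\tfrac{1}{3},\tfrac{2}{3};\tfrac{4}{3};\eta\right).
```

The fifth-order differential equation mentioned above has this hypergeometric function among its solutions.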
NASA Astrophysics Data System (ADS)
Matsunaga, Y.; Sugita, Y.
2018-06-01
A data-driven modeling scheme is proposed for the conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov state model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data can provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than learning from ensemble-averaged data does, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.
Failure detection system risk reduction assessment
NASA Technical Reports Server (NTRS)
Aguilar, Robert B. (Inventor); Huang, Zhaofeng (Inventor)
2012-01-01
A process includes determining a probability of a failure mode of a system being analyzed reaching a failure limit as a function of time to failure limit, determining a probability of a mitigation of the failure mode as a function of time to failure limit, and quantifying a risk reduction based on the probability of the failure mode reaching the failure limit and the probability of the mitigation.
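The patent abstract does not spell out the combination rule; one plausible minimal reading (an assumption on our part, not the claimed method) treats the two probabilities as independent, with the risk reduction being the portion of failure risk removed by successful mitigation:

```python
def risk_reduction(p_fail, p_mitigate):
    """Risk removed by mitigation (assumed independence): the failure mode
    would reach its limit, but is successfully mitigated in time."""
    return p_fail * p_mitigate

def residual_risk(p_fail, p_mitigate):
    """Risk remaining after mitigation is accounted for."""
    return p_fail * (1.0 - p_mitigate)
```

In the patented process both probabilities are functions of time to failure limit, so the same combination would be evaluated along that time axis.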
NASA Astrophysics Data System (ADS)
Gao, Haixia; Li, Ting; Xiao, Changming
2016-05-01
When a simple system is in a nonequilibrium state, it will shift toward its equilibrium state, passing through a series of nonequilibrium states along the way. With the assistance of Bayesian statistics and the hyperensemble, a probable probability distribution of these nonequilibrium states can be determined by maximizing the hyperensemble entropy. The state of largest probability is the equilibrium state, and the farther a nonequilibrium state is from the equilibrium one, the smaller its probability; the same conclusion also holds in the multi-state space. Furthermore, if the probability stands for the relative time the corresponding nonequilibrium state can stay, then the velocity at which a nonequilibrium state returns to equilibrium can be determined through the reciprocal of the derivative of this probability. This shows that the farther the state is from equilibrium, the faster the returning velocity; as the system nears its equilibrium state, the velocity becomes smaller and smaller, finally tending to 0 when the system reaches equilibrium.
Atiyeh, B.; Masellis, A.; Conte, C.
2009-01-01
Summary The present review of the literature aims at analysing the challenges facing burn management in low- and middle-income countries (LMICs) and exploring probable modalities to optimize burn management in these countries. In Part I, the epidemiology of burn injuries and the formidable challenges to proper management due to limited resources and inaccessibility to sophisticated skills and technologies in LMICs were presented. Part II discusses the actual state of burn injury management in LMICs. PMID:21991180
Probabilistic Metrology Attains Macroscopic Cloning of Quantum Clocks
NASA Astrophysics Data System (ADS)
Gendra, B.; Calsamiglia, J.; Muñoz-Tapia, R.; Bagan, E.; Chiribella, G.
2014-12-01
It has recently been shown that probabilistic protocols based on postselection boost the performance of quantum clock replication and phase estimation. Here we demonstrate that the improvements in these two tasks have to match exactly in the macroscopic limit where the number of clones grows to infinity, preserving the equivalence between asymptotic cloning and state estimation for arbitrary values of the success probability. Remarkably, the cloning fidelity depends critically on the number of rationally independent eigenvalues of the clock Hamiltonian. We also prove that probabilistic metrology can simulate cloning in the macroscopic limit for arbitrary sets of states when the performance of the simulation is measured by testing small groups of clones.
Landau-Zener transitions for Majorana fermions
NASA Astrophysics Data System (ADS)
Khlebnikov, Sergei
2018-05-01
One-dimensional systems obtained as low-energy limits of hybrid superconductor-topological insulator devices provide means of production, transport, and destruction of Majorana bound states (MBSs) by variations of the magnetic flux. When two or more pairs of MBSs are present in the intermediate state, there is a possibility of a Landau-Zener transition wherein even a slow variation of the flux leads to production of a quasiparticle pair. We study numerically a version of this process with four MBSs produced and subsequently destroyed and find that, quite universally, the probability of quasiparticle production in it is 50%. This implies that the effect may be a limiting factor in applications requiring a high degree of quantum coherence.
Yoshioka, S; Aso, Y; Takeda, Y
1990-06-01
Accelerated stability data obtained at a single temperature are statistically evaluated, and the utility of such data for the assessment of stability is discussed, focusing on the chemical stability of solution-state dosage forms. The probability that the drug content of a product is observed to be within the lower specification limit in the accelerated test is interpreted graphically. This probability depends on experimental errors in the assay and temperature control, as well as on the true degradation rate and activation energy. Therefore, the observation that the drug content meets the specification in accelerated testing can provide only limited information on the shelf life of the drug without knowledge of the activation energy and of the accuracy and precision of the assay and temperature control.
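To illustrate why single-temperature data are insufficient, the sketch below (hypothetical rate and activation energies) applies Arrhenius extrapolation to one accelerated-test degradation rate and shows how strongly the implied shelf life depends on the assumed activation energy:

```python
import math

R = 8.314  # J/(mol K), gas constant

def rate_at(T_target, k_ref, T_ref, Ea):
    """Arrhenius extrapolation: k(T) = k_ref * exp(-Ea/R * (1/T - 1/T_ref))."""
    return k_ref * math.exp(-Ea / R * (1.0 / T_target - 1.0 / T_ref))

def shelf_life_days(k):
    """Time for first-order loss to reach the 90% limit: t90 = ln(10/9)/k."""
    return math.log(10.0 / 9.0) / k

# The same hypothetical 40 C rate (0.001/day at 313.15 K) implies very
# different 25 C shelf lives depending on the assumed activation energy.
k40 = 0.001
t90 = {Ea: shelf_life_days(rate_at(298.15, k40, 313.15, Ea))
       for Ea in (50e3, 80e3, 110e3)}  # J/mol
```

The spread in t90 across assumed Ea values is exactly the information gap the abstract points to.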
A performance-based approach to landslide risk analysis
NASA Astrophysics Data System (ADS)
Romeo, R. W.
2009-04-01
An approach for risk assessment based on a probabilistic analysis of the performance of structures threatened by landslides is shown and discussed. The risk is a possible loss due to the occurrence of a potentially damaging event. Analytically, the risk is the probability convolution of hazard, which defines the frequency of occurrence of the event (i.e., the demand), and fragility, which defines the capacity of the system to withstand the event given its characteristics (i.e., severity) and those of the exposed goods (vulnerability), that is: Risk = P(D ≥ d | S, V). The inequality sets a damage (or loss) threshold beyond which the system's performance is no longer met. Therefore a consistent approach to risk assessment should: 1) adopt a probabilistic model which takes into account all the uncertainties of the involved variables (capacity and demand); 2) follow a performance approach based on given loss or damage thresholds. The proposed method belongs to the category of semi-empirical ones: the theoretical component is given by the probabilistic capacity-demand model; the empirical component is given by the observed statistical behaviour of structures damaged by landslides. Only two landslide properties are required: the area-extent and the type (or kinematism). All other properties required to determine the severity of landslides (such as depth, speed and frequency) are derived via probabilistic methods. The severity (or intensity) of landslides, in terms of kinetic energy, is the demand of resistance; the resistance capacity is given by the cumulative distribution functions of the limit state performance (fragility functions) assessed via damage surveys and card compilation. The investigated limit states are aesthetic (of nominal concern alone), functional (interruption of service) and structural (economic and social losses).
The damage probability is the probabilistic convolution of hazard (the probability mass function of the frequency of occurrence of given severities) and vulnerability (the probability of a limit state performance be reached, given a certain severity). Then, for each landslide all the exposed goods (structures and infrastructures) within the landslide area and within a buffer (representative of the maximum extension of a landslide given a reactivation), are counted. The risk is the product of the damage probability and the ratio of the exposed goods of each landslide to the whole assets exposed to the same type of landslides. Since the risk is computed numerically and by the same procedure applied to all landslides, it is free from any subjective assessment such as those implied in the qualitative methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Datta, Nilanjana; Rouzé, Cambyse; Pautrat, Yan
2016-06-15
Quantum Stein’s lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ^⊗n or σ^⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability α_n of erroneously inferring the state to be σ, the probability β_n of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second-order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
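For reference, the limiting rate of decay described above, together with its second-order refinement for an error bound α_n ≤ ε, can be written as follows (V denotes the quantum information variance and Φ⁻¹ the inverse standard normal distribution function):

```latex
\lim_{n\to\infty} -\frac{1}{n}\log\beta_n \;=\; D(\rho\|\sigma)
\;\equiv\; \operatorname{Tr}\!\bigl[\rho\,(\log\rho - \log\sigma)\bigr],
\qquad
-\log\beta_n \;=\; n\,D(\rho\|\sigma) \;+\; \sqrt{n\,V(\rho\|\sigma)}\;\Phi^{-1}(\varepsilon) \;+\; O(\log n).
```

The second expression is the i.i.d. second-order result of Tomamichel-Hayashi and Li that the paper extends beyond the i.i.d. setting.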
Hierarchical folding free energy landscape of HP35 revealed by most probable path clustering.
Jain, Abhinav; Stock, Gerhard
2014-07-17
Adopting extensive molecular dynamics simulations of the villin headpiece protein (HP35) by Shaw and co-workers, a detailed theoretical analysis of the folding of HP35 is presented. The approach is based on the recently proposed most probable path algorithm, which identifies the metastable states of the system, combined with dynamical coring of these states in order to obtain a consistent Markov state model. The method facilitates the construction of a dendrogram associated with the folding free-energy landscape of HP35, which reveals a hierarchical funnel structure and shows that the native state is a kinetic trap rather than a network hub. The energy landscape of HP35 consists of the entropic unfolded basin U, where the prestructuring of the protein takes place, the intermediate basin I, which is connected to U via the rate-limiting U → I transition state reflecting the formation of helix-1, and the native basin N, containing a state close to the NMR structure and a native-like state that exhibits enhanced fluctuations of helix-3. The model is in line with recent experimental observations that the intermediate and native states differ mostly in their dynamics (locked vs unlocked states). Employing dihedral angle principal component analysis, subdiffusive motion on a multidimensional free-energy surface is found.
NASA Astrophysics Data System (ADS)
Zhou, Dan; Wang, Kedong; Li, Xue
2018-07-01
This study calculates the potential energy curves of 18 Λ-S and 50 Ω states, which arise from the C(3Pg) + P+(3Pg) dissociation channel of the CP+ cation. The calculations are made using the CASSCF method, followed by the icMRCI approach with the Davidson correction. Core-valence correlation and scalar relativistic corrections, as well as extrapolation to the complete basis set limit are included. The transition dipole moments are computed for 25 pairs of Λ-S states. The spin-orbit coupling effect on the spectroscopic and vibrational properties is evaluated. The Franck-Condon factors and Einstein coefficients of emissions are calculated. Radiative lifetimes are obtained for several vibrational levels of some states. The transitions are evaluated and spectroscopic measurement schemes for observing these Λ-S states are proposed. The potential energy curves, spectroscopic constants, vibrational levels, transition dipole moments, and transition probabilities reported in this paper can be considered to be very accurate and reliable. Because no experimental observations are currently available, the results obtained here can be used as guidelines for the detection of these states in appropriate spectroscopy experiments, in particular for observations in stellar atmospheres and in interstellar space.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. 
Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. © 2005 by the Ecological Society of America.
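The Ricker-limit comparison above can be sketched as a small Monte Carlo experiment (parameter values are illustrative, not those of the study): a stochastic Ricker map is iterated and the fraction of trajectories falling below a quasi-extinction threshold is recorded.

```python
import math
import random

def ricker_step(n, rng, r=1.0, K=100.0, sigma=0.5):
    """Stochastic Ricker map: N' = N * exp(r*(1 - N/K) + noise)."""
    return n * math.exp(r * (1.0 - n / K) + rng.gauss(0.0, sigma))

def quasi_extinction_prob(n0=50.0, threshold=10.0, years=50,
                          reps=2000, seed=1):
    """Fraction of simulated trajectories that ever fall below the
    quasi-extinction threshold within the time horizon."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        n = n0
        for _ in range(years):
            n = ricker_step(n, rng)
            if n < threshold:
                hits += 1
                break
    return hits / reps

p = quasi_extinction_prob()
```

A disease model in the same framework would replace the constant environmental noise with density-dependent epidemic shocks, which is the comparison the paper formalizes.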
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second-moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between the upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
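A minimal sketch of the idea (our own toy version, not necessarily the paper's exact rule): each row of the transition-probability estimate is nudged by the state prediction error, the difference between the observed next state's indicator and the current estimate.

```python
import random

def learn_transition_probs(trajectory, n_states, alpha=0.1):
    """Delta-rule estimate of P(s'|s) driven by the state prediction error:
    P[s][j] += alpha * (1{j == observed s'} - P[s][j]).
    Each update preserves the row sum, so rows stay normalized."""
    P = [[1.0 / n_states] * n_states for _ in range(n_states)]
    for s, s_next in zip(trajectory, trajectory[1:]):
        for j in range(n_states):
            target = 1.0 if j == s_next else 0.0
            P[s][j] += alpha * (target - P[s][j])
    return P

# toy chain: from state 0 go to 1 with prob 0.8; from state 1 always return to 0
rng = random.Random(0)
traj = [0]
for _ in range(5000):
    traj.append((1 if rng.random() < 0.8 else 0) if traj[-1] == 0 else 0)
P = learn_transition_probs(traj, 2)
```

With a fixed learning rate the estimate is an exponentially weighted average of recent observations, so it tracks the true probabilities up to sampling noise.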
McClure, Meredith L; Burdett, Christopher L; Farnsworth, Matthew L; Lutman, Mark W; Theobald, David M; Riggs, Philip D; Grear, Daniel A; Miller, Ryan S
2015-01-01
Wild pigs (Sus scrofa), also known as wild swine, feral pigs, or feral hogs, are one of the most widespread and successful invasive species around the world. Wild pigs have been linked to extensive and costly agricultural damage and present a serious threat to plant and animal communities due to their rooting behavior and omnivorous diet. We modeled the current distribution of wild pigs in the United States to better understand the physiological and ecological factors that may determine their invasive potential and to guide future study and eradication efforts. Using national-scale wild pig occurrence data reported between 1982 and 2012 by wildlife management professionals, we estimated the probability of wild pig occurrence across the United States using a logistic discrimination function and environmental covariates hypothesized to influence the distribution of the species. Our results suggest the distribution of wild pigs in the U.S. was most strongly limited by cold temperatures and availability of water, and that they were most likely to occur where potential home ranges had higher habitat heterogeneity, providing access to multiple key resources including water, forage, and cover. High probability of occurrence was also associated with frequent high temperatures, up to a high threshold. However, this pattern is driven by pigs' historic distribution in warm climates of the southern U.S. Further study of pigs' ability to persist in cold northern climates is needed to better understand whether low temperatures actually limit their distribution. Our model highlights areas at risk of invasion as those with habitat conditions similar to those found in pigs' current range that are also near current populations. This study provides a macro-scale approach to generalist species distribution modeling that is applicable to other generalist and invasive species.
NASA Astrophysics Data System (ADS)
Ronde, Christian De
In classical physics, probabilistic or statistical knowledge has always been related to ignorance or inaccurate subjective knowledge about an actual state of affairs. This idea has been extended to quantum mechanics through a completely incoherent interpretation of the Fermi-Dirac and Bose-Einstein statistics in terms of "strange" quantum particles. This interpretation, naturalized through a widespread "way of speaking" in the physics community, contradicts Born's physical account of Ψ as a "probability wave" which provides statistical information about outcomes that, in fact, cannot be interpreted in terms of `ignorance about an actual state of affairs'. In the present paper we discuss how the metaphysics of actuality has played an essential role in limiting the possibilities of understanding things differently. We propose instead a metaphysical scheme in terms of immanent powers with definite potentia which allows us to consider quantum probability in a new light, namely, as providing objective knowledge about a potential state of affairs.
Freezing in stripe states for kinetic Ising models: a comparative study of three dynamics
NASA Astrophysics Data System (ADS)
Godrèche, Claude; Pleimling, Michel
2018-04-01
We present a comparative study of the fate of an Ising ferromagnet on the square lattice with periodic boundary conditions evolving under three different zero-temperature dynamics. The first one is Glauber dynamics, the two other dynamics correspond to two limits of the directed Ising model, defined by rules that break the full symmetry of the former, yet sharing the same Boltzmann-Gibbs distribution at stationarity. In one of these limits the directed Ising model is reversible, in the other one it is irreversible. For the kinetic Ising-Glauber model, several recent studies have demonstrated the role of critical percolation to predict the probabilities for the system to reach the ground state or to fall in a metastable state. We investigate to what extent the predictions coming from critical percolation still apply to the two other dynamics.
A hydroclimatological approach to predicting regional landslide probability using Landlab
NASA Astrophysics Data System (ADS)
Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.
2018-02-01
We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates in a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km2. The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
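The stability core of the model above can be sketched as follows (a standard infinite-slope formulation with illustrative parameter distributions; the actual component uses spatially distributed soils, vegetation, and recharge): the factor of safety combines a cohesion term and a wetness-reduced friction term, and Monte Carlo sampling of uncertain parameters gives a probability of initiation, P(FS < 1).

```python
import math
import random

def factor_of_safety(theta, h, c, phi, w,
                     rho_s=2000.0, rho_w=1000.0, g=9.81):
    """Infinite-slope stability.  theta: slope angle (rad), h: soil depth (m),
    c: cohesion (Pa), phi: friction angle (rad), w: relative wetness (0..1)."""
    cohesion = c / (rho_s * g * h * math.sin(theta) * math.cos(theta))
    friction = (1.0 - w * rho_w / rho_s) * math.tan(phi) / math.tan(theta)
    return cohesion + friction

def landslide_probability(theta, h, reps=5000, seed=2):
    """Monte Carlo P(FS < 1) with illustrative parameter distributions."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(reps):
        c = rng.uniform(2e3, 8e3)                    # cohesion, Pa
        phi = math.radians(rng.uniform(28.0, 40.0))  # friction angle
        w = min(1.0, max(0.0, rng.gauss(0.6, 0.2)))  # relative wetness
        if factor_of_safety(theta, h, c, phi, w) < 1.0:
            fails += 1
    return fails / reps

p = landslide_probability(theta=math.radians(35), h=1.5)
```

In the full model the wetness distribution is driven by the annual maximum recharge from the macroscale hydrologic model, which is what makes the resulting probability "hydroclimatological".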
Globally coupled stochastic two-state oscillators: fluctuations due to finite numbers.
Pinto, Italo'Ivo Lima Dias; Escaff, Daniel; Harbola, Upendra; Rosas, Alexandre; Lindenberg, Katja
2014-05-01
Infinite arrays of coupled two-state stochastic oscillators exhibit well-defined steady states. We study the fluctuations that occur when the number N of oscillators in the array is finite. We choose a particular form of global coupling that in the infinite array leads to a pitchfork bifurcation from a monostable to a bistable steady state, the latter with two equally probable stationary states. The control parameter for this bifurcation is the coupling strength. In finite arrays these states become metastable: The fluctuations lead to distributions around the most probable states, with one maximum in the monostable regime and two maxima in the bistable regime. In the latter regime, the fluctuations lead to transitions between the two peak regions of the distribution. Also, we find that the fluctuations break the symmetry in the bimodal regime, that is, one metastable state becomes more probable than the other, increasingly so with increasing array size. To arrive at these results, we start from microscopic dynamical evolution equations from which we derive a Langevin equation that exhibits an interesting multiplicative noise structure. We also present a master equation description of the dynamics. Both of these equations lead to the same Fokker-Planck equation, the master equation via a 1/N expansion and the Langevin equation via standard methods of Itô calculus for multiplicative noise. From the Fokker-Planck equation we obtain an effective potential that reflects the transition from the monomodal to the bimodal distribution as a function of a control parameter. We present a variety of numerical and analytic results that illustrate the strong effects of the fluctuations. We also show that the limits N → ∞ and t → ∞ (t is the time) do not commute. In fact, the two orders of implementation lead to drastically different results.
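A minimal illustration of the finite-N effects described above: a Glauber-type simulation of N globally coupled two-state units whose flip probability depends on the global mean state. The specific rate function is an assumed stand-in, not the paper's model; it merely reproduces the qualitative feature that strong coupling traps the finite population near one metastable branch.

```python
import math
import random

def simulate(N=50, coupling=3.0, steps=2000, seed=1):
    """Glauber-type dynamics for N globally coupled two-state units.

    Each unit holds a state +/-1; at each step a randomly chosen unit
    flips with a probability that depends on the global mean state m.
    (Schematic stand-in for the paper's rates, for illustration only.)
    """
    rng = random.Random(seed)
    states = [1] * N                      # start fully synchronized
    traj = []
    for _ in range(steps):
        m = sum(states) / N               # global order parameter
        i = rng.randrange(N)
        # Flip probability is small when the unit agrees with the majority
        p_flip = 1.0 / (1.0 + math.exp(2.0 * coupling * states[i] * m))
        if rng.random() < p_flip:
            states[i] = -states[i]
        traj.append(m)
    return traj
```

Histogramming `traj` for weak versus strong `coupling` shows the monomodal-to-bimodal change of the stationary distribution; long runs at moderate N also exhibit the rare transitions between the two peak regions.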
Fingelkurts, Alexander A.; Fingelkurts, Andrew A.
2014-01-01
For the first time the dynamic repertoires and oscillatory types of local EEG states in 13 diverse conditions (examined over 9 studies) that covered healthy-normal, altered and pathological brain states were quantified within the same methodological and conceptual framework. EEG oscillatory states were assessed by the probability-classification analysis of short-term EEG spectral patterns. The results demonstrated that brain activity consists of a limited repertoire of local EEG states in any of the examined conditions. The size of the state repertoires was associated with changes in cognition and vigilance or neuropsychopathologic conditions. Additionally universal, optional and unique EEG states across 13 diverse conditions were observed. It was demonstrated also that EEG oscillations which constituted EEG states were characteristic for different groups of conditions in accordance to oscillations’ functional significance. The results suggested that (a) there is a limit in the number of local states available to the cortex and many ways in which these local states can rearrange themselves and still produce the same global state and (b) EEG individuality is determined by varying proportions of universal, optional and unique oscillatory states. The results enriched our understanding about dynamic microstructure of EEG-signal. PMID:24505292
Meeting the need for personal care among the elderly: does Medicaid home care spending matter?
Kemper, Peter; Weaver, France; Short, Pamela Farley; Shea, Dennis; Kang, Hyojin
2008-02-01
To determine whether Medicaid home care spending reduces the proportion of the disabled elderly population who do not get help with personal care. Data on Medicaid home care spending per poor elderly person in each state is merged with data from the Medicare Current Beneficiary Survey for 1992, 1996, and 2000. The sample (n=6,067) includes elderly persons living in the community who have at least one limitation in activities of daily living (ADLs). Using a repeated cross-section analysis, the probability of not getting help with an ADL is estimated as a function of Medicaid home care spending, individual income, interactions between income and spending, and a set of individual characteristics. Because Medicaid home care spending is targeted at the low-income population, it is not expected to affect the population with higher incomes. We exploit this difference by using higher-income groups as comparison groups to assess whether unobserved state characteristics bias the estimates. Among the low-income disabled elderly, the probability of not receiving help with an ADL limitation is about 10 percentage points lower in states in the top quartile of per capita Medicaid home care spending than in other states. No such association is observed in higher-income groups. These results are robust to a set of sensitivity analyses of the methods. These findings should reassure state and federal policymakers considering expanding Medicaid home care programs that they do deliver services to low-income people with long-term care needs and reduce the percent of those who are not getting help.
ERIC Educational Resources Information Center
Ruckle, L. J.; Belloni, M.; Robinett, R. W.
2012-01-01
The biharmonic oscillator and the asymmetric linear well are two confining power-law-type potentials for which complete bound-state solutions are possible in both classical and quantum mechanics. We examine these problems in detail, beginning with studies of their trajectories in position and momentum space, evaluation of the classical probability…
Transition probabilities in neutron-rich 80,82Se and the role of the ν g_9/2 orbital
NASA Astrophysics Data System (ADS)
Litzinger, J.; Blazhev, A.; Dewald, A.; Didierjean, F.; Duchêne, G.; Fransen, C.; Lozeva, R.; Verney, D.; de Angelis, G.; Bazzacco, D.; Birkenbach, B.; Bottoni, S.; Bracco, A.; Braunroth, T.; Cederwall, B.; Corradi, L.; Crespi, F. C. L.; Désesquelles, P.; Eberth, J.; Ellinger, E.; Farnea, E.; Fioretto, E.; Gernhäuser, R.; Goasduff, A.; Görgen, A.; Gottardo, A.; Grebosz, J.; Hackstein, M.; Hess, H.; Ibrahim, F.; Jolie, J.; Jungclaus, A.; Kolos, K.; Korten, W.; Leoni, S.; Lunardi, S.; Maj, A.; Menegazzo, R.; Mengoni, D.; Michelagnoli, C.; Mijatovic, T.; Million, B.; Möller, O.; Modamio, V.; Montagnoli, G.; Montanari, D.; Morales, A. I.; Napoli, D. R.; Niikura, M.; Pietralla, N.; Pollarolo, G.; Pullia, A.; Quintana, B.; Recchia, F.; Reiter, P.; Rosso, D.; Sahin, E.; Salsac, M. D.; Scarlassara, F.; Söderström, P.-A.; Stefanini, A. M.; Stezowski, O.; Szilner, S.; Theisen, Ch.; Valiente-Dobón, J. J.; Vandone, V.; Vogt, A.
2018-04-01
Transition probabilities of intermediate-spin yrast and non-yrast excitations in 80,82Se were investigated in a recoil distance Doppler-shift (RDDS) experiment performed at the Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali di Legnaro. The Cologne Plunger device for deep inelastic scattering was used for the RDDS technique and was combined with the AGATA Demonstrator array for the γ-ray detection and coupled to the PRISMA magnetic spectrometer for an event-by-event particle identification. In 80Se, the level lifetimes of the yrast (6_1^+) and (8_1^+) states and of a non-yrast band feeding the yrast 4_1^+ state are determined. A spin and parity assignment of the head of this sideband is discussed based on the experimental results and supported by large-scale shell-model calculations. In 82Se, the level lifetimes of the yrast 6_1^+ state and the yrare 4_2^+ state and lifetime limits of the yrast (10_1^+) state and of the 5_1^- state are determined. Although the experimental results contain large uncertainties, they are interpreted with care in terms of large-scale shell-model calculations using the effective interactions JUN45 and jj44b. The excited states' wave functions are investigated and discussed with respect to the role of the neutron g_9/2 orbital.
Ignorance isn't bliss: why patients become angry.
Sonnenberg, Amnon
2015-06-01
Patients with cognitive limitations may struggle to understand complex arguments and feel overwhelmed by the need to choose among medical options that they poorly understand. Such struggle may result in frustration and anger directed at the physician. The aim of the present study is to explain the characteristics underlying such situations. A decision tree is modeled to capture the choice that every patient has to make after receiving medical advice. Patient choices are phrased in terms of a threshold probability for accepting or rejecting advice by physicians. To a patient with poor understanding of medical exigencies, all differences between present or absent disease state, prognosis, and risks of intervention may seem largely arbitrary and meaningless. With little or no guidance to make an informed decision, taking any medical action is deemed wasted and harmful, whereas inaction leaves the underlying medical problem unsolved. Both choices appear equally ineffective with respect to the patient's symptoms and therefore unappealing. As shown by applying threshold analysis to a patient in a state of ignorance, no threshold probability for following medical advice exists. Patients with cognitive limitations will become frustrated and angry by a seemingly dismal situation without good alternatives to choose from.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, O.; Roa, Luis; Delgado, A.
We study the probabilistic cloning of equidistant states. These states are such that the inner product between them is a complex constant or its conjugate. Thereby, it is possible to study their cloning in a simple way. In particular, we are interested in the behavior of the cloning probability as a function of the phase of the overlap among the involved states. We show that for certain families of equidistant states Duan and Guo's cloning machine leads to cloning probabilities lower than the optimal unambiguous discrimination probability of equidistant states. We propose an alternative cloning machine whose cloning probability is higher than or equal to the optimal unambiguous discrimination probability for any family of equidistant states. Both machines achieve the same probability for equidistant states whose inner product is a positive real number.
Spectral Analysis of Two Coupled Diatomic Rotor Molecules
Crogman, Horace T.; Harter, William G.
2014-01-01
In a previous article the theory of frame transformation relation between Body Oriented Angular (BOA) states and Lab Weakly Coupled states (LWC) was developed to investigate simple rotor–rotor interactions. By analyzing the quantum spectrum for two coupled diatomic molecules and comparing it with spectrum and probability distribution of simple models, evidence was found that, as we move from a LWC state to a strongly coupled state, a single rotor emerges in the strong limit. In the low coupling, the spectrum was quadratic which indicates the degree of floppiness in the rotor–rotor system. However in the high coupling behavior it was found that the spectrum was linear which corresponds to a rotor deep in a well. PMID:25353181
Instructional television utilization in the United States
NASA Technical Reports Server (NTRS)
Dumolin, J. R.
1971-01-01
Various aspects of utilizing instructional television (ITV) are summarized and evaluated and basic guidelines for future utilization of television as an instructional medium in education are considered. The role of technology in education, capabilities and limitations of television as an instructional media system and the state of ITV research efforts are discussed. Examples of various ongoing ITV programs are given and summarized. The problems involved in the three stages of the ITV process (production, distribution, and classroom utilization) are presented. A summary analysis outlines probable trends in future utilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moller, Peter; Pereira, J; Hennrich, S
Measurements of the β-decay properties of A ≲ 110 r-process nuclei have been completed at the National Superconducting Cyclotron Laboratory at Michigan State University. β-decay half-lives for 105Y, 106,107Zr, and 108,111Mo, along with β-delayed neutron emission probabilities of 104Y and 109,110Mo and upper limits for 105Y, 103-107Zr, and 108,111Mo, have been measured for the first time. Studies on the basis of the quasiparticle random-phase approximation are used to analyze the ground-state deformation of these nuclei.
Near optimal discrimination of binary coherent signals via atom–light interaction
NASA Astrophysics Data System (ADS)
Han, Rui; Bergou, János A.; Leuchs, Gerd
2018-04-01
We study the discrimination of weak coherent states of light with significant overlaps by nondestructive measurements on the light states through measuring atomic states that are entangled to the coherent states via dipole coupling. In this way, the problem of measuring and discriminating coherent light states is shifted to finding the appropriate atom–light interaction and atomic measurements. We show that this scheme allows us to attain a probability of error extremely close to the Helstrom bound, the ultimate quantum limit for discriminating binary quantum states, through the simple Jaynes–Cummings interaction between the field and ancilla with optimized light–atom coupling and projective measurements on the atomic states. Moreover, since the measurement is nondestructive on the light state, information that is not detected by one measurement can be extracted from the post-measurement light states through subsequent measurements.
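For reference, the Helstrom bound cited above has a closed form for two pure states, and for coherent states the overlap is elementary. A quick numeric sketch (the amplitudes are arbitrary illustrations, not values from the paper):

```python
import math

def helstrom_error(alpha, beta, p1=0.5):
    """Minimum error probability for discriminating pure states |alpha>, |beta>.

    For priors p1, p2 = 1 - p1:
        P_err = (1 - sqrt(1 - 4 p1 p2 |<alpha|beta>|^2)) / 2,
    and for coherent states |<alpha|beta>|^2 = exp(-|alpha - beta|^2).
    """
    p2 = 1.0 - p1
    overlap_sq = math.exp(-abs(alpha - beta) ** 2)
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p1 * p2 * overlap_sq))
```

For binary phase-shift keyed states ±α with small |α| the overlap is large and the bound stays well above zero, which is the regime where the atom-light scheme's near-optimal performance matters.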
Probabilistic Cloning of Three Real States with Optimal Success Probabilities
NASA Astrophysics Data System (ADS)
Rui, Pin-shu
2017-06-01
We investigate the probabilistic quantum cloning (PQC) of three real states with average probability distribution. To get the analytic forms of the optimal success probabilities we assume that the three states have only two pairwise inner products. Based on the optimal success probabilities, we derive the explicit form of the 1 → 2 PQC for cloning three real states. The unitary operation needed in the PQC process is worked out too. The optimal success probabilities are also generalized to the M → N PQC case.
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. 
These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10(exp -7). This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
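The decomposition step described above — replacing the joint chance constraint by per-step constraints whose allocated risks sum to the overall risk bound, a sufficient condition by Boole's inequality — can be sketched for scalar Gaussian states. The uniform risk allocation and all numbers below are illustrative assumptions, not the innovation's actual algorithm:

```python
from statistics import NormalDist

def decomposed_chance_check(means, sigmas, limits, risk_bound):
    """Sufficient check of a joint chance constraint P(any x_t > limit_t) <= risk_bound.

    Each state x_t ~ N(mean_t, sigma_t^2). The risk bound is split
    uniformly over the steps; requiring each marginal violation
    probability to stay within its share is sufficient by Boole's
    inequality, though conservative.
    """
    share = risk_bound / len(means)
    for mu, sigma, lim in zip(means, sigmas, limits):
        # Deterministic tightened limit the mean trajectory must respect
        tightened = lim - sigma * NormalDist().inv_cdf(1.0 - share)
        if mu > tightened:
            return False
    return True
```

The tightening margin grows only logarithmically as the per-step share shrinks, which is why such decompositions remain usable even for very small risk bounds like 10^-7.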
Manktelow, Bradley N.; Seaton, Sarah E.
2012-01-01
Background Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results The first interpolation method ensured that the probability of an observation of an ‘in control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. 
We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
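One way to realize the interpolation idea discussed above is to interpolate linearly in the Poisson survival function between adjacent integer counts. The sketch below shows one such convention for the upper limit; it is not necessarily identical to any of the three methods compared in the paper:

```python
import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation; 0 for k < 0."""
    if k < 0:
        return 0.0
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def smr_upper_limit(expected, p=0.025):
    """Interpolated upper funnel-plot limit for an SMR with a given expected count.

    Finds the smallest count r with P(X >= r) <= p, then interpolates
    linearly between r - 1 and r so the nominal exceedance probability
    equals p exactly (an assumed convention for illustration).
    """
    r = 1
    while 1.0 - poisson_cdf(r - 1, expected) > p:
        r += 1
    sf_hi = 1.0 - poisson_cdf(r - 1, expected)   # P(X >= r)     <= p
    sf_lo = 1.0 - poisson_cdf(r - 2, expected)   # P(X >= r - 1) >  p
    lam = (sf_lo - p) / (sf_lo - sf_hi)
    return (r - 1 + lam) / expected
```

As the abstract notes, the choice of convention matters most when the expected count is small, because then adjacent integer counts carry large probability jumps.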
Fokker-Planck description of conductance-based integrate-and-fire neuronal networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovacic, Gregor; Tao, Louis; Rangan, Aaditya V.
2009-08-15
Steady dynamics of coupled conductance-based integrate-and-fire neuronal networks in the limit of small fluctuations is studied via the equilibrium states of a Fokker-Planck equation. An asymptotic approximation for the membrane-potential probability density function is derived and the corresponding gain curves are found. Validity conditions are discussed for the Fokker-Planck description and verified via direct numerical simulations.
The PX-AMS system and its applications at CIAE
NASA Astrophysics Data System (ADS)
He, Ming; Jiang, Shan; Jiang, Songsheng; Wu, ShaoYong; Guo, Gang
2004-08-01
The projectile X-ray detection method (PXD) has been set up at the China Institute of Atomic Energy AMS system. Using this method, the half-lives of 79Se and 75Se have been measured, the intensity of 64Cu radioactive nuclear beams has been identified, and the upper limit of the transition probability of the first excited state of 64Cu was obtained.
A Probability Model for Drought Prediction Using Fusion of Markov Chain and SAX Methods
NASA Astrophysics Data System (ADS)
Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.
2017-09-01
Drought is one of the most powerful natural disasters, affecting many aspects of the environment, and it is most severe in arid and semi-arid areas. Monitoring and predicting the severity of drought can be useful in managing the natural disasters it causes. Many indices have been used in predicting droughts, such as SPI, VCI, and TVX. In this paper, based on three data sets (rainfall, NDVI, and land surface temperature) acquired from MODIS satellite imagery, time series of SPI, VCI, and TVX from winter 2000 to summer 2015 were created for the east region of Isfahan province. Using these indices and a fusion of symbolic aggregate approximation (SAX) and a hidden Markov chain, drought was predicted for fall 2015. For this purpose, each time series was first transformed into a set of qualitative data based on the state of drought (5 groups) using the SAX algorithm; then the probability matrix for the future state was created using the hidden Markov chain. The fall drought severity was predicted by fusing the probability matrix with the state of drought severity in summer 2015. The prediction is based on the likelihood of each drought state: severe drought, middle drought, normal, middle wet, and severe wet. The analysis and experimental results show that the output of the proposed algorithm is acceptable and that the algorithm is appropriate and efficient for predicting drought from remote sensing data.
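The SAX-plus-Markov pipeline described above reduces to three small steps: discretize each index series into drought-state symbols, count state-to-state transitions, and read off the most probable next state. A minimal sketch with invented breakpoints (the paper's five drought classes and their thresholds are not reproduced here):

```python
def sax_symbols(series, breakpoints):
    """Discretize a numeric index series into integer drought-state symbols."""
    return [sum(1 for b in breakpoints if x > b) for x in series]

def transition_matrix(symbols, n_states):
    """Row-normalized first-order Markov transition probabilities."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(symbols, symbols[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

def predict_next(symbols, n_states):
    """Most probable next drought state given the last observed state."""
    row = transition_matrix(symbols, n_states)[symbols[-1]]
    return max(range(n_states), key=row.__getitem__)
```

In practice the breakpoints would come from the drought-class definitions of each index (e.g., SPI thresholds), and the symbol series would span the full 2000-2015 record before predicting the next season.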
Limited-path-length entanglement percolation in quantum complex networks
NASA Astrophysics Data System (ADS)
Cuquet, Martí; Calsamiglia, John
2011-03-01
We study entanglement distribution in quantum complex networks where nodes are connected by bipartite entangled states. These networks are characterized by a complex structure, which dramatically affects how information is transmitted through them. For pure quantum state links, quantum networks exhibit a remarkable feature absent in classical networks: it is possible to effectively rewire the network by performing local operations on the nodes. We propose a family of such quantum operations that decrease the entanglement percolation threshold of the network and increase the size of the giant connected component. We provide analytic results for complex networks with an arbitrary (uncorrelated) degree distribution. These results are in good agreement with numerical simulations, which also show enhancement in correlated and real-world networks. The proposed quantum preprocessing strategies are not robust in the presence of noise. However, even when the links consist of (noisy) mixed states, one can send quantum information through a connecting path with a fidelity that decreases with the path length. In this noisy scenario, complex networks offer a clear advantage over regular lattices, namely, the fact that two arbitrary nodes can be connected through a relatively small number of steps, known as the small-world effect. We calculate the probability that two arbitrary nodes in the network can successfully communicate with a fidelity above a given threshold. This amounts to working out the classical problem of percolation with a limited path length. We find that this probability can be significant even for paths limited to few connections and that the results for standard (unlimited) percolation are soon recovered if the path length exceeds by a finite amount the average path length, which in complex networks generally scales logarithmically with the size of the network.
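The limited-path-length percolation probability discussed above can be estimated by direct simulation: generate a random network, let each link survive independently, and test whether two random nodes connect within a bounded number of hops. This sketch uses an Erdős-Rényi graph as a simple stand-in for the complex-network degree distributions treated analytically in the paper:

```python
import random
from collections import deque

def within_hops(adj, s, t, max_len):
    """True if t is reachable from s in at most max_len edges (BFS)."""
    if s == t:
        return True
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if dist[u] == max_len:
            continue          # do not expand beyond the path-length limit
        for v in adj[u]:
            if v not in dist:
                if v == t:
                    return True
                dist[v] = dist[u] + 1
                queue.append(v)
    return False

def connection_probability(n, p_edge, q_link, max_len, trials=300, seed=7):
    """Estimate P(two random nodes connect within max_len surviving links).

    Builds G(n, p_edge) random graphs; each link independently survives
    (e.g., successful entanglement conversion) with probability q_link.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p_edge and rng.random() < q_link:
                    adj[i].append(j)
                    adj[j].append(i)
        s, t = rng.sample(range(n), 2)
        if within_hops(adj, s, t, max_len):
            hits += 1
    return hits / trials
```

Sweeping `max_len` upward reproduces the qualitative finding quoted above: the limited-path probability approaches the unlimited-percolation value once the limit exceeds the (logarithmically small) average path length.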
Role of conviction in nonequilibrium models of opinion formation
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Anteneodo, Celia
2012-12-01
We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).
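A schematic member of this model class — three opinion states, quenched binary convictions, and pairwise couplings that are negative with probability p — can be simulated directly. The specific kinetic-exchange update rule below is an assumed form for illustration, not necessarily the paper's exact rule:

```python
import random

def simulate_opinions(n=100, p=0.1, steps=20000, seed=3,
                      convictions=(0.5, 1.0), weight_high=0.5):
    """Mean-field kinetic opinion model with disordered couplings.

    Each agent holds an opinion in {-1, 0, +1} and a quenched conviction
    drawn from a binary distribution. At each step a random agent i
    combines its conviction-weighted opinion with a random partner's,
    through a coupling that is negative with probability p, and keeps
    the sign of the result.
    """
    rng = random.Random(seed)
    ops = [rng.choice((-1, 0, 1)) for _ in range(n)]
    conv = [convictions[1] if rng.random() < weight_high else convictions[0]
            for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)         # mean-field: any pair interacts
        eps = -1.0 if rng.random() < p else 1.0
        s = conv[i] * ops[i] + eps * ops[j]
        ops[i] = (s > 0) - (s < 0)             # sign, mapped to {-1, 0, +1}
    return ops

def order_parameter(ops):
    """|average opinion|: ~0 in the disordered phase, ~1 near consensus."""
    return abs(sum(ops)) / len(ops)
```

Sweeping p and plotting the order parameter locates the critical point p_c; changing the conviction distribution or switching to parallel updates shifts the critical line, as the abstract describes.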
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
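For context, the standard equilibrium Green-Kubo relation that this work generalizes expresses the diffusion tensor as a time integral of the velocity autocorrelation function (schematic form; the abstract's nonequilibrium version replaces the equilibrium average by an average over the sheared state, with velocities taken in the tagged particle's local fluid rest frame):

```latex
D_{ij} = \int_0^\infty \left\langle \delta v_i(t)\, \delta v_j(0) \right\rangle \, dt
```

Here $\delta v_i$ is the fluctuation of the tagged particle's velocity about the local flow; in the sheared state the tensor becomes anisotropic and depends on the shear rate through the nonequilibrium average.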
Emotion and decision-making: affect-driven belief systems in anxiety and depression.
Paulus, Martin P; Yu, Angela J
2012-09-01
Emotion processing and decision-making are integral aspects of daily life. However, our understanding of the interaction between these constructs is limited. In this review, we summarize theoretical approaches that link emotion and decision-making, and focus on research with anxious or depressed individuals to show how emotions can interfere with decision-making. We integrate the emotional framework based on valence and arousal with a Bayesian approach to decision-making in terms of probability and value processing. We discuss how studies of individuals with emotional dysfunctions provide evidence that alterations of decision-making can be viewed in terms of altered probability and value computation. We argue that the probabilistic representation of belief states in the context of partially observable Markov decision processes provides a useful approach to examine alterations in probability and value representation in individuals with anxiety and depression, and outline the broader implications of this approach. Copyright © 2012. Published by Elsevier Ltd.
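The belief-state formalism invoked above maintains a probability distribution over hidden states, updated by Bayes' rule after each action and observation. A minimal two-state sketch (the transition and observation numbers are invented for illustration, e.g., a "safe" versus "threat" hidden state):

```python
def belief_update(belief, transition, observation_likelihood, obs):
    """One Bayes-filter step for a POMDP belief over discrete hidden states.

    belief[s]                       -- current P(state = s)
    transition[s][s2]               -- P(s2 | s) under the chosen action
    observation_likelihood[s2][obs] -- P(obs | s2)
    """
    n = len(belief)
    # Predict: propagate the belief through the transition model
    predicted = [sum(belief[s] * transition[s][s2] for s in range(n))
                 for s2 in range(n)]
    # Correct: reweight by the likelihood of the observation, then normalize
    unnorm = [observation_likelihood[s2][obs] * predicted[s2]
              for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

In this framing, the alterations discussed in the review correspond to distorted likelihoods or values: an anxious agent that overweights threat-consistent observations ends each update with a belief skewed toward the threat state.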
Theoretical studies of the electronic spectrum of tellurium monosulfide.
Chattopadhyaya, Surya; Nath, Abhijit; Das, Kalyan Kumar
2013-08-01
Ab initio based multireference singles and doubles configuration interaction (MRDCI) study including spin-orbit coupling is carried out to explore the electronic structure and spectroscopic properties of the tellurium monosulfide (TeS) molecule by employing relativistic effective core potentials (RECP) and suitable Gaussian basis sets of the constituent atoms. Potential energy curves correlating with the lowest and second dissociation limits are constructed and spectroscopic constants (T_e, r_e, and ω_e) of several low-lying bound Λ-S electronic states up to 3.68 eV of energy are computed. The binding energies and electric dipole moments (μ_e) of the ground and the low-lying excited Λ-S states are also computed. The effects of the spin-orbit coupling on the electronic spectrum of the species are studied in detail and compared with the available data. The transition probabilities of some dipole-allowed and spin-forbidden transitions are computed and radiative lifetimes of some excited states at the lowest vibrational level are estimated from the transition probability data. Copyright © 2013 Elsevier B.V. All rights reserved.
Quantum dynamics of a plane pendulum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leibscher, Monika; Schmidt, Burkhard
A semianalytical approach to the quantum dynamics of a plane pendulum is developed, based on Mathieu functions which appear as stationary wave functions. The time-dependent Schrödinger equation is solved for pendular analogs of coherent and squeezed states of a harmonic oscillator, induced by instantaneous changes of the periodic potential energy function. Coherent pendular states are discussed between the harmonic limit for small displacements and the inverted pendulum limit, while squeezed pendular states are shown to interpolate between vibrational and free rotational motion. In the latter case, full and fractional revivals as well as spatiotemporal structures in the time evolution of the probability densities (quantum carpets) are quantitatively analyzed. Corresponding expressions for the mean orientation are derived in terms of Mathieu functions in time. For periodic double well potentials, different revival schemes and different quantum carpets are found for the even and odd initial states forming the ground tunneling doublet. Time evolution of the mean alignment allows the separation of states with different parity. Implications for external (rotational) and internal (torsional) motion of molecules induced by intense laser fields are discussed.
Huegun, Arrate; Fernández, Mercedes; Peña, Juanjo; Muñoz, María Eugenia; Santamaría, Antxon
2013-01-01
Non-modified Multiwalled Carbon Nanotubes (MWCNT) and polypropylene (PP), in the absence of a compatibilizer, have been chosen to elaborate MWCNT/PP nanocomposites using a simple melt-mixing dispersing method. Calorimetry results indicate little effect of MWCNTs on the crystallinity of PP, revealing little interaction between nanotubes and PP chains, which is compatible with the employed manufacturing procedure. In any case, a hindering of polymer chain motion by MWCNTs is observed in the molten state using oscillatory flow experiments, and a rheological percolation threshold is determined. The percolation limit is not detected by Pressure-Volume-Temperature (PVT) measurements in the melt, because this technique rather detects local motions. Keeping the nanocomposites in the molten state provokes an electrical conductivity increase of several orders of magnitude, but upon subsequent crystallization the conductivity decreases, probably due to a reduction of the ionic conductivity. For a concentration of 2% MWCNTs, at the limit of percolation, the conductivity decreases considerably more, because the percolation network constituted in the molten state is unstable and is destroyed during crystallization. PMID:28348329
Kocher, David C; Apostoaei, A Iulian; Henshaw, Russell W; Hoffman, F Owen; Schubauer-Berigan, Mary K; Stancescu, Daniel O; Thomas, Brian A; Trabalka, John R; Gilbert, Ethel S; Land, Charles E
2008-07-01
The Interactive RadioEpidemiological Program (IREP) is a Web-based, interactive computer code that is used to estimate the probability that a given cancer in an individual was induced by given exposures to ionizing radiation. IREP was developed by a Working Group of the National Cancer Institute and Centers for Disease Control and Prevention, and was adopted and modified by the National Institute for Occupational Safety and Health (NIOSH) for use in adjudicating claims for compensation for cancer under the Energy Employees Occupational Illness Compensation Program Act of 2000. In this paper, the quantity calculated in IREP is referred to as "probability of causation/assigned share" (PC/AS). PC/AS for a given cancer in an individual is calculated on the basis of an estimate of the excess relative risk (ERR) associated with given radiation exposures and the relationship PC/AS = ERR/(ERR + 1). IREP accounts for uncertainties in calculating probability distributions of ERR and PC/AS. An accounting of uncertainty is necessary when decisions about granting claims for compensation for cancer are made on the basis of an estimate of the upper 99% credibility limit of PC/AS to give claimants the "benefit of the doubt." This paper discusses models and methods incorporated in IREP to estimate ERR and PC/AS. Approaches to accounting for uncertainty are emphasized, and limitations of IREP are discussed. Although IREP is intended to provide unbiased estimates of ERR and PC/AS and their uncertainties to represent the current state of knowledge, there are situations described in this paper in which NIOSH, as a matter of policy, makes assumptions that give a higher estimate of the upper 99% credibility limit of PC/AS than other plausible alternatives and, thus, are more favorable to claimants.
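The central relation PC/AS = ERR/(ERR + 1) and the use of an upper 99% credibility limit can be sketched with a small Monte Carlo calculation. The lognormal ERR distribution below is a stand-in assumption; IREP's actual risk models derive the ERR distribution from dose, cancer site, age at exposure, and other inputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in uncertainty distribution for the excess relative risk (ERR);
# the parameters here are illustrative, not IREP's.
err_samples = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=100_000)

# Probability of causation / assigned share for each ERR sample.
pc_as = err_samples / (err_samples + 1.0)

# Claims are adjudicated on the upper 99% credibility limit of PC/AS,
# giving claimants the "benefit of the doubt".
upper_99 = np.percentile(pc_as, 99)
```

Because PC/AS is a monotone map of ERR into (0, 1), the upper 99% credibility limit of PC/AS can also be obtained by transforming the 99th percentile of the ERR distribution directly.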
Migration intentions and illicit substance use among youth in central Mexico.
Marsiglia, Flavio Francisco; Kulis, Stephen; Hoffman, Steven; Calderón-Tena, Carlos Orestes; Becerra, David; Alvarez, Diana
2011-01-01
This study explored intentions to emigrate and substance use among youth (ages 14-24) from a central Mexico state with high emigration rates. Questionnaires were completed in 2007 by 702 students attending a probability sample of alternative secondary schools serving remote or poor communities. Linear and logistic regression analyses indicated that stronger intentions to emigrate predicted greater access to drugs, drug offers, and use of illicit drugs (marijuana, cocaine, inhalants), but not alcohol or cigarettes. Results are related to the healthy migrant theory and its applicability to youth with limited educational opportunities. The study's limitations are noted.
Gaussification and entanglement distillation of continuous-variable systems: a unifying picture.
Campbell, Earl T; Eisert, Jens
2012-01-13
Distillation of entanglement using only Gaussian operations is an important primitive in quantum communication, quantum repeater architectures, and distributed quantum computing. Existing distillation protocols for continuous degrees of freedom are only known to converge to a Gaussian state when measurements yield precisely the vacuum outcome. In sharp contrast, non-Gaussian states can be deterministically converted into Gaussian states while preserving their second moments, albeit by usually reducing their degree of entanglement. In this work, based on a novel instance of a noncommutative central limit theorem, we introduce a picture general enough to encompass the known protocols leading to Gaussian states, and new classes of protocols including multipartite distillation. This gives the experimental option of balancing the merits of success probability against entanglement produced.
Optimal sequential measurements for bipartite state discrimination
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.; Weir, Graeme
2017-05-01
State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
Luo, Xiaosheng; Xu, Liufang; Han, Bo; Wang, Jin
2017-09-01
Using the fission yeast cell cycle as an example, we uncovered that the non-equilibrium network dynamics and global properties are determined by two essential features: the potential landscape and the flux landscape. These two landscapes can be quantified through the decomposition of the dynamics into a detailed-balance-preserving part and a detailed-balance-breaking non-equilibrium part. While the funneled potential landscape is often crucial for the stability of single-attractor networks, we have uncovered that the funneled flux landscape is crucial for the emergence and maintenance of the stable limit cycle oscillation flow. This provides a new interpretation of the origin of limit cycle oscillations: many cycles and loops flow through the state space and form the flux landscape, each cycle carrying a probability flux through its loop. The limit cycle emerges when one loop stands out and carries significantly more probability flux than the other loops. We explore the robustness ratio (RR), the gap or steepness versus the averaged variation or roughness of the landscape, which quantifies the degree of funneling of the underlying potential and flux landscapes. These two landscapes complement each other: one is crucial for the stability of the states on the cycle, and the other is crucial for the stability of the flow along the cycle. The flux is directly related to the speed of the cell cycle. This allows us to identify the key factors and structural elements of the network that determine the stability, speed and robustness of the fission yeast cell cycle oscillations. The non-equilibriumness, characterized by the degree of detailed balance breaking from the energy pump and quantified by the flux, is the cause of the energy dissipation for initiating and sustaining the replications essential for the origin and evolution of life. Regulating the cell cycle speed is crucial for designing prevention and curing strategies for cancer.
Examining Clandestine Social Networks for the Presence of Non-Random Structure
2007-03-01
... illustrating the potential use for the defense community and others, in particular the United States Air Force and the wider Department of Defense (thesis AFIT/GOR/ENS/07-24). ... Leinhardt successfully found a way to test dyadic ties for probability of existence. Unfortunately, the p1 model was limited to only being able to ...
Influx: A Tool and Framework for Reasoning under Uncertainty
2015-09-01
Not all types of problems are naturally suited to being entirely modelled and implemented within Influx. ... Development pertaining to the implementation of the reasoning tool and specific applications is not included in this document. ... A probability is supposed to reflect the subjective belief of an agent for the problem at hand, based on its experience and/or current state.
Quantum teleportation scheme by selecting one of multiple output ports
NASA Astrophysics Data System (ADS)
Ishizaka, Satoshi; Hiroshima, Tohya
2009-04-01
The scheme of quantum teleportation, where Bob has multiple (N) output ports and obtains the teleported state by simply selecting one of the N ports, is thoroughly studied. We consider both the deterministic version and probabilistic version of the teleportation scheme aiming to teleport an unknown state of a qubit. Moreover, we consider two cases for each version: (i) the state employed for the teleportation is fixed to a maximally entangled state and (ii) the state is also optimized as well as Alice’s measurement. We analytically determine the optimal protocols for all the four cases and show the corresponding optimal fidelity or optimal success probability. All these protocols can achieve the perfect teleportation in the asymptotic limit of N→∞ . The entanglement properties of the teleportation scheme are also discussed.
NASA Astrophysics Data System (ADS)
Zoback, Mark
2017-04-01
In this talk, I will address the likelihood for fault slip to occur in response to fluid injection and the likely magnitude of potentially induced earthquakes. First, I will review a methodology that applies Quantitative Risk Assessment to calculate the probability of a fault exceeding Mohr-Coulomb slip criteria. The methodology utilizes information about the local state of stress, fault strike and dip and the estimated pore pressure perturbation to predict the probability of fault slip as a function of time. Uncertainties in the input parameters are utilized to assess the probability of slip on known faults due to the predictable pore pressure perturbations. Application to known faults in Oklahoma has been presented by Walsh and Zoback (Geology, 2016). This has been updated with application to the previously unknown faults associated with M >5 earthquakes in the state. Second, I will discuss two geologic factors that limit the magnitudes of earthquakes (either natural or induced) in sedimentary sequences. Fundamentally, the layered nature of sedimentary rocks means that seismogenic fault slip will be limited by i) the velocity strengthening frictional properties of clay- and carbonate-rich rock sequences (Kohli and Zoback, JGR, 2013; in prep) and ii) viscoplastic stress relaxation in rocks with similar composition (Sone and Zoback, Geophysics, 2013a, b; IJRM, 2014; Rassouli and Zoback, in prep). In the former case, if fault slip is triggered in these types of rocks, it would likely be aseismic due to the velocity strengthening behavior of faults. In the latter case, the stress relaxation could result in rupture termination in viscoplastic formations. In both cases, the stratified nature of sedimentary rock sequences could limit the magnitude of potentially induced earthquakes.
Moreover, even when injection into sedimentary rocks initiates fault slip, earthquakes large enough to cause damage will usually require slip on faults sufficiently large that they extend into basement. This suggests that an important criterion for large-scale CO2 sequestration projects is that the injection zone is isolated from crystalline basement rocks by viscoplastic shales to prevent rupture propagation from extending down into basement.
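The Quantitative Risk Assessment step described in this abstract, the probability of a fault exceeding the Mohr-Coulomb slip criterion under uncertain inputs, can be sketched as a Monte Carlo calculation. All stress values and distributions below are illustrative assumptions, not the Oklahoma inputs used by Walsh and Zoback.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Illustrative uncertain inputs (MPa): resolved shear and normal stress on
# the fault plane, injection-induced pore pressure rise, and friction.
tau     = rng.normal(30.0, 3.0, n)    # shear stress resolved on the fault
sigma_n = rng.normal(70.0, 5.0, n)    # total normal stress on the fault
dp      = rng.uniform(0.0, 10.0, n)   # pore pressure perturbation from injection
p0      = 28.0                        # ambient pore pressure
mu      = rng.normal(0.6, 0.05, n)    # coefficient of friction

# Mohr-Coulomb criterion: slip when shear stress reaches the frictional
# resistance on the effective normal stress.
slips = tau >= mu * (sigma_n - p0 - dp)
p_slip = slips.mean()   # fraction of realizations that exceed the criterion
```

In the actual methodology the pore pressure perturbation is time dependent, so repeating this calculation with dp(t) yields the probability of slip as a function of time.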
Using hidden Markov models to align multiple sequences.
Mount, David W
2009-07-01
A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
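The multiplication of state (emission) and transition probabilities described above can be sketched for a toy profile HMM. The three match states, their symbol frequencies, and the transition values are invented for illustration; a real profile HMM also carries insert and delete states.

```python
import math

# Toy profile HMM: three match states, each a frequency distribution of
# symbols (emission probabilities) estimated from an alignment column.
emissions = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.6, "G": 0.2, "T": 0.1},
    {"A": 0.2, "C": 0.2, "G": 0.5, "T": 0.1},
]
# Transition probabilities into each match state (begin->M1, M1->M2, M2->M3).
transitions = [1.0, 0.9, 0.9]

def path_probability(sequence):
    """Multiply state and transition probabilities along the match path."""
    p = 1.0
    for t, state, symbol in zip(transitions, emissions, sequence):
        p *= t * state[symbol]
    return p

p = path_probability("ACG")   # sequence matching the model's preferences
log_p = math.log(p)           # logs avoid underflow on realistic lengths
```

A sequence that fits the column frequencies scores far higher than one that does not, which is exactly how an HMM discriminates members of a family from non-members.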
Transition probabilities of health states for workers in Malaysia using a Markov chain model
NASA Astrophysics Data System (ADS)
Samsuddin, Shamshimah; Ismail, Noriszura
2017-04-01
The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia, using a Markov chain model. Our study uses four health states (active, temporary disability, permanent disability and death) based on data collected from longitudinal studies of workers in Malaysia over 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely to have higher transition probabilities to any health state than female employees. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
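The four-state chain described above can be sketched as a transition matrix raised to a power. The probabilities below are invented placeholders, not the Malaysian estimates, which vary by age and gender.

```python
import numpy as np

# States: 0 active, 1 temporary disability, 2 permanent disability, 3 death.
# Hypothetical one-year transition probabilities (each row sums to 1).
P = np.array([
    [0.95, 0.03, 0.015, 0.005],
    [0.60, 0.30, 0.08,  0.02 ],
    [0.00, 0.00, 0.97,  0.03 ],
    [0.00, 0.00, 0.00,  1.00 ],   # death is an absorbing state
])

def forecast(start_state, years):
    """Distribution over health states after `years` one-year steps."""
    dist = np.zeros(4)
    dist[start_state] = 1.0
    return dist @ np.linalg.matrix_power(P, years)

dist_5y = forecast(start_state=0, years=5)   # worker starting active
```

Conditioning the matrix on age and gender, as the study does, amounts to estimating a separate P for each subgroup.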
Search for b→u transitions in B±→[K∓π±π0]DK± decays
NASA Astrophysics Data System (ADS)
Lees, J. P.; Poireau, V.; Tisserand, V.; Garra Tico, J.; Grauges, E.; Martinelli, M.; Milanes, D. A.; Palano, A.; Pappagallo, M.; Eigen, G.; Stugu, B.; Sun, L.; Brown, D. N.; Kerth, L. T.; Kolomensky, Yu. G.; Lynch, G.; Koch, H.; Schroeder, T.; Asgeirsson, D. J.; Hearty, C.; Mattison, T. S.; McKenna, J. A.; Khan, A.; Blinov, V. E.; Buzykaev, A. R.; Druzhinin, V. P.; Golubev, V. B.; Kravchenko, E. A.; Onuchin, A. P.; Serednyakov, S. I.; Skovpen, Yu. I.; Solodov, E. P.; Todyshev, K. Yu.; Yushkov, A. N.; Bondioli, M.; Curry, S.; Kirkby, D.; Lankford, A. J.; Mandelkern, M.; Stoker, D. P.; Atmacan, H.; Gary, J. W.; Liu, F.; Long, O.; Vitug, G. M.; Campagnari, C.; Hong, T. M.; Kovalskyi, D.; Richman, J. D.; West, C. A.; Eisner, A. M.; Kroseberg, J.; Lockman, W. S.; Martinez, A. J.; Schalk, T.; Schumm, B. A.; Seiden, A.; Cheng, C. H.; Doll, D. A.; Echenard, B.; Flood, K. T.; Hitlin, D. G.; Ongmongkolkul, P.; Porter, F. C.; Rakitin, A. Y.; Andreassen, R.; Dubrovin, M. S.; Meadows, B. T.; Sokoloff, M. D.; Bloom, P. C.; Ford, W. T.; Gaz, A.; Nagel, M.; Nauenberg, U.; Smith, J. G.; Wagner, S. R.; Ayad, R.; Toki, W. H.; Spaan, B.; Kobel, M. J.; Schubert, K. R.; Schwierz, R.; Bernard, D.; Verderi, M.; Clark, P. J.; Playfer, S.; Watson, J. E.; Bettoni, D.; Bozzi, C.; Calabrese, R.; Cibinetto, G.; Fioravanti, E.; Garzia, I.; Luppi, E.; Munerato, M.; Negrini, M.; Piemontese, L.; Baldini-Ferroli, R.; Calcaterra, A.; de Sangro, R.; Finocchiaro, G.; Nicolaci, M.; Pacetti, S.; Patteri, P.; Peruzzi, I. M.; Piccolo, M.; Rama, M.; Zallo, A.; Contri, R.; Guido, E.; Lo Vetere, M.; Monge, M. R.; Passaggio, S.; Patrignani, C.; Robutti, E.; Bhuyan, B.; Prasad, V.; Lee, C. L.; Morii, M.; Edwards, A. J.; Adametz, A.; Marks, J.; Uwer, U.; Bernlochner, F. U.; Ebert, M.; Lacker, H. M.; Lueck, T.; Dauncey, P. D.; Tibbetts, M.; Behera, P. K.; Mallik, U.; Chen, C.; Cochran, J.; Crawley, H. B.; Meyer, W. T.; Prell, S.; Rosenberg, E. I.; Rubin, A. E.; Gritsan, A. V.; Guo, Z. 
J.; Arnaud, N.; Davier, M.; Derkach, D.; Grosdidier, G.; Le Diberder, F.; Lutz, A. M.; Malaescu, B.; Roudeau, P.; Schune, M. H.; Stocchi, A.; Wormser, G.; Lange, D. J.; Wright, D. M.; Bingham, I.; Chavez, C. A.; Coleman, J. P.; Fry, J. R.; Gabathuler, E.; Hutchcroft, D. E.; Payne, D. J.; Touramanis, C.; Bevan, A. J.; di Lodovico, F.; Sacco, R.; Sigamani, M.; Cowan, G.; Paramesvaran, S.; Brown, D. N.; Davis, C. L.; Denig, A. G.; Fritsch, M.; Gradl, W.; Hafner, A.; Prencipe, E.; Alwyn, K. E.; Bailey, D.; Barlow, R. J.; Jackson, G.; Lafferty, G. D.; Cenci, R.; Hamilton, B.; Jawahery, A.; Roberts, D. A.; Simi, G.; Dallapiccola, C.; Cowan, R.; Dujmic, D.; Sciolla, G.; Lindemann, D.; Patel, P. M.; Robertson, S. H.; Schram, M.; Biassoni, P.; Lazzaro, A.; Lombardo, V.; Palombo, F.; Stracka, S.; Cremaldi, L.; Godang, R.; Kroeger, R.; Sonnek, P.; Summers, D. J.; Nguyen, X.; Taras, P.; de Nardo, G.; Monorchio, D.; Onorato, G.; Sciacca, C.; Raven, G.; Snoek, H. L.; Jessop, C. P.; Knoepfel, K. J.; Losecco, J. M.; Wang, W. F.; Honscheid, K.; Kass, R.; Brau, J.; Frey, R.; Sinev, N. B.; Strom, D.; Torrence, E.; Feltresi, E.; Gagliardi, N.; Margoni, M.; Morandin, M.; Posocco, M.; Rotondo, M.; Simonetto, F.; Stroili, R.; Ben-Haim, E.; Bomben, M.; Bonneaud, G. R.; Briand, H.; Calderini, G.; Chauveau, J.; Hamon, O.; Leruste, Ph.; Marchiori, G.; Ocariz, J.; Sitt, S.; Biasini, M.; Manoni, E.; Rossi, A.; Angelini, C.; Batignani, G.; Bettarini, S.; Carpinelli, M.; Casarosa, G.; Cervelli, A.; Forti, F.; Giorgi, M. A.; Lusiani, A.; Neri, N.; Oberhof, B.; Paoloni, E.; Perez, A.; Rizzo, G.; Walsh, J. J.; Lopes Pegna, D.; Lu, C.; Olsen, J.; Smith, A. J. S.; Telnov, A. V.; Anulli, F.; Cavoto, G.; Faccini, R.; Ferrarotto, F.; Ferroni, F.; Gaspero, M.; Li Gioi, L.; Mazzoni, M. A.; Piredda, G.; Buenger, C.; Hartmann, T.; Leddig, T.; Schröder, H.; Waldi, R.; Adye, T.; Olaiya, E. O.; Wilson, F. F.; Emery, S.; Hamel de Monchenault, G.; Vasseur, G.; Yèche, Ch.; Aston, D.; Bard, D. 
J.; Bartoldus, R.; Benitez, J. F.; Cartaro, C.; Convery, M. R.; Dorfan, J.; Dubois-Felsmann, G. P.; Dunwoodie, W.; Field, R. C.; Franco Sevilla, M.; Fulsom, B. G.; Gabareen, A. M.; Graham, M. T.; Grenier, P.; Hast, C.; Innes, W. R.; Kelsey, M. H.; Kim, H.; Kim, P.; Kocian, M. L.; Leith, D. W. G. S.; Lewis, P.; Li, S.; Lindquist, B.; Luitz, S.; Luth, V.; Lynch, H. L.; Macfarlane, D. B.; Muller, D. R.; Neal, H.; Nelson, S.; Ofte, I.; Perl, M.; Pulliam, T.; Ratcliff, B. N.; Roodman, A.; Salnikov, A. A.; Santoro, V.; Schindler, R. H.; Snyder, A.; Su, D.; Sullivan, M. K.; Va'Vra, J.; Wagner, A. P.; Weaver, M.; Wisniewski, W. J.; Wittgen, M.; Wright, D. H.; Wulsin, H. W.; Yarritu, A. K.; Young, C. C.; Ziegler, V.; Park, W.; Purohit, M. V.; White, R. M.; Wilson, J. R.; Randle-Conde, A.; Sekula, S. J.; Bellis, M.; Burchat, P. R.; Miyashita, T. S.; Alam, M. S.; Ernst, J. A.; Gorodeisky, R.; Guttman, N.; Peimer, D. R.; Soffer, A.; Lund, P.; Spanier, S. M.; Eckmann, R.; Ritchie, J. L.; Ruland, A. M.; Schilling, C. J.; Schwitters, R. F.; Wray, B. C.; Izen, J. M.; Lou, X. C.; Bianchi, F.; Gamba, D.; Lanceri, L.; Vitale, L.; Lopez-March, N.; Martinez-Vidal, F.; Oyanguren, A.; Ahmed, H.; Albert, J.; Banerjee, Sw.; Choi, H. H. F.; King, G. J.; Kowalewski, R.; Lewczuk, M. J.; Lindsay, C.; Nugent, I. M.; Roney, J. M.; Sobie, R. J.; Gershon, T. J.; Harrison, P. F.; Latham, T. E.; Puccio, E. M. T.; Band, H. R.; Dasu, S.; Pan, Y.; Prepost, R.; Vuosalo, C. O.; Wu, S. L.
2011-07-01
We present a study of the decays B± → DK± with D mesons reconstructed in the K+π−π0 or K−π+π0 final states, where D indicates a D0 or a D̄0 meson. Using a sample of 474×10^6 BB̄ pairs collected with the BABAR detector at the PEP-II asymmetric-energy e+e− collider at SLAC, we measure the ratios R± ≡ Γ(B± → [K∓π±π0]_D K±) / Γ(B± → [K±π∓π0]_D K±). We obtain R+ = (5 +12/−10 (stat) +2/−4 (syst)) × 10^−3 and R− = (12 +12/−10 (stat) +3/−5 (syst)) × 10^−3, from which we extract the upper limits at 90% probability: R+ < 23×10^−3 and R− < 29×10^−3. Using these measurements, we obtain an upper limit on the ratio rB of the magnitudes of the b→u and b→c amplitudes: rB < 0.13 at 90% probability.
Banan, Zoya; Gernand, Jeremy M
2018-04-18
Shale gas has become an important strategic energy source with considerable potential economic benefits and the potential to reduce greenhouse gas emissions insofar as it displaces coal use. However, there still exist environmental health risks caused by emissions from exploration and production activities. In the United States, states and localities have set different minimum setback policies to reduce the health risks corresponding to the emissions from these locations, but it is unclear whether these policies are sufficient. This study uses a Gaussian plume model to evaluate the probability of exceeding EPA concentration limits for PM2.5 at various locations around a generic wellsite in the Marcellus shale region. A set of meteorological data monitored at ten different stations across the Marcellus shale gas region in Pennsylvania during 2015 serves as input to this model. Results indicate that even though the current setback distance policy in Pennsylvania (500 ft or 152.4 m) might be effective in some cases, exposure limit exceedance occurs frequently at this distance with higher-than-average emission rates and/or a greater number of wells per wellpad. Setback distances should be 736 m to ensure compliance with the daily average concentration limit for PM2.5, and a function of the number of wells to comply with the annual average PM2.5 exposure standard. Marcellus Shale gas production is known as a significant source of criteria pollutants, and studies show that the current setback distance in Pennsylvania is not adequate to protect residents from exceeding the established limits. Even a setback distance effective for the annual exposure limit may not be adequate for the daily limit. The probability of exceeding the annual limit increases with the number of wells per site. We use a probabilistic dispersion model to introduce a technical basis for selecting appropriate setback distances.
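The Gaussian plume screening step behind such a setback analysis can be sketched as follows. The emission rate, release height, and linear dispersion coefficients are illustrative assumptions; a real assessment would use measured meteorology and stability-class dispersion curves.

```python
import math

def plume_concentration(q, u, x, y, z, h, a=0.08, b=0.06):
    """Gaussian plume concentration (g/m^3) at a receptor.

    q: emission rate (g/s); u: wind speed (m/s); (x, y, z): receptor
    position relative to the source (m, x downwind); h: effective release
    height (m). sigma_y and sigma_z grow linearly with downwind distance
    here, a crude stand-in for stability-class dispersion curves.
    """
    sigma_y = a * x
    sigma_z = b * x
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration directly downwind at the two setback
# distances discussed in the abstract (152.4 m and 736 m).
c_152 = plume_concentration(q=0.1, u=3.0, x=152.4, y=0.0, z=0.0, h=5.0)
c_736 = plume_concentration(q=0.1, u=3.0, x=736.0, y=0.0, z=0.0, h=5.0)
```

Sampling wind speed and direction from the monitored meteorological data and counting how often the computed concentration exceeds the PM2.5 limit gives the exceedance probability the study reports.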
Efficient teleportation between remote single-atom quantum memories.
Nölleke, Christian; Neuzner, Andreas; Reiserer, Andreas; Hahn, Carolin; Rempe, Gerhard; Ritter, Stephan
2013-04-05
We demonstrate teleportation of quantum bits between two single atoms in distant laboratories. Using a time-resolved photonic Bell-state measurement, we achieve a teleportation fidelity of (88.0 ± 1.5)%, largely determined by our entanglement fidelity. The low photon collection efficiency in free space is overcome by trapping each atom in an optical cavity. The resulting success probability of 0.1% is almost 5 orders of magnitude larger than in previous experiments with remote material qubits. It is mainly limited by photon propagation and detection losses and can be enhanced with a cavity-based deterministic Bell-state measurement.
Welfare reform, labor supply, and health insurance in the immigrant population.
Borjas, George J
2003-11-01
Although the 1996 welfare reform legislation limited the eligibility of immigrant households to receive assistance, many states chose to protect their immigrant populations by offering state-funded aid to these groups. I exploit these changes in eligibility rules to examine the link between the welfare cutbacks and health insurance coverage in the immigrant population. The data reveal that the cutbacks in the Medicaid program did not reduce health insurance coverage rates among targeted immigrants. The immigrants responded by increasing their labor supply, thereby raising the probability of being covered by employer-sponsored health insurance.
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuya J.; Sughiyama, Yuki
2017-07-01
Adaptation in a fluctuating environment is a process of fueling environmental information to gain fitness. Living systems have gradually developed strategies for adaptation from random and passive diversification of the phenotype to more proactive decision making, in which environmental information is sensed and exploited more actively and effectively. Understanding the fundamental relation between fitness and information is therefore crucial to clarify the limits and universal properties of adaptation. In this work, we elucidate the underlying stochastic and information-thermodynamic structure in this process, by deriving causal fluctuation relations (FRs) of fitness and information. Combined with a duality between phenotypic and environmental dynamics, the FRs reveal the limit of fitness gain, the relation of time reversibility with the achievability of the limit, and the possibility and condition for gaining excess fitness due to environmental fluctuation. The loss of fitness due to causal constraints and the limited capacity of real organisms is shown to be the difference between time-forward and time-backward path probabilities of phenotypic and environmental dynamics. Furthermore, the FRs generalize the concept of the evolutionary stable state (ESS) for fluctuating environment by giving the probability that the optimal strategy on average can be invaded by a suboptimal one owing to rare environmental fluctuation. These results clarify the information-thermodynamic structures in adaptation and evolution.
NASA Astrophysics Data System (ADS)
Li, Hechao
An accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms like the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation process before microstructural quantification can be conducted. This can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information in the form of spatial correlation functions from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data effectively, indicating high efficiency in the use of limited structural information.
Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of energy landscape associated with different number of projections. Ground-state degeneracy of a microstructure is found to decrease with increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection number.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories.
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
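For the birth and death model, the truncation-with-reflecting-boundary idea can be sketched directly (a minimal NumPy illustration under assumed rate constants, not the authors' code): truncate the state space at a maximum copy number, impose a reflecting boundary, solve for the steady state, and read off the boundary-state probability as the truncation-error indicator.

```python
import numpy as np

def birth_death_steady_state(k_birth, k_death, n_max):
    """Steady state of a birth-death dCME truncated at copy number n_max,
    with a reflecting boundary (no birth transition out of state n_max)."""
    n = n_max + 1
    A = np.zeros((n, n))  # infinitesimal generator; columns sum to zero
    for i in range(n):
        if i < n_max:
            A[i + 1, i] += k_birth        # birth: i -> i + 1
            A[i, i] -= k_birth
        if i > 0:
            A[i - 1, i] += k_death * i    # death: i -> i - 1
            A[i, i] -= k_death * i
    # Solve A p = 0 together with sum(p) = 1 as a bordered least-squares system
    M = np.vstack([A, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

p = birth_death_steady_state(k_birth=5.0, k_death=1.0, n_max=30)
boundary_mass = p[-1]  # probability on the reflecting boundary state
```

With these rates the exact steady state is Poisson with mean 5, so the boundary probability at n = 30 is negligible, consistent with the boundary-probability error bound described above.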
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Youfang; Terebus, Anna; Liang, Jie
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. 
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-04-22
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. 
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.
A Framework to Understand Extreme Space Weather Event Probability.
Jonas, Seth; Fronczyk, Kassandra; Pratt, Lucas M
2018-03-12
An extreme space weather event has the potential to disrupt or damage infrastructure systems and technologies that many societies rely on for economic and social well-being. Space weather events occur regularly, but extreme events are less frequent, with a small number of historical examples over the last 160 years. During the past decade, published works have (1) examined the physical characteristics of the extreme historical events and (2) discussed the probability or return rate of select extreme geomagnetic disturbances, including the 1859 Carrington event. Here we present initial findings on a unified framework approach to visualize space weather event probability, using a Bayesian model average, in the context of historical extreme events. We present disturbance storm time (Dst) probability (a proxy for geomagnetic disturbance intensity) across multiple return periods and discuss parameters of interest to policymakers and planners in the context of past extreme space weather events. We discuss the current state of these analyses, their utility to policymakers and planners, the current limitations when compared to other hazards, and several gaps that need to be filled to enhance space weather risk assessments. © 2018 Society for Risk Analysis.
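The relationship between return period and exceedance probability that underlies such analyses can be sketched in a few lines (a hypothetical illustration; the once-per-century rate below is an assumed figure, not a result from this work), assuming storms above a given Dst threshold arrive as a Poisson process.

```python
import math

def annual_exceedance_probability(rate_per_year):
    """Probability of at least one exceedance in a year, assuming events
    above the threshold arrive as a Poisson process with the given rate."""
    return 1.0 - math.exp(-rate_per_year)

def return_period(rate_per_year):
    """Average number of years between exceedances (the return period)."""
    return 1.0 / rate_per_year

# Hypothetical example: if storms above some |Dst| level occur on average
# once per century, the annual and decadal exceedance probabilities are:
p_annual = annual_exceedance_probability(1.0 / 100.0)
p_decade = 1.0 - math.exp(-10.0 / 100.0)
```

Framing event likelihood this way puts space weather on the same footing as other hazards (floods, earthquakes) that policymakers already assess via return-period curves.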
Synchrony in Joint Action Is Directed by Each Participant’s Motor Control System
Noy, Lior; Weiser, Netta; Friedman, Jason
2017-01-01
In this work, we ask how the probability of achieving synchrony in joint action is affected by the choice of motion parameters of each individual. We use the mirror game paradigm to study how changes in the leader's motion parameters, specifically frequency and peak velocity, affect the probability of entering the state of co-confidence (CC) motion: a dyadic state of synchronized, smooth and co-predictive motions. In order to systematically study this question, we used a one-person version of the mirror game, where the participant mirrored piece-wise rhythmic movements produced by a computer on a graphics tablet. We systematically varied the frequency and peak velocity of the movements to determine how these parameters affect the likelihood of synchronized joint action. To assess synchrony in the mirror game we used the previously developed marker of CC motions: smooth, jitter-less and synchronized motions indicative of co-predictive control. We found that when mirroring movements with low frequencies (i.e., long duration movements), the participants never showed CC, and as the frequency of the stimuli increased, the probability of observing CC also increased. This finding is discussed in the framework of motor control studies showing an upper limit on the duration of smooth motion. We confirmed the relationship between motion parameters and the probability to perform CC with three sets of data of open-ended two-player mirror games. These findings demonstrate that when performing movements together, there are optimal movement frequencies to use in order to maximize the possibility of entering a state of synchronized joint action. It also shows that the ability to perform synchronized joint action is constrained by the properties of our motor control systems. PMID:28443047
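The CC marker combines two measurable ingredients, synchrony and smoothness, and a crude version can be sketched as follows (a hypothetical illustration with invented thresholds and synthetic traces, not the published marker): flag a segment as CC-like when the two velocity traces are highly correlated and the follower's acceleration rarely changes sign.

```python
import numpy as np

def cc_like(v_leader, v_follower, min_corr=0.9, max_jitter=2):
    """Crude marker for a co-confident (CC)-like segment: the two velocity
    traces are highly correlated (synchrony) and the follower's acceleration
    rarely changes sign (smooth, jitter-less motion)."""
    r = np.corrcoef(v_leader, v_follower)[0, 1]
    accel = np.diff(v_follower)
    sign_changes = int(np.sum(np.signbit(accel[:-1]) != np.signbit(accel[1:])))
    return bool(r >= min_corr and sign_changes <= max_jitter)

# Hypothetical velocity traces: one smooth, tightly mirrored follower and
# one jittery follower, both tracking the same half-sine leader profile
t = np.linspace(0.0, np.pi, 100)
v_lead = np.sin(t)
v_smooth = np.sin(t) + 0.01 * np.sin(5.0 * t)
v_jittery = np.sin(t) + 0.15 * np.random.default_rng(1).standard_normal(100)
```

Only the smooth follower trace satisfies both criteria; the jittery one fails on the smoothness count even when its overall correlation with the leader is high.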
Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain
Dai, Yonghui; Han, Dongmei; Dai, Weihui
2014-01-01
The stock index reflects the fluctuation of the stock market. For a long time, there has been a lot of research on stock index forecasting. However, traditional methods have difficulty achieving ideal precision in a dynamic market because of the influence of many factors, such as the economic situation, policy changes, and emergency events. Therefore, approaches based on adaptive modeling and conditional probability transfer have attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network and a Markov chain, together with its modeling and computing technology. The method consists of initial forecasting by the improved BP neural network, division of the Markov state regions, computation of the state transition probability matrix, and adjustment of the prediction. Results of the empirical study show that this method can achieve high accuracy in stock index prediction, and it could provide a good reference for investment in the stock market. PMID:24782659
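The Markov-chain adjustment step can be sketched independently of the network (a minimal hypothetical illustration: the residuals and state thresholds below are invented): classify each residual of a baseline forecaster into a state, estimate the state transition matrix by counting, and correct the next forecast by the mean residual of the most probable next state.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a Markov transition matrix by counting observed transitions."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0  # avoid division by zero for unvisited states
    return counts / rows

# Hypothetical residuals of a baseline forecaster, divided into 3 states:
# 0 = under-prediction, 1 = near-zero error, 2 = over-prediction
residuals = np.array([-1.2, -0.8, 0.1, 0.9, 1.1, 0.2, -0.9, 0.0, 1.3, 1.0])
edges = [-0.5, 0.5]                     # state-region boundaries
states = np.digitize(residuals, edges)  # values in {0, 1, 2}
P = transition_matrix(states, 3)

# Adjustment: from the current state, pick the most probable next state
# and correct the next raw forecast by that state's mean residual.
next_state = int(np.argmax(P[states[-1]]))
correction = residuals[states == next_state].mean()
```

The adjusted forecast is then the raw network output plus `correction`; the state regions and their number are tuning choices in this sketch.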
Decision analysis with approximate probabilities
NASA Technical Reports Server (NTRS)
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied: some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
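Two of the compared criteria can be sketched concretely (a hypothetical three-state, two-action example with an invented payoff table, not the paper's simulation setup): rounding each probability to the nearest tenth constrains the true value to an interval, the Midpoint criterion evaluates expected payoff at the interval midpoints, and Maximin picks the action with the best worst-case expected payoff over the feasible probability set.

```python
# Hypothetical 2-action, 3-state payoff table (rows: actions, columns: states)
payoffs = [[10.0, 2.0, 0.0],
           [ 5.0, 5.0, 4.0]]

# Each state probability was rounded to the nearest tenth, so the true
# value lies within +/- 0.05 of it; the rounded vector here is (0.5, 0.3, 0.2)
bounds = [(0.45, 0.55), (0.25, 0.35), (0.15, 0.25)]

def midpoint_action(payoffs, bounds):
    """Best action by expected payoff at the interval midpoints."""
    mids = [(lo + hi) / 2.0 for lo, hi in bounds]
    evs = [sum(w * p for w, p in zip(row, mids)) for row in payoffs]
    return evs.index(max(evs))

def maximin_action(payoffs, bounds, grid=50):
    """Best action by worst-case expected payoff over the feasible set
    (box constraints from rounding plus p0 + p1 + p2 = 1), found by
    scanning a grid over the first two probabilities."""
    best_action, best_worst = None, float("-inf")
    for action, row in enumerate(payoffs):
        worst = float("inf")
        for i in range(grid + 1):
            p0 = bounds[0][0] + i * (bounds[0][1] - bounds[0][0]) / grid
            for j in range(grid + 1):
                p1 = bounds[1][0] + j * (bounds[1][1] - bounds[1][0]) / grid
                p2 = 1.0 - p0 - p1
                if bounds[2][0] - 1e-12 <= p2 <= bounds[2][1] + 1e-12:
                    ev = row[0] * p0 + row[1] * p1 + row[2] * p2
                    worst = min(worst, ev)
        if worst > best_worst:
            best_worst, best_action = worst, action
    return best_action
```

In this particular example the two criteria agree; the paper's point is that across many simulated problems and information levels their performance diverges.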
Defense Conversion Redirecting R and D
1993-05-01
…agree that maglev or high-speed rail systems are probably limited to a few parts of the United States… members: aerospace companies, utilities, universities, small high-tech… 200 years. Even maglev trains, long the favorite technology of the future… a 3-year period for France's TGV with a manufacturing workforce… population density. Maglev might contribute to the advance of some technologies… preliminary…
NASA Astrophysics Data System (ADS)
Poddubny, Alexander N.; Sukhorukov, Andrey A.
2015-09-01
The practical development of quantum plasmonic circuits incorporating non-classical interference [1] and sources of entangled states calls for a versatile quantum theoretical framework which can fully describe the generation and detection of entangled photons and plasmons. However, the majority of presently used theoretical approaches are limited to toy models assuming loss-less and nondispersive elements or including just a few resonant modes. Here, we present a rigorous Green function approach describing entangled photon-plasmon state generation through spontaneous wave mixing in realistic metal-dielectric nanostructures. Our approach is based on the local Huttner-Barnett quantization scheme [2], which enables problem formulation in terms of a Hermitian Hamiltonian where the losses and dispersion are fully encoded in the electromagnetic Green functions. Hence, the problem can be addressed by the standard quantum mechanical perturbation theory, overcoming mathematical difficulties associated with other quantization schemes. We derive explicit expressions with clear physical meaning for the spatially dependent two-photon detection probability, single-photon detection probability and single-photon density matrix. In the limiting case of low-loss nondispersive waveguides our approach reproduces the previous results [3,4]. Importantly, our technique is far more general and can quantitatively describe generation and detection of spatially-entangled photons in arbitrary metal-dielectric structures taking into account actual losses and dispersion. This is essential to perform the design and optimization of plasmonic structures for generation and control of quantum entangled states. [1] J.S. Fakonas, H. Lee, Y.A. Kelaita and H.A. Atwater, Nature Photonics 8, 317 (2014) [2] W. Vogel and D.-G. Welsch, Quantum Optics, Wiley (2006). [3] D.A. Antonosyan, A.S. Solntsev and A.A. Sukhorukov, Phys. Rev. A 90 043845 (2014) [4] L.-G. Helt, J.E. Sipe and M.J. Steel, arXiv:1407.4219
Tveito, Torill H.; Reme, Silje E.; Eriksen, Hege R.
2017-01-01
Background Disability benefits and sick leave benefits represent huge costs in western countries. The pathways and prognostic factors for receiving these benefits seen in recent years are complex and manifold. We postulate that mental health and IQ, both alone and concurrently, influence subsequent employment status, disability benefits and mortality. Methods A cohort of 918 888 Norwegian men was followed for 16 years from the age of 20 to 55. Risks of receiving health benefits, emigration, and mortality were studied. Indicators of mental health and IQ at military enrolment were used as potential risk factors. Multi-state models were used to analyze transitions between employment, sick leave, time limited benefits, disability benefits, emigration, and mortality. Results During follow up, there were a total of 3 908 397 transitions between employment and different health benefits, plus 12 607 deaths. Men with low IQ (below 85), without any mental health problems at military enrolment, had an increased probability of receiving disability benefits before the age of 35 (HRR = 4.06, 95% CI: 3.88–4.26) compared to men with average IQ (85 to 115) and no mental health problems. For men with both low IQ and mental health problems, there was an excessive probability of receiving disability benefits before the age of 35 (HRR = 14.37, 95% CI: 13.59–15.19), as well as an increased probability for time limited benefits and death before the age of 35 compared to men with average IQ (85 to 115) and no mental health problems. Conclusion Low IQ and mental health problems are strong predictors of future disability benefits and early mortality for young men. PMID:28683088
Lie, Stein Atle; Tveito, Torill H; Reme, Silje E; Eriksen, Hege R
2017-01-01
Disability benefits and sick leave benefits represent huge costs in western countries. The pathways and prognostic factors for receiving these benefits seen in recent years are complex and manifold. We postulate that mental health and IQ, both alone and concurrently, influence subsequent employment status, disability benefits and mortality. A cohort of 918 888 Norwegian men was followed for 16 years from the age of 20 to 55. Risks of receiving health benefits, emigration, and mortality were studied. Indicators of mental health and IQ at military enrolment were used as potential risk factors. Multi-state models were used to analyze transitions between employment, sick leave, time limited benefits, disability benefits, emigration, and mortality. During follow up, there were a total of 3 908 397 transitions between employment and different health benefits, plus 12 607 deaths. Men with low IQ (below 85), without any mental health problems at military enrolment, had an increased probability of receiving disability benefits before the age of 35 (HRR = 4.06, 95% CI: 3.88-4.26) compared to men with average IQ (85 to 115) and no mental health problems. For men with both low IQ and mental health problems, there was an excessive probability of receiving disability benefits before the age of 35 (HRR = 14.37, 95% CI: 13.59-15.19), as well as an increased probability for time limited benefits and death before the age of 35 compared to men with average IQ (85 to 115) and no mental health problems. Low IQ and mental health problems are strong predictors of future disability benefits and early mortality for young men.
New method for calculations of nanostructure kinetic stability at high temperature
NASA Astrophysics Data System (ADS)
Fedorov, A. S.; Kuzubov, A. A.; Visotin, M. A.; Tomilin, F. N.
2017-10-01
A new universal method is developed for determining the kinetic stability (KS) of nanostructures at high temperatures, where nanostructures can be destroyed by the breaking of chemical bonds due to thermal vibrations of atoms. The method is based on calculating the probability that any bond in the structure stretches beyond a limit value Lmax, at which the bond breaks. Assuming that the number of vibrations is very large and that all of them are independent, an expression for the probability of a given bond elongating up to Lmax is derived using the central limit theorem, which determines the KS. It is shown that this expression leads to an effective Arrhenius formula, but unlike the standard transition state theory it allows one to find the contributions of different vibrations to a chemical bond cleavage. To determine the KS, only the frequencies and eigenvectors of the vibrational modes in the ground state of the nanostructure need to be calculated; the transition states need not be found. The suggested method was tested by calculating the KS of bonds in some alkanes, octene isomers and narrow graphene nanoribbons of different types and widths at the temperature T = 1200 K. The probability of breaking the C-C bond in the center of these hydrocarbons is found to be significantly higher than at the ends of the molecules. It is also shown that the KS of the octene isomers decreases when the double C=C bond is moved to the end of the molecule, which agrees well with the experimental data. The KS of the narrowest graphene nanoribbons of different types varies by 1-2 orders of magnitude depending on the width and structure, while all of them are several orders of magnitude less stable at high temperature than the hydrocarbons and benzene.
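The central-limit step can be illustrated numerically (hypothetical mode amplitudes, not values from this work): treating the bond elongation as a sum of independent mode contributions makes it approximately Gaussian, so the probability of exceeding Lmax is a one-sided Gaussian tail, which yields the Arrhenius-like exponential suppression mentioned above.

```python
import math

def break_probability(sigma_modes, l_max):
    """Per-vibration probability that a bond elongation exceeds l_max.
    The elongation is modeled as a sum of many independent zero-mean
    mode contributions with standard deviations sigma_modes, so by the
    central limit theorem it is approximately Normal(0, sigma^2); the
    one-sided Gaussian tail gives P(elongation > l_max)."""
    sigma = math.sqrt(sum(s * s for s in sigma_modes))
    return 0.5 * math.erfc(l_max / (sigma * math.sqrt(2.0)))

# Hypothetical per-mode amplitudes (in angstroms) for a single bond at high T
sigma_modes = [0.03, 0.02, 0.015]
p_break = break_probability(sigma_modes, l_max=0.15)
# An effective breaking rate is then (attempt frequency) * p_break; the
# Gaussian tail decays roughly as exp(-l_max^2 / (2 sigma^2)), an
# Arrhenius-like form in which stiffer modes (smaller sigma) suppress breaking
```

Because each mode contributes its own variance, this formulation directly exposes which vibrations dominate the cleavage probability, unlike a single activation-energy fit.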
Electron transfer by excited benzoquinone anions: slow rates for two-electron transitions.
Zamadar, Matibur; Cook, Andrew R; Lewandowska-Andralojc, Anna; Holroyd, Richard; Jiang, Yan; Bikalis, Jin; Miller, John R
2013-09-05
Electron transfer (ET) rate constants from the lowest excited state of the radical anion of benzoquinone, BQ(-•)*, were measured in THF solution. Rate constants for bimolecular electron transfer reactions typically reach the diffusion-controlled limit when the free-energy change, ΔG°, reaches -0.3 eV. The rate constants for ET from BQ(-•)* are one to two decades smaller at this energy and do not reach the diffusion-controlled limit until -ΔG° is 1.5-2.0 eV. The rates are probably so slow because a second electron must also undergo a transition to make use of the energy of the excited state. Similarly, ET from solvated electrons to neutral BQ to form the lowest excited state is slow, while fast ET is observed to a higher excited state, which can be populated in a transition involving only one electron. A simple picture based on perturbation theory can roughly account for the control of electron transfer by the need for transition of a second electron. The picture also explains how extra driving force (-ΔG°) can restore fast rates of electron transfer.
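The qualitative dependence of one-electron ET rates on driving force can be sketched with the classical Marcus expression (all parameters below are hypothetical; this does not reproduce the paper's perturbation-theory treatment of two-electron transitions): the rate grows as the driving force approaches the reorganization energy.

```python
import math

KB_T = 0.0257  # thermal energy in eV at room temperature

def marcus_rate(dg, lam, k0=1e13):
    """Classical Marcus rate (s^-1) for electron transfer with free-energy
    change dg (eV) and reorganization energy lam (eV); k0 is a hypothetical
    pre-exponential factor. Rate peaks where -dg equals lam."""
    return k0 * math.exp(-((dg + lam) ** 2) / (4.0 * lam * KB_T))

# One-electron picture with an assumed reorganization energy of 1 eV:
k_weak = marcus_rate(dg=-0.3, lam=1.0)    # modest driving force
k_strong = marcus_rate(dg=-1.0, lam=1.0)  # driving force matches lambda
```

In the two-electron picture argued for in the paper, the effective coupling is much weaker, shifting the onset of fast transfer to far larger driving forces than this one-electron curve would suggest.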
NASA Astrophysics Data System (ADS)
Fan, Tai-Fang
Magneto - Optical Imaging of Superconducting MgB2 Thin Films
NASA Astrophysics Data System (ADS)
Hummert, Stephanie Maria
Open Markov Processes and Reaction Networks
NASA Astrophysics Data System (ADS)
Swistock Pollard, Blake Stephen
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
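The clamped-boundary construction can be sketched numerically (a hypothetical three-state example, not from the dissertation): fix the probabilities of the boundary states, solve the master equation for the interior states, and read off the boundary probability currents that the external couplings must supply to maintain the non-equilibrium steady state.

```python
import numpy as np

# Hypothetical 3-state open Markov process: states 0 and 2 are boundary
# states with externally fixed probabilities; state 1 is interior.
# H is the infinitesimal generator (columns sum to zero): dp/dt = H p
H = np.array([[-1.0,  1.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  1.0, -1.0]])

boundary = [0, 2]
interior = [1]
p_boundary = {0: 0.6, 2: 0.1}  # externally clamped probabilities

# Steady state: (H p)_i = 0 for each interior i, boundary entries held fixed
p = np.zeros(3)
for b, val in p_boundary.items():
    p[b] = val
A = H[np.ix_(interior, interior)]
rhs = -H[np.ix_(interior, boundary)] @ np.array([p_boundary[b] for b in boundary])
p[interior] = np.linalg.solve(A, rhs)

# Probability currents into the boundary states: the inflow/outflow the
# external couplings must provide to keep the boundary probabilities fixed
currents = -(H @ p)[boundary]
```

Note that the boundary currents sum to zero, so probability is conserved overall; with symmetric rates as here, unclamping the boundary would recover a detailed-balanced equilibrium with vanishing currents.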
Boron Carbide Filled Neutron Shielding Textile Polymers
NASA Astrophysics Data System (ADS)
Manzlak, Derrick Anthony
Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations
NASA Astrophysics Data System (ADS)
Zagaris, George
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
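The clamped-boundary steady state described above can be sketched numerically. This is a minimal illustration with an assumed 4-state chain (not the paper's categorical formalism): once the boundary probabilities are fixed, the interior probabilities solve a linear system, and the resulting probability currents vanish at interior states while nonzero flow passes through the boundary.

```python
import numpy as np

# Minimal sketch of an open Markov process (hypothetical 4-state chain, not
# the paper's formalism): states 0 and 3 are 'boundary' states clamped by
# external couplings; states 1 and 2 are interior. Entry rates[i, j] is the
# transition rate j -> i, and H is the infinitesimal generator.
rates = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
])
H = rates - np.diag(rates.sum(axis=0))  # columns now sum to zero

boundary, interior = [0, 3], [1, 2]
p_boundary = np.array([0.3, 0.7])       # externally clamped probabilities

# Non-equilibrium steady state: (H p)_i = 0 for every interior i,
# with the boundary entries of p held fixed.
A = H[np.ix_(interior, interior)]
b = -H[np.ix_(interior, boundary)] @ p_boundary
p_interior = np.linalg.solve(A, b)

p = np.zeros(4)
p[boundary], p[interior] = p_boundary, p_interior
currents = H @ p   # vanishes at interior states; nonzero at the boundary
print(p_interior, currents)
```

With these symmetric rates the unclamped chain satisfies detailed balance (uniform equilibrium); clamping the two boundary states at unequal probabilities maintains a non-equilibrium steady state with a constant probability current through the interior.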
NASA Astrophysics Data System (ADS)
Schiavone, Clinton Cleveland
Processing and Conversion of Algae to Bioethanol
NASA Astrophysics Data System (ADS)
Kampfe, Sara Katherine
The Development of the CALIPSO LiDAR Simulator
NASA Astrophysics Data System (ADS)
Powell, Kathleen A.
Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Peeke, Richard Scot
NASA Astrophysics Data System (ADS)
Scully, Malcolm E.
Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials
NASA Astrophysics Data System (ADS)
Bate, Norah G.
Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding
NASA Astrophysics Data System (ADS)
Collins, Brittani May
Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers
NASA Astrophysics Data System (ADS)
Petersen, Andreas
NASA Astrophysics Data System (ADS)
Vitanov, Nikolay V.
2018-05-01
In the experimental determination of the population transfer efficiency between discrete states of a coherently driven quantum system it is often inconvenient to measure the population of the target state. Instead, after the interaction that transfers the population from the initial state to the target state, a second interaction is applied which brings the system back to the initial state, the population of which is easy to measure and normalize. If the transition probability is p in the forward process, then classical intuition suggests that the probability to return to the initial state after the backward process should be p². However, this classical expectation is generally misleading because it neglects interference effects. This paper presents a rigorous theoretical analysis based on the SU(2) and SU(3) symmetries of the propagators describing the evolution of quantum systems with two and three states, resulting in explicit analytic formulas that link the two-step probabilities to the single-step ones. Explicit examples are given with the popular techniques of rapid adiabatic passage and stimulated Raman adiabatic passage. The present results suggest that quantum-mechanical probabilities degrade faster in repeated processes than classical probabilities. Therefore, the actual single-pass efficiencies in various experiments, calculated from double-pass probabilities, might have been greater than the reported values.
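The gap between the classical guess p² and the coherent double-pass result can be seen already in a two-state toy model (an assumed rotation propagator, not the paper's SU(2)/SU(3) analysis of specific pulse schemes):

```python
import numpy as np

# Two-state toy model of the double-pass argument (assumed rotation
# propagator, not the paper's specific pulses). A single pass transfers
# population with probability p = sin^2(theta); the classical guess for the
# double-pass return probability is p*p, but the coherent result is
# |<0| U U |0>|^2 = (1 - 2p)^2 for this propagator.
def propagator(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.pi / 3
U = propagator(theta)
p_single = abs(U[1, 0]) ** 2         # forward transfer probability: 0.75
p_return = abs((U @ U)[0, 0]) ** 2   # coherent return: (1 - 2p)^2 = 0.25
print(p_single ** 2, p_return)       # classical guess 0.5625 vs quantum 0.25
```

For this pulse the coherent return probability falls below p², consistent with the abstract's observation that quantum-mechanical probabilities can degrade faster under repetition than the classical estimate suggests.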
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. 
We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments for the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited resource situations and unpredictably dynamic environments. The importance of cooperation at the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operations. The importance of considering the system constraints and interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
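The limited breadth-first search idea can be sketched in a few lines. This is a toy task/resource model, not the dissertation's radar or UAV formulations: the state machine is expanded level by level, but only a bounded number of the highest-scoring states is kept per level, capping the cost of search in large scheduling spaces.

```python
# Toy sketch of limited breadth-first search for planning (illustrative
# task/resource model, not the dissertation's radar or UAV problems):
# expand the state machine level by level, but keep only the `width`
# highest-scoring states at each level.
def limited_bfs(start, succ, score, depth, width):
    frontier, best = [start], start
    for _ in range(depth):
        candidates = [nxt for s in frontier for nxt in succ(s)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:width]              # the breadth limit
        best = max(best, frontier[0], key=score)
    return best

def succ(s):                      # state = (tasks done, resource left)
    done, res = s
    nxt = [(done, res)]           # idle: consume nothing
    if res >= 2:                  # performing a task costs 2 resource units
        nxt.append((done + 1, res - 2))
    return nxt

plan = limited_bfs((0, 5), succ, lambda s: s[0], depth=4, width=3)
print(plan)   # best reachable state: 2 tasks done, 1 resource unit left
```

With 5 resource units and a per-task cost of 2, the search finds that at most 2 tasks can be completed, even though the width limit prunes most of the expanded states.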
Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.
2009-01-01
Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.
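The role of detectability in the design above can be illustrated with a short calculation (illustrative numbers, not the paper's estimates): with per-visit detection probability p, an occupied site yields at least one detection in k visits with probability 1 − (1 − p)^k, so the raw fraction of sites with detections understates true occupancy unless enough visits are made.

```python
# Illustrative numbers (not the paper's estimates) for why detectability
# matters in occupancy monitoring: an occupied site is detected at least
# once in k visits with probability 1 - (1 - p)**k, so the naive fraction
# of sites with detections understates true occupancy psi.
def naive_occupancy(psi, p, k):
    p_star = 1 - (1 - p) ** k   # P(detected at least once | occupied)
    return psi * p_star

psi, p = 0.6, 0.3
for k in (1, 3, 6):
    print(k, round(naive_occupancy(psi, p, k), 3))  # 0.18, 0.394, 0.529
```

As k grows, the naive estimate approaches the true ψ = 0.6 from below, which is why occupancy designs trade off the number of sites against repeat visits per site.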
NASA Astrophysics Data System (ADS)
Zhou, Dan; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue
2018-03-01
In this work, we calculate the potential energy curves of 16 Λ-S and 36 Ω states of the beryllium boride (BeB) radical using the complete active space self-consistent field method, followed by the valence internally contracted multireference configuration interaction approach with Davidson correction. The 16 Λ-S states are the X²Π, A²Σ⁺, B²Π, C²Δ, D²Σ⁻, E²Σ⁺, G²Π, I²Σ⁺, a⁴Σ⁻, b⁴Π, c⁴Σ⁻, d⁴Δ, e⁴Σ⁺, g⁴Π, h⁴Π, and 2⁴Σ⁺, which are obtained from the first three dissociation channels of the BeB radical. The Ω states are obtained from the Λ-S states. Of the Λ-S states, the G²Π, I²Σ⁺, and h⁴Π states exhibit double-well curves. The G²Π, b⁴Π, and g⁴Π states are inverted when the spin-orbit coupling effect is included. The d⁴Δ, e⁴Σ⁺, and g⁴Π states, as well as the second well of the h⁴Π state, are very weakly bound. Avoided crossings exist between the G²Π and H²Π states, the A²Σ⁺ and E²Σ⁺ states, the c⁴Σ⁻ and f⁴Σ⁻ states, the g⁴Π and h⁴Π states, the I²Σ⁺ and 4²Σ⁺ states, as well as the 2⁴Σ⁺ and 3⁴Σ⁺ states. To improve the quality of the potential energy curves, core-valence correlation and scalar relativistic corrections, as well as extrapolation of the potential energies to the complete basis set limit, are included. The transition dipole moments are computed. Spectroscopic parameters and vibrational levels are determined along with Franck-Condon factors, Einstein coefficients, and radiative lifetimes of many electronic transitions. The transition probabilities are evaluated. The spin-orbit coupling effect on the spectroscopic parameters and vibrational levels is discussed. The spectroscopic parameters, vibrational levels, and transition probabilities reported in this paper can be considered very reliable and can be employed to predict these states in an appropriate spectroscopy experiment.
Recoil distance lifetime measurements in ⁸²Kr
NASA Astrophysics Data System (ADS)
Brüssermann, S.; Keinonen, J.; Hellmeister, H. P.; Lieb, K. P.
1982-12-01
The lifetimes τ = 124±12 ps, 6(+4/-2) ps, and 380±100 ps of the E_x(I^π) = 3.46 MeV (8⁺), 2.92 MeV (6⁺), and 3.04 MeV (6⁻) states, respectively, populated by the reaction ⁷⁶Ge(¹²C, α2n), were measured with the recoil distance method. In addition, upper lifetime limits were obtained for nine states. The measured lifetimes and energies indicate a band crossing at about I^π = 8⁺, probably arising from the alignment of two g₉/₂ neutrons. For the 3.04 MeV 6⁻ state, a second member of the band built on the 2.65 MeV 4⁻ state, the measured lifetime points to a two-quasiparticle configuration. The positive-parity states have been discussed in the framework of the interacting boson approximation, nuclear field theory, and the cranked shell model.
Evolution of probability densities in stochastic coupled map lattices
NASA Astrophysics Data System (ADS)
Losson, Jérôme; Mackey, Michael C.
1995-08-01
This paper describes the statistical properties of coupled map lattices subjected to the influence of stochastic perturbations. The stochastic analog of the Perron-Frobenius operator is derived for various types of noise. When the local dynamics satisfy rather mild conditions, this equation is shown to possess either stable, steady state solutions (i.e., a stable invariant density) or density limit cycles. Convergence of the phase space densities to these limit cycle solutions explains the nonstationary behavior of statistical quantifiers at equilibrium. Numerical experiments performed on various lattices of tent, logistic, and shift maps with diffusivelike interelement couplings are examined in light of these theoretical results.
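A stochastically perturbed coupled map lattice of the kind studied above can be simulated directly (illustrative parameters, tent map with diffusive coupling as one of the lattices the paper examines): each site iterates the local map, couples diffusively to its neighbours, then receives additive noise, and the histogram of site values approximates the phase-space density whose evolution the stochastic Perron-Frobenius operator describes.

```python
import numpy as np

# Sketch of a stochastically perturbed coupled map lattice (illustrative
# parameters): tent map at each site, diffusive nearest-neighbour coupling
# of strength eps, then additive Gaussian noise of amplitude sigma.
rng = np.random.default_rng(0)
tent = lambda u: 1.0 - 2.0 * np.abs(u - 0.5)

n, eps, sigma = 64, 0.1, 0.01
x = rng.random(n)
for _ in range(500):
    y = tent(x)
    y = (1 - eps) * y + eps * 0.5 * (np.roll(y, 1) + np.roll(y, -1))
    x = np.clip(y + sigma * rng.normal(size=n), 0.0, 1.0)  # keep in [0, 1]

# Histogram of site values: a crude estimate of the phase-space density.
density, edges = np.histogram(x, bins=10, range=(0.0, 1.0), density=True)
print(density)
```

Collecting such histograms over many iterates (rather than a single snapshot) is what reveals the stable invariant densities or density limit cycles the paper characterizes.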
Atiyeh, B.; Masellis, A.; Conte, F.
2010-01-01
Summary: The present review of the literature aims at analysing the challenges facing burn management in low- and middle-income countries and exploring probable modalities to optimize burn management in these countries. In Part I, the epidemiology of burn injuries and the formidable challenges to proper management due to limited resources and inaccessibility to sophisticated skills and technologies in low- and middle-income countries (LMICs) were presented. Part II discussed the actual state of burn injury management in LMICs. In Part III of this review, strategies for proper prevention and burn care in LMICs will be presented. PMID:21991190
Migration Intentions and Illicit Substance Use among Youth in Central Mexico
Marsiglia, Flavio Francisco; Kulis, Stephen; Hoffman, Steven; Calderón-Tena, Carlos Orestes; Becerra, David; Alvarez, Diana
2011-01-01
This study explored intentions to emigrate and substance use among youth (ages 14–24) from a central Mexico state with high emigration rates. Questionnaires were completed in 2007 by 702 students attending a probability sample of alternative secondary schools serving remote or poor communities. Linear and logistic regression analyses indicated that stronger intentions to emigrate predicted greater access to drugs, drug offers, and use of illicit drugs (marijuana, cocaine, inhalants), but not alcohol or cigarettes. Results are related to the healthy migrant theory and its applicability to youth with limited educational opportunities. The study’s limitations are noted. PMID:21955065
NASA Astrophysics Data System (ADS)
Hu, Peigang; Jin, Yaohui; Zhang, Chunlei; He, Hao; Hu, WeiSheng
2005-02-01
Increasing switching capacity brings considerable complexity to the optical node. Due to limitations in cost and technology, an optical node is often designed with partial switching capability and partial resource sharing. This means the node is blocking to some extent; examples include the multi-granularity switching node, which in effect uses pass-through wavelengths to reduce the dimension of the OXC, and the OXC with partially shared wavelength converters (WCs). It is conceivable that these blocking nodes will have a great effect on the problem of routing and wavelength assignment. Some previous works studied the blocking case of the partial-WC OXC using complicated wavelength assignment algorithms, but the complexity of these schemes makes them impractical in real networks. In this paper, we propose a new scheme based on node blocking state advertisement to reduce the retry or rerouting probability and improve the efficiency of routing in networks with blocking nodes. In the scheme, node blocking states are advertised to the other nodes in the network and used in subsequent route calculations to find the path with the lowest blocking probability. The performance of the scheme is evaluated using a discrete event model on the 14-node NSFNET, all nodes of which employ a partial-sharing WC OXC structure. In the simulation, a simple First-Fit wavelength assignment algorithm is used. The simulation results demonstrate that the new scheme considerably reduces the retry or rerouting probability in the routing process.
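The First-Fit wavelength assignment named in the abstract can be sketched as follows (illustrative data structures and link names, not the paper's simulator): each link tracks its busy wavelengths, a lightpath takes the lowest-indexed wavelength free on every link of its route, and the request is blocked if no such wavelength exists.

```python
# Sketch of First-Fit wavelength assignment (illustrative data structures,
# assumed link names): pick the lowest-indexed wavelength free on every
# link of the route, reserving it along the whole route; return None when
# the request is blocked (triggering retry or rerouting).
def first_fit(route, busy, n_wavelengths):
    for w in range(n_wavelengths):
        if all(w not in busy[link] for link in route):
            for link in route:
                busy[link].add(w)   # reserve along the whole route
            return w
    return None                     # blocked

busy = {"A-B": {0}, "B-C": {1}, "C-D": set()}
w = first_fit(["A-B", "B-C"], busy, n_wavelengths=4)
print(w)   # wavelength 2: the lowest index free on both links
```

The wavelength-continuity constraint (the same w on every link) is exactly what partial wavelength converters relax, which is why blocking-state information about WC availability helps route selection.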
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Bret, E-mail: jackson@chem.umass.edu; Nattino, Francesco; Kroes, Geert-Jan
The dissociative chemisorption of methane on metal surfaces is of great practical and fundamental importance. Not only is it the rate-limiting step in the steam reforming of natural gas, the reaction exhibits interesting mode-selective behavior and a strong dependence on the temperature of the metal. We present a quantum model for this reaction on Ni(100) and Ni(111) surfaces based on the reaction path Hamiltonian. The dissociative sticking probabilities computed using this model agree well with available experimental data with regard to variation with incident energy, substrate temperature, and the vibrational state of the incident molecule. We significantly expand the vibrational basis set relative to earlier studies, which allows reaction probabilities to be calculated for doubly excited initial vibrational states, though it does not lead to appreciable changes in the reaction probabilities for singly excited initial states. Sudden models used to treat the center of mass motion parallel to the surface are compared with results from ab initio molecular dynamics and found to be reasonable. Similar comparisons for molecular rotation suggest that our rotationally adiabatic model is incorrect, and that sudden behavior is closer to reality. Such a model is proposed and tested. A model for predicting mode-selective behavior is tested, with mixed results, though we find it is consistent with experimental studies of normal vs. total (kinetic) energy scaling. Models for energy transfer into lattice vibrations are also examined.
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; ...
2017-06-09
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of “KMC stiffness” (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
NASA Astrophysics Data System (ADS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-10-01
Lattice based Kinetic Monte Carlo (KMC) simulations offer a powerful simulation technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate amount of KMC steps/CPU time simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed-ranks based on event frequencies has been designed and implemented with the intent of decreasing the probability of FFP events and increasing the probability of slow process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. As shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
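The stiffness problem and the throttling idea can be illustrated with a minimal Gillespie-style KMC sketch. The rates, the throttling factor, and the code are illustrative assumptions for a two-process toy system, not the SQERTSS algorithm itself:

```python
import math
import random

def kmc_step(rates, rng):
    """One Gillespie/KMC step: choose a process with probability
    proportional to its rate, and draw the exponential time increment."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

rng = random.Random(0)

# Two processes with rates spanning six orders of magnitude: the fast
# "frivolous" process consumes almost every KMC step.
rates = [1.0e6, 1.0]          # [fast, slow/rate-limiting]
counts = [0, 0]
for _ in range(100000):
    i, _ = kmc_step(rates, rng)
    counts[i] += 1

# Throttling idea (illustrative factor): scale the fast rank down so
# slow, rate-limiting events are sampled far more often per KMC step.
throttled = [rates[0] / 1.0e5, rates[1]]
tcounts = [0, 0]
for _ in range(100000):
    i, _ = kmc_step(throttled, rng)
    tcounts[i] += 1

print(counts, tcounts)
```

In the unthrottled run the slow process is essentially never selected; after throttling it is observed roughly once per dozen steps, which is the effect the speed-rank separation is designed to produce.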
Return probability after a quench from a domain wall initial state in the spin-1/2 XXZ chain
NASA Astrophysics Data System (ADS)
Stéphan, Jean-Marie
2017-10-01
We study the return probability and its imaginary (τ) time continuation after a quench from a domain wall initial state in the XXZ spin chain, focusing mainly on the region with anisotropy |Δ| < 1. We establish exact Fredholm determinant formulas for these quantities by exploiting a connection to the six-vertex model with domain wall boundary conditions. In imaginary time, we find the scaling expected for the partition function of a statistical mechanical model of area proportional to τ², which reflects the fact that the model exhibits the limit shape phenomenon. In real time, we observe that in the region |Δ| < 1 the decay at long time t is nowhere continuous as a function of anisotropy: it is Gaussian at roots of unity and exponential otherwise. We also determine that the front moves as x_f(t) = t√(1-Δ²), by analytic continuation of known arctic curves in the six-vertex model. Exactly at |Δ| = 1, we find that the return probability decays as e^{-ζ(3/2)√(t/π)} t^{1/2} O(1). It is argued that this result provides an upper bound on spin transport. In particular, it suggests that transport should be diffusive at the isotropic point for this quench.
The Dolinar Receiver in an Information Theoretic Framework
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Birnbaum, Kevin M.; Moision, Bruce E.; Dolinar, Samuel J.
2011-01-01
Optical communication at the quantum limit requires that measurements on the optical field be maximally informative, but devising physical measurements that accomplish this objective has proven challenging. The Dolinar receiver exemplifies a rare instance of success in distinguishing between two coherent states: an adaptive local oscillator is mixed with the signal prior to photodetection, which yields an error probability that meets the Helstrom lower bound with equality. Here we apply the same local-oscillator-based architecture with an information-theoretic optimization criterion. We begin with an analysis of this receiver in a general framework for an arbitrary coherent-state modulation alphabet, and then we concentrate on two relevant examples. First, we study a binary antipodal alphabet and show that the Dolinar receiver's feedback function not only minimizes the probability of error, but also maximizes the mutual information. Next, we study ternary modulation consisting of antipodal coherent states and the vacuum state. We derive an analytic expression for a near-optimal local-oscillator feedback function and, via simulation, we determine its photon information efficiency (PIE). We provide the PIE versus dimensional information efficiency (DIE) trade-off curve and show that this modulation and receiver combination performs universally better than (generalized) on-off keying plus photon counting, although the advantage vanishes asymptotically as the number of bits per photon diverges.
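For the binary antipodal alphabet, the Helstrom bound the Dolinar receiver attains has a closed form: for equiprobable states |α⟩ and |-α⟩ with overlap |⟨α|-α⟩|² = e^{-4|α|²}, the minimum error probability is (1 - √(1 - e^{-4|α|²}))/2. A small sketch (standard formula; the parameter values are illustrative):

```python
import math

def helstrom_error_antipodal(mean_photons):
    """Minimum error probability for discriminating equiprobable antipodal
    coherent states |alpha>, |-alpha> with |alpha|^2 = mean_photons.
    Uses the squared overlap |<alpha|-alpha>|^2 = exp(-4 |alpha|^2)."""
    overlap_sq = math.exp(-4.0 * mean_photons)
    return 0.5 * (1.0 - math.sqrt(1.0 - overlap_sq))

# Error probability falls rapidly with mean photon number |alpha|^2.
for n in [0.1, 0.5, 1.0]:
    print(n, helstrom_error_antipodal(n))
```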
Transition probabilities in neutron-rich 84,86Se
NASA Astrophysics Data System (ADS)
Litzinger, J.; Blazhev, A.; Dewald, A.; Didierjean, F.; Duchêne, G.; Fransen, C.; Lozeva, R.; Sieja, K.; Verney, D.; de Angelis, G.; Bazzacco, D.; Birkenbach, B.; Bottoni, S.; Bracco, A.; Braunroth, T.; Cederwall, B.; Corradi, L.; Crespi, F. C. L.; Désesquelles, P.; Eberth, J.; Ellinger, E.; Farnea, E.; Fioretto, E.; Gernhäuser, R.; Goasduff, A.; Görgen, A.; Gottardo, A.; Grebosz, J.; Hackstein, M.; Hess, H.; Ibrahim, F.; Jolie, J.; Jungclaus, A.; Kolos, K.; Korten, W.; Leoni, S.; Lunardi, S.; Maj, A.; Menegazzo, R.; Mengoni, D.; Michelagnoli, C.; Mijatovic, T.; Million, B.; Möller, O.; Modamio, V.; Montagnoli, G.; Montanari, D.; Morales, A. I.; Napoli, D. R.; Niikura, M.; Pollarolo, G.; Pullia, A.; Quintana, B.; Recchia, F.; Reiter, P.; Rosso, D.; Sahin, E.; Salsac, M. D.; Scarlassara, F.; Söderström, P.-A.; Stefanini, A. M.; Stezowski, O.; Szilner, S.; Theisen, Ch.; Valiente Dobón, J. J.; Vandone, V.; Vogt, A.
2015-12-01
Reduced quadrupole transition probabilities for low-lying transitions in neutron-rich 84,86Se are investigated with a recoil distance Doppler shift (RDDS) experiment. The experiment was performed at the Istituto Nazionale di Fisica Nucleare (INFN) Laboratori Nazionali di Legnaro using the Cologne Plunger device for the RDDS technique and the AGATA Demonstrator array for γ-ray detection, coupled to the PRISMA magnetic spectrometer for event-by-event particle identification. In 86Se the level lifetime of the yrast 2_1^+ state and an upper limit for the lifetime of the 4_1^+ state are determined for the first time. The results for 86Se are in agreement with previously reported predictions of large-scale shell-model calculations using the 78Ni-I and 78Ni-II effective interactions. In addition, intrinsic shape parameters of the lowest yrast states in 86Se are calculated. In semimagic 84Se, level lifetimes of the yrast 4_1^+ and 6_1^+ states are determined for the first time. Large-scale shell-model calculations using the effective interactions 78Ni-II, JUN45, jj4b, and jj4pna are performed. The calculations describe B(E2; 2_1^+ → 0_1^+) and B(E2; 6_1^+ → 4_1^+) fairly well and point out problems in reproducing the experimental B(E2; 4_1^+ → 2_1^+).
NASA Astrophysics Data System (ADS)
Glasser, Ryan T.; Cable, Hugo; Dowling, Jonathan P.; de Martini, Francesco; Sciarrino, Fabio; Vitelli, Chiara
2008-07-01
The study of optical parametric amplifiers (OPAs) has been successful in describing and creating nonclassical light for use in fields such as quantum metrology and quantum lithography [Agarwal, J. Opt. Soc. Am. B 24, 2 (2007)]. In this paper we present the theory of an OPA scheme utilizing an entangled state input. The scheme involves two identical OPAs seeded with the maximally path-entangled N00N state (|2,0⟩+|0,2⟩)/√2. The stimulated amplification results in output state probability amplitudes that have a dependence on the number of photons in each mode, which differs greatly from two-mode squeezed vacuum. A large family of entangled output states is found. Specific output states allow for the heralded creation of N=4 N00N states, which may be used for quantum lithography, to write sub-Rayleigh fringe patterns, and for quantum interferometry, to achieve Heisenberg-limited phase measurement sensitivity.
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
Quantum Probability Cancellation Due to a Single-Photon State
NASA Technical Reports Server (NTRS)
Ou, Z. Y.
1996-01-01
When an N-photon state enters a lossless symmetric beamsplitter from one input port, the photon distribution for the two output ports has the form of a Bernoulli binomial, with highest probability at equal partition (N/2 at one output port and N/2 at the other). However, injection of a single-photon state at the other input port can dramatically change the photon distribution at the outputs, resulting in zero probability at equal partition. Such a strong deviation from classical particle theory stems from quantum probability amplitude cancellation. The effect persists even if the N-photon state is replaced by an arbitrary state of light. A special case is the coherent state, which corresponds to homodyne detection of a single-photon state and can lead to the measurement of the wave function of a single-photon state.
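The cancellation can be checked directly by expanding the beamsplitter transformation on Fock states. The sketch below assumes a 50/50 lossless beamsplitter with the convention a† → (c†+d†)/√2, b† → (c†-d†)/√2, and computes the output distribution for N photons in port a plus one photon in port b:

```python
import math

def output_distribution(N):
    """Output photon-number distribution of a 50/50 beamsplitter for the
    input (a†)^N b† |0,0>, computed by expanding the transformed
    polynomial in c†, d† and reading off Fock-state amplitudes."""
    # Coefficients of (c + d)^N / 2^(N/2), keyed by (power of c, power of d).
    poly = {(j, N - j): math.comb(N, j) / 2 ** (N / 2) for j in range(N + 1)}
    # Multiply by (c - d)/sqrt(2) for the extra single photon in port b.
    out = {}
    for (nc, nd), coef in poly.items():
        out[(nc + 1, nd)] = out.get((nc + 1, nd), 0.0) + coef / math.sqrt(2)
        out[(nc, nd + 1)] = out.get((nc, nd + 1), 0.0) - coef / math.sqrt(2)
    # Convert monomial coefficients to normalized Fock amplitudes:
    # (c†)^n (d†)^m |0,0> = sqrt(n! m!) |n, m>, input norm is sqrt(N! 1!).
    norm = math.sqrt(math.factorial(N))
    probs = {}
    for (nc, nd), coef in out.items():
        amp = coef * math.sqrt(math.factorial(nc) * math.factorial(nd)) / norm
        probs[(nc, nd)] = amp ** 2
    return probs

p = output_distribution(3)   # 4 photons total at the outputs
print(p[(2, 2)])             # equal partition: amplitude cancels to zero
```

For N=1 this reduces to the Hong-Ou-Mandel effect, where the (1,1) output coincidence vanishes.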
Search times and probability of detection in time-limited search
NASA Astrophysics Data System (ADS)
Wilson, David; Devitt, Nicole; Maurer, Tana
2005-05-01
When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. Developing the relationship between detection probability and time of search as a differential equation is explored. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search; a simple mathematical relationship between these two mean times is derived.
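Under the common assumption that detection times are exponentially distributed with mean τ (an illustrative model, not necessarily the paper's exact formulation), the mean time to detect conditioned on detecting within a search limit T follows from integrating t·f(t) over [0, T] and dividing by P(T) = 1 - e^{-T/τ}:

```python
import math

def mean_time_unlimited(tau):
    """Mean time to detect with unlimited search time (exponential model)."""
    return tau

def mean_time_limited(tau, T):
    """Mean time to detect, conditioned on detection within [0, T], for the
    exponential model P(t) = 1 - exp(-t/tau):
        E[t | t <= T] = tau - T * exp(-T/tau) / (1 - exp(-T/tau))."""
    F = 1.0 - math.exp(-T / tau)
    return tau - T * math.exp(-T / tau) / F

tau = 10.0   # illustrative time-unlimited mean (arbitrary units)
for T in [5.0, 20.0, 100.0]:
    print(T, mean_time_limited(tau, T))
```

The limited-search mean is always below τ and approaches it as T grows, matching the qualitative statement in the abstract.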
Small-world networks exhibit pronounced intermittent synchronization
NASA Astrophysics Data System (ADS)
Choudhary, Anshul; Mitra, Chiranjit; Kohar, Vivek; Sinha, Sudeshna; Kurths, Jürgen
2017-11-01
We report the phenomenon of temporally intermittently synchronized and desynchronized dynamics in Watts-Strogatz networks of chaotic Rössler oscillators. We consider topologies for which the master stability function (MSF) predicts stable synchronized behaviour, as the rewiring probability (p) is tuned from 0 to 1. The MSF essentially utilizes the largest non-zero Lyapunov exponent transversal to the synchronization manifold in making stability considerations, thereby ignoring the other Lyapunov exponents. However, for an N-node networked dynamical system, we observe that the difference in its Lyapunov spectra (corresponding to the N - 1 directions transversal to the synchronization manifold) is crucial and serves as an indicator of the presence of intermittently synchronized behaviour. In addition to the linear stability-based (MSF) analysis, we further provide a global stability estimate in terms of the fraction of state-space volume shared by the intermittently synchronized state, as p is varied from 0 to 1. This fraction becomes appreciably large in the small-world regime, which is surprising, since this limit has been otherwise considered optimal for synchronized dynamics. Finally, we characterize the nature of the observed intermittency and its dominance in state-space as network rewiring probability (p) is varied.
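The Watts-Strogatz construction behind these topologies can be sketched directly: start from a ring where each node connects to its k nearest neighbours on each side, then rewire each edge with probability p. This is a minimal illustrative implementation (parameters arbitrary), not the paper's simulation code:

```python
import random

def watts_strogatz(n, k, p, rng):
    """Watts-Strogatz small-world graph: ring lattice with k neighbours per
    side, each edge rewired to a random endpoint with probability p.
    Returns a set of undirected edges (a, b) with a < b."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add((i, (i + j) % n))   # ring-lattice edges
    rewired = set()
    for (a, b) in edges:
        if rng.random() < p:
            b = rng.choice([c for c in range(n) if c != a])  # rewire
        rewired.add((min(a, b), max(a, b)))
    return rewired

rng = random.Random(0)
net = watts_strogatz(100, 2, 0.1, rng)   # small-world regime: small p > 0
print(len(net))
```

At p = 0 this is a regular ring, at p = 1 it is essentially random; the intermittency reported above becomes pronounced at the intermediate, small-world values of p.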
Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet
2016-12-01
Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models to survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract the records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models using goodness-of-fit tests and to estimate parameters for the various statistical probability distributions fit to the survival data. Results: Summary statistics are reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness-of-fit test values were used to identify the best-fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers in further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer.
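The model-selection workflow (fit several candidate distributions, rank them by a goodness-of-fit statistic) can be sketched with simulated survival times. The data and candidate set here are illustrative, and scipy is used in place of EasyFit; lower Kolmogorov-Smirnov statistics indicate better fit:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical survival times (months); a stand-in for registry data.
times = rng.gamma(shape=2.0, scale=30.0, size=2000)

# Candidate families (Burr XII stands in for "Burr" here).
candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "burr12": stats.burr12,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(times)                          # maximum likelihood fit
    ks = stats.kstest(times, dist.cdf, args=params)   # KS goodness of fit
    results[name] = ks.statistic

best = min(results, key=results.get)                  # smallest KS statistic
print(best, results)
```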
Constraints on large extra dimensions from the MINOS Experiment
Adamson, P.
2016-12-16
We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's Neutrinos at the Main Injector beam exposure of 10.56 × 10^20 protons on target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than 0.45 μm at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Finally, stronger limits are obtained for nonvanishing masses.
On Volterra quadratic stochastic operators with continual state space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganikhodjaev, Nasir; Hamzah, Nur Zatul Akmar
2015-05-15
Let (X,F) be a measurable space and S(X,F) the set of all probability measures on (X,F), where X is a state space and F is a σ-algebra on X. We consider a nonlinear transformation (quadratic stochastic operator) defined by (Vλ)(A) = ∫_X∫_X P(x,y,A) dλ(x) dλ(y), where P(x,y,A) is regarded as a function of two variables x and y with fixed A ∈ F. A quadratic stochastic operator V is called regular if, for any initial measure, the strong limit lim_{n→∞} V^n(λ) exists. In this paper, we construct a family of quadratic stochastic operators defined on the segment X = [0,1] with Borel σ-algebra F on X, prove their regularity, and show that the limit measure is a Dirac measure.
Constraints on large extra dimensions from the MINOS experiment
NASA Astrophysics Data System (ADS)
Adamson, P.; Anghel, I.; Aurisano, A.; Barr, G.; Bishai, M.; Blake, A.; Bock, G. J.; Bogert, D.; Cao, S. V.; Carroll, T. J.; Castromonte, C. M.; Chen, R.; Childress, S.; Coelho, J. A. B.; Corwin, L.; Cronin-Hennessy, D.; de Jong, J. K.; de Rijck, S.; Devan, A. V.; Devenish, N. E.; Diwan, M. V.; Escobar, C. O.; Evans, J. J.; Falk, E.; Feldman, G. J.; Flanagan, W.; Frohne, M. V.; Gabrielyan, M.; Gallagher, H. R.; Germani, S.; Gomes, R. A.; Goodman, M. C.; Gouffon, P.; Graf, N.; Gran, R.; Grzelak, K.; Habig, A.; Hahn, S. R.; Hartnell, J.; Hatcher, R.; Holin, A.; Huang, J.; Hylen, J.; Irwin, G. M.; Isvan, Z.; James, C.; Jensen, D.; Kafka, T.; Kasahara, S. M. S.; Koizumi, G.; Kordosky, M.; Kreymer, A.; Lang, K.; Ling, J.; Litchfield, P. J.; Lucas, P.; Mann, W. A.; Marshak, M. L.; Mayer, N.; McGivern, C.; Medeiros, M. M.; Mehdiyev, R.; Meier, J. R.; Messier, M. D.; Miller, W. H.; Mishra, S. R.; Moed Sher, S.; Moore, C. D.; Mualem, L.; Musser, J.; Naples, D.; Nelson, J. K.; Newman, H. B.; Nichol, R. J.; Nowak, J. A.; O'Connor, J.; Orchanian, M.; Pahlka, R. B.; Paley, J.; Patterson, R. B.; Pawloski, G.; Perch, A.; Pfützner, M. M.; Phan, D. D.; Phan-Budd, S.; Plunkett, R. K.; Poonthottathil, N.; Qiu, X.; Radovic, A.; Rebel, B.; Rosenfeld, C.; Rubin, H. A.; Sail, P.; Sanchez, M. C.; Schneps, J.; Schreckenberger, A.; Schreiner, P.; Sharma, R.; Sousa, A.; Tagg, N.; Talaga, R. L.; Thomas, J.; Thomson, M. A.; Tian, X.; Timmons, A.; Todd, J.; Tognini, S. C.; Toner, R.; Torretta, D.; Tzanakos, G.; Urheim, J.; Vahle, P.; Viren, B.; Weber, A.; Webb, R. C.; White, C.; Whitehead, L.; Whitehead, L. H.; Wojcicki, S. G.; Zwaska, R.; Minos Collaboration
2016-12-01
We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's Neutrinos at the Main Injector beam exposure of 10.56 × 10^20 protons on target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than 0.45 μm at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Stronger limits are obtained for nonvanishing masses.
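For reference, the standard two-flavor survival probability against which such spectral distortions are sought is P(νμ→νμ) = 1 - sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in km, and E in GeV. A sketch with roughly MINOS-like, illustrative parameters:

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor muon-neutrino survival probability under standard
    oscillations; extra-dimension analyses look for deviations from this."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

L = 735.0     # km, approximate Fermilab-to-Soudan baseline (illustrative)
dm2 = 2.4e-3  # eV^2, approximate atmospheric splitting (illustrative)
for E in [1.0, 2.0, 3.0, 5.0]:
    print(E, survival_probability(1.0, dm2, L, E))
```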
True detection limits in an experimental linearly heteroscedastic system. Part 1
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (Y_C and X_C) and true Currie detection limits (Y_D and X_D) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were Y_C = 56.1 mV, Y_D = 125. mV, X_C = 0.132 μg/mL and X_D = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were Y_D = 158. mV and X_D = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
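The Currie construction behind these decision levels and detection limits can be sketched in the simplest, homoscedastic case: the decision level controls false positives on the blank, and the detection limit sits far enough above it to also control false negatives. The cited experiment is linearly heteroscedastic, and the noise value below is illustrative:

```python
from scipy.stats import norm

def currie_limits(sigma_blank, alpha=0.05, beta=0.05):
    """Currie decision level y_C and detection limit y_D for additive
    Gaussian noise with constant (homoscedastic) standard deviation,
    known without error and a zero-mean blank:
        y_C = z_(1-alpha) * sigma,  y_D = y_C + z_(1-beta) * sigma."""
    z_a = norm.ppf(1.0 - alpha)   # controls false positives
    z_b = norm.ppf(1.0 - beta)    # controls false negatives
    y_c = z_a * sigma_blank
    y_d = y_c + z_b * sigma_blank
    return y_c, y_d

yc, yd = currie_limits(sigma_blank=17.0)   # hypothetical blank noise, mV
print(yc, yd)
```

With alpha = beta the detection limit is exactly twice the decision level, which is the familiar homoscedastic special case; heteroscedastic noise, as in the paper, breaks this simple doubling.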
Thoma, Marie E; Gray, Ronald H; Kiwanuka, Noah; Aluma, Simon; Wang, Mei-Cheng; Sewankambo, Nelson; Wawer, Maria J
2011-02-01
Studies evaluating clinical and behavioral factors related to short-term fluctuations in vaginal microbiota are limited. We sought to describe changes in vaginal microbiota evaluated by Gram stain and assess factors associated with progression to and resolution of bacterial vaginosis (BV) at weekly intervals. A cohort of 255 sexually experienced, postmenarcheal women provided self-collected vaginal swabs to assess vaginal microbiota by Nugent score criteria at weekly visits for up to 2 years, contributing 16,757 sequential observations. Absolute differences in Nugent scores (0-10) and between-visit transition probabilities of vaginal microbiota states classified by Nugent score as normal (0-3), intermediate (4-6), and BV (7-10) were estimated. Allowing each woman to serve as her own control, weekly time-varying factors associated with progression from normal microbiota to BV and resolution of BV to normal microbiota were estimated using conditional logistic regression. The distribution of absolute differences in Nugent scores was fairly symmetric with a mode of 0 (no change) and a standard deviation of 2.64. Transition probabilities showed that weekly persistence was highest for the normal (76.1%) and BV (73.6%) states, whereas intermediate states had similar probabilities of progression (36.6%), resolution (36.0%), and persistence (27.4%). Weekly fluctuation between normal and BV states was associated with menstrual cycle phase, recency of sex, treatment for vaginal symptoms, pregnancy, and prior Nugent score. Weekly changes in vaginal microbiota were common in this population. Clinical and behavioral characteristics were associated with vaginal microbiota transitions, which may be used to inform future studies and the clinical management of BV.
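Weekly transition probabilities of this kind come from tabulating state-to-state transitions and normalizing each row. A minimal sketch with toy sequences (the data below are invented for illustration, not the study's):

```python
import numpy as np

STATES = {"normal": 0, "intermediate": 1, "bv": 2}

def transition_matrix(sequences):
    """Estimate a 3-state weekly transition matrix from per-woman state
    sequences: count consecutive-visit transitions, then row-normalize."""
    counts = np.zeros((3, 3))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[STATES[a], STATES[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy weekly observations for three hypothetical participants.
seqs = [
    ["normal", "normal", "intermediate", "bv", "bv", "normal"],
    ["bv", "bv", "bv", "intermediate", "normal", "normal"],
    ["normal", "normal", "normal", "intermediate", "intermediate", "bv"],
]
P = transition_matrix(seqs)
print(P.round(2))
```

Diagonal entries of P correspond to the persistence probabilities reported in the abstract; off-diagonal entries are progression and resolution.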
Study on Effects of the Stochastic Delay Probability for 1D CA Model of Traffic Flow
NASA Astrophysics Data System (ADS)
Xue, Yu; Chen, Yan-Hong; Kong, Ling-Jiang
Considering the effects of different factors on the stochastic delay probability, the delay probability is classified into three cases. The first case, corresponding to the braking state, has a large delay probability when the anticipated velocity is larger than the gap between successive cars. The second, corresponding to the follow-the-leader rule, has an intermediate delay probability when the anticipated velocity is equal to the gap. The third case is acceleration, which has the minimum delay probability. The fundamental diagram obtained by numerical simulation shows properties different from those of the NaSch model: there exist two distinct regions, corresponding to the coexistence state and the jamming state, respectively.
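The three-case delay rule can be dropped into a NaSch-style cellular automaton on a ring. The following is an illustrative sketch; parameter values and implementation details are assumptions, not the paper's:

```python
import random

def nasch_step(pos, vel, L, vmax, p_brake, p_follow, p_acc, rng):
    """One parallel update of a 1D CA traffic model on a ring of length L.
    The stochastic delay probability is chosen per car by comparing its
    anticipated velocity with the headway (gap) to the car ahead."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = vel[:]
    for idx, i in enumerate(order):
        j = order[(idx + 1) % n]                 # car ahead on the ring
        gap = (pos[j] - pos[i] - 1) % L
        v = min(vel[i] + 1, vmax)                # anticipated velocity
        if v > gap:
            p = p_brake                          # braking: largest delay
        elif v == gap:
            p = p_follow                         # car following: intermediate
        else:
            p = p_acc                            # free acceleration: smallest
        v = min(v, gap)                          # no collisions
        if v > 0 and rng.random() < p:
            v -= 1                               # stochastic delay
        new_vel[i] = v
    for i in range(n):
        vel[i] = new_vel[i]
        pos[i] = (pos[i] + vel[i]) % L
    return pos, vel

rng = random.Random(1)
L, n, vmax = 100, 20, 5
pos = sorted(rng.sample(range(L), n))
vel = [0] * n
for _ in range(200):
    pos, vel = nasch_step(pos, vel, L, vmax, 0.5, 0.3, 0.1, rng)
print(sum(vel) / L)   # flow at this density
```

Sweeping the density and averaging the flow would reproduce a fundamental diagram; the classic NaSch model is recovered when all three delay probabilities are equal.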
Direct detection of a single photon by humans
Tinsley, Jonathan N.; Molodtsov, Maxim I.; Prevedel, Robert; Wartmann, David; Espigulé-Pons, Jofre; Lauwers, Mattias; Vaziri, Alipasha
2016-01-01
Despite investigations for over 70 years, the absolute limits of human vision have remained unclear. Rod cells respond to individual photons, yet whether a single photon incident on the eye can be perceived by a human subject has remained a fundamental open question. Here we report that humans can detect a single photon incident on the cornea with a probability significantly above chance. This was achieved by implementing a combination of a psychophysics procedure with a quantum light source that can generate single-photon states of light. We further discover that the probability of reporting a single photon is modulated by the presence of an earlier photon, suggesting a priming process that temporarily enhances the effective gain of the visual system on the timescale of seconds. PMID:27434854
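"Significantly above chance" in a forced-choice design reduces to a one-sided binomial test against p = 0.5. A sketch with invented trial counts (not the paper's data):

```python
from scipy.stats import binomtest

# Hypothetical two-alternative forced-choice single-photon trials;
# chance performance is 50% correct. Counts are illustrative only.
n_trials, n_correct = 2420, 1274
res = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(n_correct / n_trials, res.pvalue)
```

A small p-value here rejects the hypothesis that subjects are guessing, even though the observed fraction correct sits only a few percent above chance.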
NASA Astrophysics Data System (ADS)
Auslander, Joseph Simcha
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
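The clamped-boundary steady state described above can be computed for a toy chain by solving the master equation on the interior states only, with the boundary probabilities held fixed by the external couplings. The chain, rates, and clamped values below are illustrative:

```python
import numpy as np

# Rate matrix H for a 4-state chain (H[i, j] = rate j -> i for i != j;
# diagonal entries make each column sum to zero).
k = 1.0
H = np.zeros((4, 4))
for a, b in [(0, 1), (1, 2), (2, 3)]:
    H[b, a] += k; H[a, a] -= k      # a -> b
    H[a, b] += k; H[b, b] -= k      # b -> a

boundary, interior = [0, 3], [1, 2]
p_boundary = np.array([0.9, 0.1])   # probabilities clamped at the boundary

# Non-equilibrium steady state: (H p)_interior = 0 with boundary fixed.
A = H[np.ix_(interior, interior)]
b = -H[np.ix_(interior, boundary)] @ p_boundary
p_interior = np.linalg.solve(A, b)

p = np.zeros(4)
p[boundary] = p_boundary
p[interior] = p_interior
currents = [k * p[i] - k * p[i + 1] for i in range(3)]
print(p_interior, currents)
```

Because the boundary is held away from the detailed-balance values, a uniform non-zero probability current flows through the chain, which is exactly the kind of driven steady state whose dissipation the thesis characterizes.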
NASA Astrophysics Data System (ADS)
Frey, Alexander
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
NASA Astrophysics Data System (ADS)
Mountz, Elizabeth M.
We begin by defining the concept of `open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain `boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of `inputs' and `outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a `black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. 
Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
ERIC Educational Resources Information Center
van der Linden, Wim J.
2011-01-01
It is shown how the time limit on a test can be set to control the probability of a test taker running out of time before completing it. The probability is derived from the item parameters in the lognormal model for response times. Examples of curves representing the probability of running out of time on a test with given parameters as a function…
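In van der Linden's lognormal model, the probability of running out of time follows from the item parameters; the abstract derives it analytically, but it can also be sketched by simulation. In the hedged illustration below, every number (the item intensities `betas`, discriminations `alphas`, speed `tau`, and the 1500-second limit) is invented; only the model form, ln T_i ~ Normal(beta_i - tau, 1/alpha_i**2), is taken from the lognormal response-time framework.

```python
import math
import random

# Hypothetical parameters for the lognormal response-time model:
# ln T_i ~ Normal(beta_i - tau, 1/alpha_i**2), where tau is the test
# taker's speed, beta_i the time intensity of item i, and alpha_i its
# discrimination.  All numbers below are invented for illustration.
betas  = [4.0, 4.2, 3.8, 4.1, 3.9] * 4   # a 20-item test
alphas = [2.0] * 20                       # so each sigma_i = 0.5
tau    = 0.0                              # a test taker of average speed
limit  = 1500.0                           # time limit in seconds

def p_run_out_of_time(limit, tau, n_sim=20_000, seed=1):
    """Monte Carlo estimate of P(total response time > limit)."""
    rng = random.Random(seed)
    over = 0
    for _ in range(n_sim):
        total = sum(math.exp(rng.gauss(b - tau, 1.0 / a))
                    for b, a in zip(betas, alphas))
        if total > limit:
            over += 1
    return over / n_sim

p = p_run_out_of_time(limit, tau)
print(f"P(running out of time) ~ {p:.3f}")
```

Shortening the limit or slowing the test taker (larger effective beta - tau) raises this probability, which is exactly the trade-off the time-limit-setting procedure controls.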
The Tightness of the Kesten-Stigum Reconstruction Bound of Symmetric Model with Multiple Mutations
NASA Astrophysics Data System (ADS)
Liu, Wenjian; Jammalamadaka, Sreenivasa Rao; Ning, Ning
2018-02-01
It is well known that reconstruction problems, as an interdisciplinary subject, have been studied in numerous contexts including statistical physics, information theory, and computational biology, to name a few. We consider a 2q-state symmetric model with two categories of q states each and three transition probabilities: the probability of remaining in the same state, the probability of changing states within the same category, and the probability of changing categories. We construct a nonlinear second-order dynamical system based on this model and show that the Kesten-Stigum reconstruction bound is not tight when q ≥ 4.
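The transition structure just described can be written down concretely. The sketch below (parameter values arbitrary, state ordering category-by-category assumed) builds the 2q × 2q matrix from the three probabilities, with the cross-category rate fixed by normalisation:

```python
import numpy as np

def symmetric_channel(q, p_stay, p_within):
    """Transition matrix of a 2q-state model with two categories of q
    states each: p_stay to remain in the same state, p_within to move to
    each of the q-1 other states in the same category, and the remaining
    probability spread uniformly over the q states of the other category."""
    p_cross = (1.0 - p_stay - (q - 1) * p_within) / q
    assert p_cross >= 0, "probabilities must be non-negative"
    # states 0..q-1 are category A, states q..2q-1 are category B
    M = np.full((2 * q, 2 * q), p_cross)
    for i in range(2 * q):
        cat = i // q
        for j in range(cat * q, (cat + 1) * q):
            M[i, j] = p_stay if i == j else p_within
    return M

M = symmetric_channel(q=4, p_stay=0.4, p_within=0.1)
print(M.sum(axis=1))   # every row sums to 1
```

By symmetry the matrix is doubly stochastic, so the uniform distribution is stationary, which is the natural starting point for the broadcast/reconstruction question the paper studies.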
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, including those for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth System Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models that are not available from (inter)comparisons restricted to complex models alone. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models.
In weather forecasting this role is filled by the climatological distribution, which can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
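The idea of benchmarking probability forecasts against a fixed climatological distribution can be made quantitative with a Brier skill score. The sketch below is entirely synthetic (base rate, noise levels, and sample size are all invented) and shows only the mechanics: a score of 0 means no improvement over climatology.

```python
import random

def brier(probs, outcomes):
    """Mean Brier score of probability forecasts for binary events."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Entirely synthetic setup: a binary event with climatological base rate
# 0.3; the "model" sees the true event probability with some noise.
rng = random.Random(0)
true_p  = [min(max(rng.gauss(0.3, 0.15), 0.0), 1.0) for _ in range(2000)]
outcome = [1 if rng.random() < p else 0 for p in true_p]
model   = [min(max(p + rng.gauss(0.0, 0.05), 0.0), 1.0) for p in true_p]
clim    = [0.3] * len(true_p)

# Brier skill score: > 0 means the forecast beats the fixed climatology.
bss = 1.0 - brier(model, outcome) / brier(clim, outcome)
print(f"Brier skill score vs climatology: {bss:.3f}")
```

A simulation model whose score fell to 0 or below in such a comparison would, on that scale and lead time, be adding no information beyond the empirical benchmark.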
Spence, Emma Suzuki; Beck, Jeffrey L; Gregory, Andrew J
2017-01-01
Greater sage-grouse (Centrocercus urophasianus) occupy sagebrush (Artemisia spp.) habitats in 11 western states and 2 Canadian provinces. In September 2015, the U.S. Fish and Wildlife Service announced that the listing status for sage-grouse had changed from warranted but precluded to not warranted. The primary reason cited for this change of status was that the enactment of new regulatory mechanisms was sufficient to protect sage-grouse populations. One such plan is the 2008 Wyoming Sage Grouse Executive Order (SGEO), enacted by Governor Freudenthal. The SGEO identifies "Core Areas" that are to be protected by keeping them relatively free from further energy development and limiting other forms of anthropogenic disturbance near active sage-grouse leks. Using the Wyoming Game and Fish Department's sage-grouse lek count database and the Wyoming Oil and Gas Conservation Commission database of oil and gas well locations, we investigated the effectiveness of Wyoming's Core Areas, specifically: 1) how well Core Areas encompass the distribution of sage-grouse in Wyoming, 2) whether Core Area leks have a reduced probability of lek collapse, and 3) what, if any, edge effects the intensification of oil and gas development adjacent to Core Areas may be having on Core Area populations. Core Areas contained 77% of male sage-grouse attending leks and 64% of active leks. Using Bayesian binomial probability analysis, we found an average 10.9% probability of lek collapse inside Core Areas and an average 20.4% probability of lek collapse outside Core Areas. Using linear regression, we found that development density outside Core Areas was related to the probability of lek collapse inside Core Areas. Specifically, the probability of collapse among leks >4.83 km inside Core Area boundaries was significantly related to well density within 1.61 km (1 mi) and 4.83 km (3 mi) outside of Core Area boundaries.
Collectively, these data suggest that the Wyoming Core Area Strategy has benefited sage-grouse and sage-grouse habitat conservation; however, additional guidelines limiting development densities adjacent to Core Areas may be necessary to effectively protect Core Area populations.
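The Bayesian binomial analysis mentioned above has a simple closed form under a conjugate Beta prior: the posterior mean of the collapse probability is (k + a)/(n + a + b). The lek counts below are invented solely to land near the quoted percentages; the study's actual counts are not given in the abstract.

```python
def posterior_collapse(collapsed, total, a=1.0, b=1.0):
    """Posterior mean of the collapse probability under a Beta(a, b)
    prior (Beta(1, 1) is uniform) after observing `collapsed` collapsed
    leks out of `total`."""
    return (collapsed + a) / (total + a + b)

# Invented lek counts chosen only to land near the probabilities quoted
# in the abstract; the study's actual counts are not reported there.
inside  = posterior_collapse(10, 99)    # ~0.109, cf. 10.9% inside
outside = posterior_collapse(20, 100)   # ~0.206, cf. 20.4% outside
print(inside, outside)
```

The uniform prior keeps the estimate away from 0 or 1 when counts are small, which matters for sparsely monitored leks.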
Black holes are almost optimal quantum cloners
NASA Astrophysics Data System (ADS)
Adami, Christoph; Ver Steeg, Greg
2015-06-01
If black holes were able to clone quantum states, a number of paradoxes in black hole physics would disappear. However, the linearity of quantum mechanics forbids exact cloning of quantum states. Here we show that black holes indeed clone incoming quantum states with a fidelity that depends on the black hole’s absorption coefficient, without violating the no-cloning theorem because the clones are only approximate. Perfectly reflecting black holes are optimal universal ‘quantum cloning machines’ and operate on the principle of stimulated emission, exactly as their quantum optical counterparts. In the limit of perfect absorption, the fidelity of clones is only equal to what can be obtained via quantum state estimation methods. But for any absorption probability less than one, the cloning fidelity is nearly optimal as long as ω/T ≥ 10, a common parameter for modest-sized black holes.
NASA Astrophysics Data System (ADS)
Zhang, Jicai; Shi, Deheng; Xing, Wei; Sun, Jinfeng; Zhu, Zunlue
2017-11-01
This paper investigates the spectroscopic parameters and transition probabilities of 25 low-lying states arising from the first five dissociation channels of the AlC⁺ cation. The potential energy curves are calculated with the complete active space self-consistent field method, followed by the valence internally contracted multireference configuration interaction approach with the Davidson correction. Of these 25 states, only the 3⁵Σ⁻ state is repulsive; the c¹Σ⁺, f¹Π, and 1⁵Π states have double wells; the first well of the c¹Σ⁺ state and the second well of the 1⁵Π state are very weakly bound; the first well of the c¹Σ⁺ state has no vibrational levels; the 2⁵Π state and the double well of the f¹Π state support only a few vibrational states; and the B³Σ⁻, E³Σ⁺, D³Π, 1⁵Σ⁺, 2⁵Σ⁻, and 1⁵Π states are inverted when the spin-orbit coupling effect is included. Avoided crossings exist between the B³Σ⁻ and 3³Σ⁻ states, the c¹Σ⁺ and d¹Σ⁺ states, the f¹Π and 3¹Π states, the 1⁵Π and 2⁵Π states, as well as the 2⁵Π and 3⁵Π states. Core-valence correlation and scalar relativistic corrections are considered, and the potential energies are extrapolated to the complete basis set limit. The spectroscopic parameters and vibrational levels are determined for all the Λ-S and Ω bound states. The transition dipole moments are calculated, and Franck-Condon factors for a large number of electronic transitions are evaluated. On the whole, the spin-orbit coupling effect on the spectroscopic parameters and vibrational levels is small except for very few states. The results determined in this paper could provide powerful guidelines for observing these states in spectroscopy experiments.
Deribe, Kebede; Cano, Jorge; Newport, Melanie J.; Golding, Nick; Pullan, Rachel L.; Sime, Heven; Gebretsadik, Abeba; Assefa, Ashenafi; Kebede, Amha; Hailu, Asrat; Rebollo, Maria P.; Shafi, Oumer; Bockarie, Moses J.; Aseffa, Abraham; Hay, Simon I.; Reithinger, Richard; Enquselassie, Fikre; Davey, Gail; Brooker, Simon J.
2015-01-01
Background Ethiopia is assumed to have the highest burden of podoconiosis globally, but the geographical distribution and environmental limits and correlates are yet to be fully investigated. In this paper we use data from a nationwide survey to address these issues. Methodology Our analyses are based on data arising from the integrated mapping of podoconiosis and lymphatic filariasis (LF) conducted in 2013, supplemented by data from an earlier mapping of LF in western Ethiopia in 2008–2010. The integrated mapping used woreda (district) health offices’ reports of podoconiosis and LF to guide selection of survey sites. A suite of environmental and climatic data and boosted regression tree (BRT) modelling was used to investigate environmental limits and predict the probability of podoconiosis occurrence. Principal Findings Data were available for 141,238 individuals from 1,442 communities in 775 districts from all nine regional states and two city administrations of Ethiopia. In 41.9% of surveyed districts no cases of podoconiosis were identified, with all districts in Affar, Dire Dawa, Somali and Gambella regional states lacking the disease. The disease was most common, with lymphoedema positivity rate exceeding 5%, in the central highlands of Ethiopia, in Amhara, Oromia and Southern Nations, Nationalities and Peoples regional states. BRT modelling indicated that the probability of podoconiosis occurrence increased with increasing altitude, precipitation and silt fraction of soil and decreased with population density and clay content. Based on the BRT model, we estimate that in 2010, 34.9 (95% confidence interval [CI]: 20.2–51.7) million people (i.e. 43.8%; 95% CI: 25.3–64.8% of Ethiopia’s national population) lived in areas environmentally suitable for the occurrence of podoconiosis. Conclusions Podoconiosis is more widespread in Ethiopia than previously estimated, but occurs in distinct geographical regions that are tied to identifiable environmental factors. 
The resultant maps can be used to guide programme planning and implementation and estimate disease burden in Ethiopia. This work provides a framework with which the geographical limits of podoconiosis could be delineated at a continental scale. PMID:26222887
Ma, Chihua; Luciani, Timothy; Terebus, Anna; Liang, Jie; Marai, G Elisabeta
2017-02-15
Visualizing the complex probability landscape of stochastic gene regulatory networks can further biologists' understanding of phenotypic behavior associated with specific genes. We present PRODIGEN (PRObability DIstribution of GEne Networks), a web-based visual analysis tool for the systematic exploration of probability distributions over simulation time and state space in such networks. PRODIGEN was designed in collaboration with bioinformaticians who research stochastic gene networks. The analysis tool combines in a novel way existing, expanded, and new visual encodings to capture the time-varying characteristics of probability distributions: spaghetti plots over one dimensional projection, heatmaps of distributions over 2D projections, enhanced with overlaid time curves to display temporal changes, and novel individual glyphs of state information corresponding to particular peaks. We demonstrate the effectiveness of the tool through two case studies on the computed probabilistic landscape of a gene regulatory network and of a toggle-switch network. Domain expert feedback indicates that our visual approach can help biologists: 1) visualize probabilities of stable states, 2) explore the temporal probability distributions, and 3) discover small peaks in the probability landscape that have potential relation to specific diseases.
Brown, Cameron S.; Zhang, Hongbin; Kucukboyaci, Vefa; ...
2016-09-07
VERA-CS (Virtual Environment for Reactor Applications, Core Simulator) is a coupled neutron transport and thermal-hydraulics subchannel code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). VERA-CS was used to simulate a typical pressurized water reactor (PWR) full-core response with 17×17 fuel assemblies for a main steam line break (MSLB) accident scenario with the most reactive rod cluster control assembly stuck out of the core. The accident scenario was initiated at hot zero power (HZP) at the end of the first fuel cycle, with return-to-power state points determined by a system analysis code; the most limiting state point was chosen for core analysis. The best estimate plus uncertainty (BEPU) analysis method was applied using Wilks' nonparametric statistical approach. In this way, 59 full-core simulations were performed to provide the minimum departure from nucleate boiling ratio (MDNBR) at the 95/95 (95% probability with 95% confidence level) tolerance limit. The results show that this typical PWR core remains within MDNBR safety limits for the MSLB accident.
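The 59-run count quoted above is not arbitrary: it is the smallest sample size satisfying the first-order, one-sided Wilks criterion 1 - 0.95^n ≥ 0.95. A short sketch:

```python
def wilks_n(prob=0.95, conf=0.95):
    """Smallest number of code runs n such that the most extreme of the
    n sampled outputs bounds the `prob` quantile of the output
    distribution with confidence `conf` (first-order, one-sided Wilks
    criterion: smallest n with 1 - prob**n >= conf)."""
    n = 1
    while 1.0 - prob ** n < conf:
        n += 1
    return n

print(wilks_n(0.95, 0.95))  # 59, matching the number of simulations above
print(wilks_n(0.95, 0.99))  # 90 runs would be needed at 95/99
```

For the MDNBR application the "most extreme" output is the minimum over the runs, which then serves as the 95/95 lower tolerance bound.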
Estimation of the limit of detection using information theory measures.
Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago
2014-01-31
Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
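The binary-channel analogy can be made concrete: treat analyte presence as the channel input and the detector's binary decision as the output, and compute the mutual information between them. The sketch below uses arbitrary error rates and is not the paper's estimator; it only illustrates the central claim that the extractable information depends on the prior probability of a blank vs. an analyte sample.

```python
from math import log2

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def channel_information(prior, p_false_pos, p_false_neg):
    """Mutual information I(X;Y) between analyte presence X and the
    detector decision Y, viewing the analytical system as a binary
    channel with the given false-positive/false-negative rates."""
    p1 = prior                                   # P(analyte present)
    p_detect = p1 * (1 - p_false_neg) + (1 - p1) * p_false_pos
    # I(X;Y) = H(Y) - H(Y|X)
    return h(p_detect) - (p1 * h(p_false_neg) + (1 - p1) * h(p_false_pos))

# Same error rates, different priors -> different information extracted.
print(channel_information(0.5, 0.05, 0.05))
print(channel_information(0.1, 0.05, 0.05))
```

Two systems with identical error rates (hence identical classical LODs) deliver different amounts of information when the prior changes, which is the misconception the abstract targets.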
Exact transition probabilities in a 6-state Landau–Zener system with path interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinitsyn, Nikolai A.
2015-04-23
In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.
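For context, the incoherent building block the abstract contrasts against is the two-state Landau-Zener formula. The sketch below states it for one common convention, H(t) = [[v·t, Δ], [Δ, −v·t]] with ħ = 1, where the probability of remaining in the initial diabatic state is exp(−πΔ²/v), and checks it against direct numerical integration. It does not reproduce the paper's 6-state model or its interference effects.

```python
import math

def lz_formula(delta, v):
    """Landau-Zener probability of staying in the initial diabatic state
    for H(t) = [[v*t, delta], [delta, -v*t]] (hbar = 1)."""
    return math.exp(-math.pi * delta ** 2 / v)

def lz_numeric(delta, v, T=30.0, dt=0.002):
    """Integrate i d(psi)/dt = H(t) psi with RK4 from t = -T to t = +T,
    starting in diabatic state |1>; return the final population of |1>."""
    def deriv(t, psi):
        a, b = psi
        return (-1j * (v * t * a + delta * b),
                -1j * (delta * a - v * t * b))
    psi = (1.0 + 0j, 0.0 + 0j)
    t = -T
    for _ in range(int(round(2 * T / dt))):
        k1 = deriv(t, psi)
        k2 = deriv(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k1)))
        k3 = deriv(t + dt / 2, tuple(p + dt / 2 * k for p, k in zip(psi, k2)))
        k4 = deriv(t + dt, tuple(p + dt * k for p, k in zip(psi, k3)))
        psi = tuple(p + dt / 6 * (a + 2 * b + 2 * c + d)
                    for p, a, b, c, d in zip(psi, k1, k2, k3, k4))
        t += dt
    return abs(psi[0]) ** 2

p_formula = lz_formula(0.5, 1.0)
p_num = lz_numeric(0.5, 1.0)
print(p_formula, p_num)
```

In multistate models with path interference, as the abstract emphasizes, transition probabilities are generally not products of such pairwise factors.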
Annular wave packets at Dirac points in graphene and their probability-density oscillation.
Luo, Ji; Valencia, Daniel; Lu, Junqiang
2011-12-14
Wave packets in graphene whose central wave vector is at the Dirac points are investigated by numerical calculations. Starting from an initial Gaussian function, these wave packets form into annular peaks that propagate in all directions like ripple-rings on a water surface. At the beginning, electronic probability alternates between the central peak and the ripple-rings, and a transient oscillation occurs at the center. As time increases, the ripple-rings propagate at the fixed Fermi speed, and their widths remain unchanged. The axial symmetry of the energy dispersion leads to the circular symmetry of the wave packets. The fixed speed and widths, however, are attributed to the linearity of the energy dispersion. Interference between states that belong, respectively, to the two branches of the energy dispersion leads to multiple ripple-rings and the probability-density oscillation. In a magnetic field, annular wave packets become confined and no longer propagate to infinity. If the initial Gaussian width differs greatly from the magnetic length, expanding and shrinking ripple-rings form and disappear alternately within a limited spread, and the wave packet frequently resumes the Gaussian form. The probability thus oscillates persistently between the central peak and the ripple-rings. If the initial Gaussian width is close to the magnetic length, the wave packet retains the Gaussian form, and its height and width oscillate with a period determined by the first Landau energy. The wave-packet evolution is determined jointly by the initial state and the magnetic field, through the electronic structure of graphene in a magnetic field. © 2011 American Institute of Physics.
The cooling of confined ions driven by laser beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyna, L.G.; Sobehart, J.R.
1993-07-01
We analyze the dynamics of confined ions driven by a quantized radiation field. The ions can absorb photons from an incident laser beam and relax back to the ground state by either induced or spontaneous emission. Here we assume that the absorption of photons is immediately followed by spontaneous emission, resulting in single-level ions perturbed by the exchange of momentum with the radiation field. The probability distribution of the ions is calculated using singular expansions in the low-noise asymptotic limit. The present calculations reproduce the quantum results in the limit of heavy particles in static traps, and the classical results for ions in radio-frequency confining wells.
Wavelength-dependence of double optical gating for attosecond pulse generation
NASA Astrophysics Data System (ADS)
Tian, Jia; Li, Min; Yu, Ji-Zhou; Deng, Yong-Kai; Liu, Yun-Quan
2014-10-01
Both polarization gating (PG) and double optical gating (DOG) are productive methods for generating single attosecond (as) pulses. In this paper, considering the ground-state depletion effect, we investigate the wavelength dependence of the DOG method in order to optimize the generation of single attosecond pulses for future applications. By calculating the ionization probabilities on the leading edge of the pulse at different driving laser wavelengths, we obtain the upper limit on the driving laser pulse duration for the DOG setup. We find that this upper limit increases with laser wavelength. We further describe the technical method of choosing and calculating the thicknesses of the optical components for the DOG setup.
A triple point in 3-level systems
NASA Astrophysics Data System (ADS)
Nahmad-Achar, E.; Cordero, S.; López-Peña, R.; Castaños, O.
2014-11-01
The energy spectrum of a 3-level atomic system in the Ξ-configuration is studied. This configuration presents a triple point, independent of the number of atoms, which persists in the thermodynamic limit. This means that in a vicinity of this point any quantum fluctuation will drastically change the composition of the ground state of the system. We study the expectation values of the atomic population of each level, the number of photons, and the probability distribution of photons at the triple point.
Reliability of Long-Term Wave Conditions Predicted with Data Sets of Short Duration
1985-03-01
the validity and reliability of predicted probable wave heights obtained from data of limited duration. BACKGROUND: The basic steps listed by... interest to perform the analysis outlined in steps 2 to 5, the prediction would only be reliable for up to a 3-year return period. For a 5-year data set... for long-term hindcast data. The data retrieval and analysis program known as the Sea State Engineering Analysis System (SEAS) makes handling of the
Dirac points, spinons and spin liquid in twisted bilayer graphene
NASA Astrophysics Data System (ADS)
Irkhin, V. Yu.; Skryabin, Yu. N.
2018-05-01
Twisted bilayer graphene is an excellent example of a highly correlated system, demonstrating a nearly flat electron band, the Mott transition, and probably a spin-liquid state. Beyond the one-electron picture, an analysis of the Dirac points is performed in terms of the spinon Fermi surface in the limit of strong correlations. The application of gauge field theory to describe the deconfined spin-liquid phase is treated. Topological quantum transitions, including those from a small to a large Fermi surface in the presence of van Hove singularities, are discussed.
2015-09-10
Point Barrow or Nuvuk, Alaska is the northernmost point of all territory of the United States. It also marks the limit between the Chukchi Sea to the west, and the Beaufort Sea to the east. Archaeological evidence indicates that Point Barrow was occupied about 500 CE, probably as hunting camps for whales. The image covers an area of 32 by 38 km, was acquired July 29, 2015, and is located at 71.6 degrees north, 156.45 degrees west. http://photojournal.jpl.nasa.gov/catalog/PIA19775
Achieving minimum-error discrimination of an arbitrary set of laser-light pulses
NASA Astrophysics Data System (ADS)
da Silva, Marcus P.; Guha, Saikat; Dutton, Zachary
2013-05-01
Laser light is widely used for communication and sensing applications, so the optimal discrimination of coherent states—the quantum states of light emitted by an ideal laser—has immense practical importance. Due to fundamental limits imposed by quantum mechanics, such discrimination has a finite minimum probability of error. While concrete optical circuits for the optimal discrimination between two coherent states are well known, the generalization to larger sets of coherent states has been challenging. In this paper, we show how to achieve optimal discrimination of any set of coherent states using a resource-efficient quantum computer. Our construction leverages a recent result on discriminating multicopy quantum hypotheses [Blume-Kohout, Croke, and Zwolak, arXiv:1201.6625]. As illustrative examples, we analyze the performance of discriminating a ternary alphabet and show how the quantum circuit of a receiver designed to discriminate a binary alphabet can be reused in discriminating multimode hypotheses. Finally, we show that our result can be used to achieve the quantum limit on the rate of classical information transmission on a lossy optical channel, which is known to exceed the Shannon rate of all conventional optical receivers.
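The "finite minimum probability of error" for two coherent states is given by the Helstrom bound together with the coherent-state overlap |⟨α|β⟩|² = exp(−|α−β|²). The function name and sample amplitudes below are ours, but both formulas are standard:

```python
import math

def helstrom_error(alpha, beta, p0=0.5):
    """Helstrom (minimum-error) bound for discriminating coherent states
    |alpha> and |beta>, with prior p0 for |alpha>.  Uses the coherent
    state overlap |<alpha|beta>|^2 = exp(-|alpha - beta|^2)."""
    overlap_sq = math.exp(-abs(alpha - beta) ** 2)
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p0 * (1.0 - p0) * overlap_sq))

# A BPSK alphabet {+a, -a}: the error probability falls off rapidly
# with the amplitude a, but never reaches zero at finite energy.
for a in (0.25, 0.5, 1.0):
    print(a, helstrom_error(a, -a))
```

The paper's contribution is reaching this bound (and its multi-state generalization) with an explicit, resource-efficient receiver, which no conventional (e.g. homodyne or Kennedy) receiver achieves for all amplitudes.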
Baseline models of trace elements in major aquifers of the United States
Lee, L.; Helsel, D.
2005-01-01
Trace-element concentrations in baseline samples from a survey of aquifers used as potable-water supplies in the United States are summarized using methods appropriate for data with multiple detection limits. The resulting statistical distribution models are used to develop summary statistics and estimate probabilities of exceeding water-quality standards. The models are based on data from the major aquifer studies of the USGS National Water Quality Assessment (NAWQA) Program. These data were produced with a nationally consistent sampling and analytical framework specifically designed to determine the quality of the most important potable groundwater resources during the years 1991-2001. The analytical data for all elements surveyed contain values that were below several detection limits; such datasets are referred to as multiply-censored data. To address this issue, a robust semi-parametric statistical method called regression on order statistics (ROS) is employed. Utilizing the 90th-95th percentile as an arbitrary range for the upper limits of expected baseline concentrations, the models show that baseline concentrations of dissolved Ba and Zn are below 500 µg/L. For the same percentile range, dissolved As, Cu and Mo concentrations are below 10 µg/L, and dissolved Ag, Be, Cd, Co, Cr, Ni, Pb, Sb and Se are below 1-5 µg/L. These models are also used to determine the probabilities that potable ground waters exceed drinking-water standards. For dissolved Ba, Cr, Cu, Pb, Ni, Mo and Se, the likelihood of exceeding the US Environmental Protection Agency standards at the well-head is less than 1-1.5%. A notable exception is As, which has approximately a 7% chance of exceeding the maximum contaminant level (10 µg/L) at the well head.
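Regression on order statistics can be sketched for the simplest case of a single detection limit (Helsel-style ROS generalizes this to multiple limits). All concentrations below are invented; the plotting-position constants (Blom-type, (r − 0.375)/(n + 0.25)) and the "censored values occupy the lowest ranks" assumption are the only structural ingredients.

```python
import math
from statistics import NormalDist

def ros_summary(detects, n_censored, det_limit):
    """Simplified regression-on-order-statistics for one detection
    limit: fit a lognormal to the detected values via normal plotting
    positions, impute the censored observations from the fitted line,
    and return the mean of the combined data plus the imputed values."""
    nd = NormalDist()
    n = len(detects) + n_censored
    xs = sorted(detects)
    # plotting positions: censored values occupy the lowest ranks
    qs = [nd.inv_cdf((n_censored + i + 1 - 0.375) / (n + 0.25))
          for i in range(len(xs))]
    ys = [math.log(x) for x in xs]
    # least-squares line  log(conc) = b0 + b1 * z
    qbar, ybar = sum(qs) / len(qs), sum(ys) / len(ys)
    b1 = (sum((q - qbar) * (y - ybar) for q, y in zip(qs, ys))
          / sum((q - qbar) ** 2 for q in qs))
    b0 = ybar - b1 * qbar
    imputed = [math.exp(b0 + b1 * nd.inv_cdf((i + 1 - 0.375) / (n + 0.25)))
               for i in range(n_censored)]
    return sum(imputed + xs) / n, imputed

# Hypothetical concentrations (ug/L): 6 detects, 4 results below DL = 1.
mean, imputed = ros_summary([1.2, 1.5, 2.0, 3.1, 4.4, 9.0], 4, 1.0)
print(round(mean, 2), [round(v, 2) for v in imputed])
```

Unlike substituting DL/2, the imputed values follow the fitted distribution's shape, which is why ROS-based summary statistics behave well with heavily censored data.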
Invariance of separability probability over reduced states in 4 × 4 bipartite systems
NASA Astrophysics Data System (ADS)
Lovas, Attila; Andai, Attila
2017-07-01
The geometric separability probability of composite quantum systems has been extensively studied in recent decades. One of the simplest but strikingly difficult problems is to compute the separability probability of qubit-qubit and rebit-rebit quantum states with respect to the Hilbert-Schmidt measure. Many numerical simulations confirm the conjectured probabilities P_{rebit-rebit} = 29/64 and P_{qubit-qubit} = 8/33. We provide a rigorous proof for the separability probability in the real case and give explicit integral formulas for the complex and quaternionic cases. Milz and Strunz studied the separability probability with respect to given subsystems. They conjectured that the separability probability of qubit-qubit (and qubit-qutrit) states of the block form (D_1, C; C*, D_2) depends only on D = D_1 + D_2 (on the single-qubit subsystems); moreover, it depends only on the Bloch radius r of D and is constant in r. Using the Peres-Horodecki criterion for separability, we give a mathematical proof for the 29/64 probability and present an integral formula for the complex case which hopefully will help to prove the 8/33 probability, too. We prove Milz and Strunz's conjecture for rebit-rebit and qubit-qubit states. The case when the state space is endowed with the volume form generated by the operator monotone function f(x) = √x is also studied in detail. We show that even in this setting Milz and Strunz's conjecture holds true, and we give an integral formula for the separability probability according to this measure.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
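For the unconstrained case (state-space topology only, no path-dependent constraints), the maximum-path-entropy transition probabilities have a well-known closed form in terms of the leading eigenpair of the adjacency matrix, and the stationary distribution differs from the degree-proportional one, illustrating the competition between path multiplicity and topology noted above. A sketch, assuming an undirected, connected graph:

```python
import numpy as np

def merw(adj: np.ndarray):
    """Transition matrix and stationary distribution of the maximum-entropy
    random walk on an undirected graph with adjacency matrix adj:
    P[i, j] = A[i, j] * psi[j] / (lam * psi[i]), pi[i] ~ psi[i]**2,
    where (lam, psi) is the leading eigenpair of A."""
    vals, vecs = np.linalg.eigh(adj)
    lam = vals[-1]
    psi = np.abs(vecs[:, -1])          # Perron vector, made positive
    P = adj * psi[None, :] / (lam * psi[:, None])
    pi = psi ** 2 / np.sum(psi ** 2)
    return P, pi

# 4-site path graph: MERW concentrates probability away from the ends
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P, pi = merw(A)
```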
Unambiguous discrimination between linearly dependent equidistant states with multiple copies
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hai; Ren, Gang
2018-07-01
Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this class of linearly dependent equidistant states, our result shows that if C is even, then adding a further copy fails to increase the maximum discrimination probability.
Weak limit of the three-state quantum walk on the line
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2014-07-01
We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization, to the extreme that it vanishes completely on one side of the initial position. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.
Ludington, S.D.; Cox, D.P.; McCammon, R.B.
1996-01-01
For this assessment, the conterminous United States was divided into 12 regions: Adirondack Mountains, Central and Southern Rocky Mountains, Colorado Plateau, East Central, Great Basin, Great Plains, Lake Superior, Northern Appalachians, Northern Rocky Mountains, Pacific Coast, Southern Appalachians, and Southern Basin and Range. The assessment, which was conducted by regional assessment teams of scientists from the USGS, was based on the concepts of permissive tracts and deposit models. Permissive tracts are discrete areas of the United States for which estimates of numbers of undiscovered deposits of a particular deposit type were made. A permissive tract is defined by its geographic boundaries such that the probability of deposits of the delineated type occurring outside the boundary is negligible. Deposit models, which are based on a compilation of worldwide literature and on observation, are sets of data in a convenient form that describe a group of deposits with similar characteristics and that contain information on the common geologic attributes of the deposits and the environments in which they are found. Within each region, the assessment teams delineated permissive tracts for those deposit models that were judged to be appropriate and, when the amount of information warranted, estimated the number of undiscovered deposits. A total of 46 deposit models were used to assess 236 separate permissive tracts. Estimates of undiscovered deposits were limited to a depth of 1 km beneath the surface of the Earth. The estimates of the number of undiscovered deposits of gold, silver, copper, lead, and zinc were expressed in the form of a probability distribution. Commonly, the number of undiscovered deposits was estimated at the 90th, 50th, and 10th percentiles.
A Monte Carlo simulation computer program was used to combine the probability distribution of the number of undiscovered deposits with the grade and tonnage data sets associated with each deposit model to obtain the probability distribution for undiscovered metal.
Villarreal, Miguel L.; van Riper, Charles; Petrakis, Roy E.
2013-01-01
Riparian vegetation provides important wildlife habitat in the Southwestern United States, but limited distributions and spatial complexity often lead to inaccurate representation in maps used to guide conservation. We test the use of data conflation and aggregation on multiple vegetation/land-cover maps to improve the accuracy of habitat models for the threatened western yellow-billed cuckoo (Coccyzus americanus occidentalis). We used species observations (n = 479) from a state-wide survey to develop habitat models from 1) three vegetation/land-cover maps produced at different geographic scales ranging from state to national, and 2) new aggregate maps defined by the spatial agreement of cover types, which were defined as high (agreement = all data sets), moderate (agreement ≥ 2), and low (no agreement required). Model accuracies, predicted habitat locations, and total area of predicted habitat varied considerably, illustrating the effects of input data quality on habitat predictions and resulting potential impacts on conservation planning. Habitat models based on aggregated and conflated data were more accurate and had higher model sensitivity than those based on the original vegetation/land-cover maps, but this accuracy came at the cost of reduced geographic extent of predicted habitat. Using the highest performing models, we assessed cuckoo habitat preference and distribution in Arizona and found that major watersheds containing high-probability habitat are fragmented by a wide swath of low-probability habitat. Focus on riparian restoration in these areas could provide more breeding habitat for the threatened cuckoo, offset potential future habitat losses in adjacent watersheds, and increase regional connectivity for other threatened vertebrates that also use riparian corridors.
Measurements and mathematical formalism of quantum mechanics
NASA Astrophysics Data System (ADS)
Slavnov, D. A.
2007-03-01
A scheme for constructing quantum mechanics is given that does not have Hilbert space and linear operators as its basic elements. Instead, a version of the algebraic approach is considered. Elements of a noncommutative algebra (observables) and functionals on this algebra (elementary states) associated with results of single measurements are used as primary components of the scheme. On the one hand, the scheme makes it possible to use the formalism of standard (Kolmogorov) probability theory; on the other hand, it reproduces the mathematical formalism of standard quantum mechanics and allows the limits of its applicability to be studied. A short outline is given of the necessary material from the theory of algebras and probability theory. It is described how the mathematical scheme of the paper agrees with the theory of quantum measurements and avoids quantum paradoxes.
NASA Astrophysics Data System (ADS)
Akibue, Seiseki; Kato, Go
2018-04-01
For distinguishing quantum states sampled from a fixed ensemble, the gap in bipartite and single-party distinguishability can be interpreted as a nonlocality of the ensemble. In this paper, we consider bipartite state discrimination in a composite system consisting of N subsystems, where each subsystem is shared between two parties and the state of each subsystem is randomly sampled from a particular ensemble comprising the Bell states. We show that the success probability of perfectly identifying the state converges to 1 as N → ∞ if the entropy of the probability distribution associated with the ensemble is less than 1, even if the success probability is less than 1 for any finite N. In other words, the nonlocality of the N-fold ensemble asymptotically disappears if the probability distribution associated with each ensemble is concentrated. Furthermore, we show that the disappearance of the nonlocality can be regarded as a remarkable counterexample of a fundamental open question in theoretical computer science, called a parallel repetition conjecture of interactive games with two classically communicating players. Measurements for the discrimination task include a projective measurement of one party represented by stabilizer states, which enable the other party to perfectly distinguish states that are sampled with high probability.
Probability function of breaking-limited surface elevation. [wind generated waves of ocean
NASA Technical Reports Server (NTRS)
Tung, C. C.; Huang, N. E.; Yuan, Y.; Long, S. R.
1989-01-01
The effect of wave breaking on the probability function of surface elevation is examined. The surface elevation limited by wave breaking, ζ_b(t), is first related to the original wave elevation ζ(t) and its second derivative. An approximate, second-order, nonlinear, non-Gaussian model for ζ(t) of arbitrary but moderate bandwidth is presented, and an expression for the probability density function of ζ_b(t) is derived. The results show clearly that the effect of wave breaking on the probability density function of surface elevation is to introduce a secondary hump on the positive side of the probability density function, a phenomenon also observed in wind wave tank experiments.
NASA Astrophysics Data System (ADS)
Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu
2015-06-01
We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's place with certain probability. Compared with the previously existing schemes, the success probability in the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several attractive discussions on the properties of the presented schemes, including the success probability and reducibility, are made. Remarkably, the proposed schemes can be faithfully achieved with unit total success probability when the employed channels are reduced to maximally entangled ones.
A nonstationary Markov transition model for computing the relative risk of dementia before death
Yu, Lei; Griffith, William S.; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.
2010-01-01
This paper investigates the long-term behavior of the k-step transition probability matrix for a nonstationary discrete time Markov chain in the context of modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. The authors derive formulas for the following absorption statistics: (1) the relative risk of absorption between competing absorbing states, and (2) the mean and variance of the number of visits among the transient states before absorption. Since absorption is not guaranteed, sufficient conditions are discussed to ensure that the substochastic matrix associated with transitions among transient states converges to zero in the limit. Results are illustrated with an application to the Nun Study, a cohort of 678 participants, 75 to 107 years of age, followed longitudinally with up to ten cognitive assessments over a fifteen-year period. PMID:20087848
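For a stationary chain (a simplification of the nonstationary model above), the absorption statistics the authors describe follow from the standard fundamental-matrix formulas. A sketch with hypothetical transition rates, not the Nun Study estimates:

```python
import numpy as np

def absorption_stats(Q: np.ndarray, R: np.ndarray):
    """Standard absorbing-chain statistics for a chain with
    transient-to-transient block Q and transient-to-absorbing block R.
    N = (I - Q)^-1 gives expected visits to transient states;
    B = N @ R gives absorption probabilities into each absorbing state."""
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)
    B = N @ R
    return N, B

# Toy cognition model: transient states (intact, MCI), absorbing (dementia, death)
Q = np.array([[0.80, 0.10],
              [0.00, 0.70]])
R = np.array([[0.02, 0.08],
              [0.10, 0.20]])
N, B = absorption_stats(Q, R)
```

The ratio of the two columns of B is the relative risk of absorption between the competing absorbing states, the quantity of interest in the paper.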
Dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses.
Zhang, Xiang; Loda, Justin B; Woodall, William H
2017-07-20
For a patient who has survived a surgery, there could be several levels of recovery. Thus, it is reasonable to consider more than two outcomes when monitoring surgical outcome quality. The risk-adjusted cumulative sum (CUSUM) chart based on multiresponses has been developed for monitoring a surgical process with three or more outcomes. However, there is a significant effect of varying risk distributions on the in-control performance of the chart when constant control limits are applied. To overcome this disadvantage, we apply dynamic probability control limits to the risk-adjusted CUSUM charts for multiresponses. The simulation results demonstrate that the in-control performance of the charts with dynamic probability control limits can be controlled for different patient populations because these limits are determined for each specific sequence of patients. Thus, the use of dynamic probability control limits for risk-adjusted CUSUM charts based on multiresponses allows each chart to be designed for the corresponding patient sequence of a surgeon or a hospital and therefore does not require estimating or monitoring the patients' risk distribution. Copyright © 2017 John Wiley & Sons, Ltd.
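The underlying risk-adjusted CUSUM statistic, shown here in the simpler binary-outcome form of Steiner et al. rather than the multiresponse version, and with the dynamic probability control limit omitted, can be sketched as:

```python
import math

def cusum_scores(p_list, y_list, R=2.0):
    """Risk-adjusted Bernoulli CUSUM path (Steiner-style scores).
    p_list: predicted adverse-outcome probabilities per patient;
    y_list: observed outcomes (0/1);
    R: odds ratio under the out-of-control hypothesis."""
    s, path = 0.0, []
    for p, y in zip(p_list, y_list):
        denom = 1.0 - p + R * p                      # odds-ratio normalizer
        w = math.log(R / denom) if y else math.log(1.0 / denom)
        s = max(0.0, s + w)                          # CUSUM recursion
        path.append(s)
    return path

path = cusum_scores([0.1, 0.3, 0.2], [0, 1, 1], R=2.0)
```

The paper's contribution replaces the constant signaling threshold on this path with a limit recomputed for each patient so the conditional false-alarm probability stays fixed.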
Striped bass stocks and concentrations of polychlorinated biphenyls
Fabrizio, Mary C.; Sloan, Ronald J.; O'Brien, John F.
1991-01-01
Harvest restrictions on striped bass Morone saxatilis fisheries in Atlantic coastal states were relaxed in 1990, but consistent, coastwide regulations of the harvest have been difficult to implement because of the mixed-stock nature of the fisheries and the recognized contamination of Hudson River fish by polychlorinated biphenyls (PCBs). We examined PCB concentrations and stock of origin of coastal striped bass to better understand the effects of these two factors on the composition of the harvest. The probability of observing differences in PCB concentration among fish from the Hudson River stock and the 'southern' group (Chesapeake Bay and Roanoke River stocks combined) was investigated with the logit model (a linear model for analysis of categorical data). Although total PCB concentrations were highly variable among fish from the two groups, striped bass classified as Hudson River stock had a significantly greater probability of having PCB concentrations equal to or greater than 2.00 mg/kg than did fish belonging to the southern group for all age- and size-classes examined. There was a significantly greater probability of observing total PCB concentrations equal to or exceeding 2.00 mg/kg in fish that were 5, 6, and 7 or more years old, and this probability increased linearly with age. We observed similar results when we examined the effect of size on total PCB concentration. The minimum-size limit estimated to permit escapement of fish to sustain stock production is 610 mm total length. Unless total PCB concentrations decrease in striped bass, it is likely that many harvestable fish will have concentrations that exceed the tolerance limit set by the U.S. Food and Drug Administration.
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects due to nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ performance in the mean-square-error and probability-of-error senses as figures of merit. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that probability of error improvement is more significant for modulation schemes, constant amplitude and multilevel, which require higher signal to noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity.
Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Yadav, Vijayshree; Narayanaswami, Pushpa
2014-12-01
Complementary and alternative medicine (CAM) use in individuals with multiple sclerosis (MS) is common, but its use has been limited by a lack of evidence-based guidance. In March 2014, the American Academy of Neurology published the most comprehensive literature review and evidence-based practice guidelines for CAM use in MS. The guideline author panel reviewed and classified articles according to the American Academy of Neurology therapeutic scheme, and recommendations were linked to the evidence strength. Level A recommendations were found for oral cannabis extract effectiveness in the short term for spasticity-related symptoms and pain and ineffectiveness of ginkgo biloba for cognitive function improvement in MS. Key level B recommendations included: Oral cannabis extract or a synthetic cannabis constituent, tetrahydrocannabinol (THC) is probably ineffective for objective spasticity improvement in the short term; Nabiximols oromucosal cannabinoid spray is probably effective for spasticity symptoms, pain, and urinary frequency, but probably ineffective for objective spasticity outcomes and bladder incontinence; Magnetic therapy is probably effective for fatigue reduction in MS; A low-fat diet with fish oil supplementation is probably ineffective for MS-related relapses, disability, fatigue, magnetic resonance imaging lesions, and quality of life. Several Level C recommendations were made. These included possible effectiveness of gingko biloba for fatigue; possible effectiveness of reflexology for MS-related paresthesias; possible ineffectiveness of the Cari Loder regimen for MS-related disability, symptoms, depression, and fatigue; and bee sting therapy for MS relapses, disability, fatigue, magnetic resonance imaging outcomes, and health-related quality of life. 
Despite the availability of studies evaluating the effects of oral cannabis in MS, the use of these formulations in the United States may be limited due to a lack of standardized, commercial, US Food and Drug Administration-regulated preparations. Additionally, significant concern about prominent central nervous system-related adverse effects with cannabis was emphasized in the review. Copyright © 2014 Elsevier HS Journals, Inc. All rights reserved.
Wu, Wei; Wang, Jin
2013-09-28
We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. 
In the small-fluctuation limit, the intrinsic potential field, defined as the small-fluctuation limit of the potential field for spatially dependent non-equilibrium systems and closely related to the steady-state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers a general alternative to other field-theoretic techniques for studying the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
The utility of Bayesian predictive probabilities for interim monitoring of clinical trials
Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn
2014-01-01
Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
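For a binary endpoint with a conjugate beta prior, the predictive probability described above can be computed exactly by summing the posterior-predictive (beta-binomial) distribution of future successes over the outcomes that would yield final success. The prior, thresholds, and counts below are illustrative, not taken from any particular trial:

```python
from scipy import stats

def predictive_probability(y, n, n_max, a=1.0, b=1.0, p0=0.3, gamma=0.95):
    """Bayesian predictive probability of trial success.
    y successes observed in n patients so far; n_max planned total;
    Beta(a, b) prior on the response rate.
    'Success' at the end of the trial: posterior P(rate > p0) > gamma."""
    m = n_max - n                                   # patients remaining
    pred = stats.betabinom(m, a + y, b + n - y)     # posterior predictive
    pp = 0.0
    for x in range(m + 1):                          # possible future successes
        post_tail = stats.beta.sf(p0, a + y + x, b + n_max - y - x)
        if post_tail > gamma:                       # final analysis succeeds
            pp += pred.pmf(x)
    return pp

pp = predictive_probability(y=14, n=30, n_max=50)
```

A low value of pp at an interim look is the futility-stopping trigger the abstract refers to; unlike conditional power, the computation averages over the posterior rather than fixing a point alternative.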
Measurement of the electron shake-off in the β-decay of laser-trapped 6He atoms
NASA Astrophysics Data System (ADS)
Hong, Ran; Bagdasarova, Yelena; Garcia, Alejandro; Storm, Derek; Sternberg, Matthew; Swanson, Erik; Wauters, Frederik; Zumwalt, David; Bailey, Kevin; Leredde, Arnaud; Mueller, Peter; O'Connor, Thomas; Flechard, Xavier; Liennard, Etienne; Knecht, Andreas; Naviliat-Cuncic, Oscar
2016-03-01
Electron shake-off is an important process in many high-precision nuclear β-decay measurements searching for physics beyond the standard model. 6He, being one of the lightest β-decaying isotopes, has a simple atomic structure and is thus well suited for testing calculations of shake-off effects. Shake-off probabilities from the 2^3S_1 and 2^3P_2 initial states of laser-trapped 6He matter for the ongoing beta-neutrino correlation study at the University of Washington. These probabilities are obtained by analyzing the time-of-flight distribution of the recoil ions detected in coincidence with the beta particles. A β-ν correlation-independent analysis approach was developed. The measured upper limit on the double shake-off probability is 2 × 10^-4 at 90% confidence level. This result is ~100 times lower than the most recent calculation by Schulhoff and Drake. This work is supported by DOE, Office of Nuclear Physics, under Contract Nos. DE-AC02-06CH11357 and DE-FG02-97ER41020.
Local Structure Theory for Cellular Automata.
NASA Astrophysics Data System (ADS)
Gutowitz, Howard Andrew
The local structure theory (LST) is a generalization of the mean field theory for cellular automata (CA). The mean field theory makes the assumption that iterative application of the rule does not introduce correlations between the states of cells in different positions. This assumption allows the derivation of a simple formula for the limit density of each possible state of a cell. The most striking feature of CA is that they may well generate correlations between the states of cells as they evolve. The LST takes the generation of correlation explicitly into account. It thus has the potential to describe statistical characteristics in detail. The basic assumption of the LST is that though correlation may be generated by CA evolution, this correlation decays with distance. This assumption allows the derivation of formulas for the estimation of the probability of large blocks of states in terms of smaller blocks of states. Given the probabilities of blocks of size n, probabilities may be assigned to blocks of arbitrary size such that these probability assignments satisfy the Kolmogorov consistency conditions and hence may be used to define a measure on the set of all possible (infinite) configurations. Measures defined in this way are called finite (or n-) block measures. A function called the scramble operator of order n maps a measure to an approximating n-block measure. The action of a CA on configurations induces an action on measures on the set of all configurations. The scramble operator is combined with the CA map on measure to form the local structure operator (LSO). The LSO of order n maps the set of n-block measures into itself. It is hypothesised that the LSO applied to n-block measures approximates the rule itself on general measures, and does so increasingly well as n increases. The fundamental advantage of the LSO is that its action is explicitly computable from a finite system of rational recursion equations. 
Empirical study of a number of CA rules demonstrates the potential of the LST to describe the statistical features of CA. The behavior of some simple rules is derived analytically. Other rules have more complex, chaotic behavior. Even for these rules, the LST yields an accurate portrait of both small and large time statistics.
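The order-1 case of the LST is the mean field theory itself: block probabilities factor completely, so the density of 1s obeys a one-dimensional rational map. A sketch for elementary (radius-1, binary) CA, using the standard Wolfram rule-number encoding (the function names are ours):

```python
def mean_field_map(rule: int, p: float) -> float:
    """Order-1 (mean-field) approximation for an elementary CA: probability
    that a cell is 1 after one step, assuming neighboring cells are
    uncorrelated, each 1 with probability p."""
    p_next = 0.0
    for n in range(8):                 # neighborhood (a, b, c) encoded in bits of n
        if (rule >> n) & 1:            # rule maps this neighborhood to 1
            ones = bin(n).count("1")
            p_next += p ** ones * (1 - p) ** (3 - ones)
    return p_next

def fixed_point(rule: int, p0: float = 0.5, iters: int = 1000) -> float:
    """Iterate the mean-field map to its limit density."""
    p = p0
    for _ in range(iters):
        p = mean_field_map(rule, p)
    return p

# Rule 254 (OR of the three cells): p' = 1 - (1 - p)^3, so any
# nonzero initial density is driven to density 1
d = fixed_point(254, p0=0.1)
```

Higher-order LST operators generalize this by tracking probabilities of length-n blocks instead of single cells, with the scramble operator supplying the factorization of larger blocks.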
Systematic study of α preformation probability of nuclear isomeric and ground states
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong; Wu, Xi-Jun; Zheng, Bo; Xiang, Dong; Guo, Ping; Li, Xiao-Hua
2017-01-01
In this paper, based on the two-potential approach combined with an isospin-dependent nuclear potential, we systematically compare the α preformation probabilities of odd-A nuclei between nuclear isomeric states and ground states. The results indicate that during the process of α particle preformation, the low-lying nuclear isomeric states are similar to ground states. Meanwhile, in the framework of single-nucleon energy level structure, we find that for nuclei with nucleon number below the magic numbers, the α preformation probabilities of high-spin states seem to be larger than those of low-spin states. For nuclei with nucleon number above the magic numbers, the α preformation probabilities of isomeric states are larger than those of ground states. Supported by National Natural Science Foundation of China (11205083), Construct Program of Key Discipline in Hunan Province, Research Foundation of Education Bureau of Hunan Province, China (15A159), Natural Science Foundation of Hunan Province, China (2015JJ3103, 2015JJ2123), Innovation Group of Nuclear and Particle Physics in USC, Hunan Provincial Innovation Foundation for Postgraduate (CX2015B398)
Hsiu Chen, Chen; Wen, Fur-Hsing; Hou, Ming-Mo; Hsieh, Chia-Hsun; Chou, Wen-Chi; Chen, Jen-Shi; Chang, Wen-Cheng; Tang, Siew Tzuh
2017-09-01
Developing accurate prognostic awareness, a cornerstone of preference-based end-of-life (EOL) care decision-making, is a dynamic process involving more prognostic-awareness states than knowing or not knowing. Understanding the transition probabilities and time spent in each prognostic-awareness state can help clinicians identify trigger points for facilitating transitions toward accurate prognostic awareness. We examined transition probabilities in distinct prognostic-awareness states between consecutive time points in 247 cancer patients' last 6 months and estimated the time spent in each state. Prognostic awareness was categorized into four states: (a) unknown and not wanting to know, state 1; (b) unknown but wanting to know, state 2; (c) inaccurate awareness, state 3; and (d) accurate awareness, state 4. Transitional probabilities were examined by multistate Markov modeling. Initially, 59.5% of patients had accurate prognostic awareness, whereas the probabilities of being in states 1-3 were 8.1%, 17.4%, and 15.0%, respectively. Patients' prognostic awareness generally remained unchanged (probabilities of remaining in the same state: 45.5%-92.9%). If prognostic awareness changed, it tended to shift toward higher prognostic-awareness states (probabilities of shifting to state 4 were 23.2%-36.6% for patients initially in states 1-3, followed by probabilities of shifting to state 3 for those in states 1 and 2 [9.8%-10.1%]). Patients were estimated to spend 1.29, 0.42, 0.68, and 3.61 months in states 1-4, respectively, in their last 6 months. Terminally ill cancer patients' prognostic awareness generally remained unchanged, with a tendency to become more aware of their prognosis. Health care professionals should facilitate patients' transitions toward accurate prognostic awareness in a timely manner to promote preference-based EOL decisions. 
Terminally ill Taiwanese cancer patients' prognostic awareness generally remained stable, with a tendency toward developing higher states of awareness. Health care professionals should appropriately assess patients' readiness for prognostic information and respect patients' reluctance to confront their poor prognosis if they are not ready to know, but sensitively coach them to cultivate their accurate prognostic awareness, provide desired and understandable prognostic information for those who are ready to know, and give direct and honest prognostic information to clarify any misunderstandings for those with inaccurate awareness, thus ensuring that they develop accurate and realistic prognostic knowledge in time to make end-of-life care decisions. © AlphaMed Press 2017.
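The four-state multistate model described above can be sketched as a discrete-time Markov chain. In the sketch below the monthly transition matrix is hypothetical (the abstract reports only ranges, e.g., 45.5%-92.9% for remaining in the same state), while the initial distribution uses the reported 8.1%/17.4%/15.0%/59.5%; expected occupancy times follow by summing the state distribution over the horizon.

```python
import numpy as np

# Illustrative 4-state monthly transition matrix. Values are hypothetical,
# chosen only to mimic the reported pattern: states tend to persist, and
# shifts favor state 4 (accurate awareness).
P = np.array([
    [0.60, 0.10, 0.05, 0.25],  # state 1: unknown, not wanting to know
    [0.05, 0.50, 0.10, 0.35],  # state 2: unknown, wanting to know
    [0.02, 0.03, 0.65, 0.30],  # state 3: inaccurate awareness
    [0.01, 0.02, 0.04, 0.93],  # state 4: accurate awareness
])

# Initial distribution reported in the abstract: 8.1%, 17.4%, 15.0%, 59.5%.
pi0 = np.array([0.081, 0.174, 0.150, 0.595])

# Expected months spent in each state over a 6-month horizon is the sum of
# the state-occupancy distributions at months 0..5.
occupancy = sum(pi0 @ np.linalg.matrix_power(P, t) for t in range(6))
print(occupancy)  # months in states 1-4; entries sum to 6
```

With any row-stochastic matrix the occupancy entries always sum to the horizon length, so only the split across states depends on the assumed rates.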
Ferreira, Renata G; Jerusalinsky, Leandro; Silva, Thiago César Farias; de Souza Fialho, Marcos; de Araújo Roque, Alan; Fernandes, Adalberto; Arruda, Fátima
2009-10-01
Cebus flavius is a recently rediscovered species and a candidate for the 25 most endangered primate species list. It was hypothesized that the distribution of C. flavius was limited to the Atlantic Forest, while the occurrence of C. libidinosus in the Rio Grande do Norte (RN) Caatinga was inferred, given its occurrence in neighboring states. As a result of a survey in ten areas of the RN Caatinga, this paper reports on four Cebus populations, including the first occurrence of C. flavius in the Caatinga, and an expansion of the northwestern limits of distribution for the species. This C. flavius population may be a rare example of a process of geographic distribution retraction, and is probably the most endangered population of this species. New areas of occurrence of C. libidinosus are also described. Tool use sites were observed in association with reports of the presence of both capuchin species.
NASA Astrophysics Data System (ADS)
Zhang, Guannan; Del-Castillo-Negrete, Diego
2017-10-01
Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than the brute-force MC methods, which can significantly reduce the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as the direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.
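For context, the brute-force baseline that the backward scheme accelerates can be sketched in a few lines: a forward Euler-Maruyama Monte Carlo estimate of the probability that an SDE path exceeds a threshold before a terminal time. This is not the authors' backward BSDE algorithm, and the drift, diffusion, and threshold below are hypothetical toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D SDE dX = a(X) dt + b(X) dW; "runaway" means X exceeds x_run
# before time T. Direct (forward) MC baseline, not the backward scheme.
a = lambda x: 0.5 * x            # hypothetical drift
b = lambda x: 1.0                # hypothetical diffusion
x0, x_run, T, dt, n_paths = 0.0, 3.0, 1.0, 1e-3, 20_000

x = np.full(n_paths, x0)
escaped = np.zeros(n_paths, dtype=bool)
for _ in range(int(T / dt)):
    x += a(x) * dt + b(x) * np.sqrt(dt) * rng.standard_normal(n_paths)
    escaped |= x > x_run         # record first-passage above the threshold

p_runaway = escaped.mean()       # MC estimate of the escape probability
print(p_runaway)
```

Because the estimator variance is p(1-p)/n_paths, rare-event probabilities like this one need very many forward paths, which is exactly the cost the backward formulation is designed to avoid.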
Seasonal fecundity is not related to geographic position ...
Aim: Sixty-five years ago, Theodosius Dobzhansky suggested that individuals of a species face greater challenges from abiotic stressors at high latitudes and from biotic stressors at their low-latitude range edges. This idea has been expanded to the hypothesis that species’ ranges are limited by abiotic and biotic stressors at high and low latitudes, respectively. Support has been found in many systems, but this hypothesis has almost never been tested with demographic data. We present an analysis of fecundity across the breeding range of a species as a test of this hypothesis. Location: 575 km of tidal marshes in the northeastern United States. Methods: We monitored saltmarsh sparrow (Ammodramus caudacutus) nests at twenty-three sites from Maine to New Jersey, USA. With data from 840 nests, we calculated daily nest failure probabilities due to competing abiotic (flooding) and biotic (depredation) stressors. Results: We observed that abiotic stress (nest flooding probability) was greater than biotic stress (nest depredation probability) at the high-latitude range edge of saltmarsh sparrows, consistent with Dobzhansky’s hypothesis. Similarly, biotic stress decreased with increasing latitude throughout the range, whereas abiotic stress was not predicted by latitude alone. Instead, nest flooding probability was best predicted by date, maximum high tide, and extremity of rare flooding events. Main conclusions: Our results provide support for Dobzhansky’s hypothesis across the breeding range of a species.
Hu, Wen
2017-06-01
In November 2010 and October 2013, Utah increased speed limits on sections of rural interstates from 75 to 80mph. Effects on vehicle speeds and speed variance were examined. Speeds were measured in May 2010 and May 2014 within the new 80mph zones, and at a nearby spillover site and at more distant control sites where speed limits remained 75mph. Log-linear regression models estimated percentage changes in speed variance and mean speeds for passenger vehicles and large trucks associated with the speed limit increase. Logistic regression models estimated effects on the probability of passenger vehicles exceeding 80, 85, or 90mph and large trucks exceeding 80mph. Within the 80mph zones and at the spillover location in 2014, mean passenger vehicle speeds were significantly higher (4.1% and 3.5%, respectively), as were the probabilities that passenger vehicles exceeded 80mph (122.3% and 88.5%, respectively), than would have been expected without the speed limit increase. Probabilities that passenger vehicles exceeded 85 and 90mph were non-significantly higher than expected within the 80mph zones. For large trucks, the mean speed and probability of exceeding 80mph were higher than expected within the 80mph zones. Only the increase in mean speed was significant. Raising the speed limit was associated with non-significant increases in speed variance. The study adds to the wealth of evidence that increasing speed limits leads to higher travel speeds and an increased probability of exceeding the new speed limit. Results moreover contradict the claim that increasing speed limits reduces speed variance. Although the estimated increases in mean vehicle speeds may appear modest, prior research suggests such increases would be associated with substantial increases in fatal or injury crashes. Lawmakers weighing further speed limit increases should take this into account. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.
Ćwikliński, Piotr; Studziński, Michał; Horodecki, Michał; Oppenheim, Jonathan
2015-11-20
The second law of thermodynamics places a limitation on which states a system can evolve into. For systems in contact with a heat bath, it can be combined with the law of energy conservation, and it then says that a system can only evolve into another state if the free energy goes down. Recently, it has been shown that there are actually many second laws, and that it is only for large macroscopic systems that they all become equivalent to the ordinary one. These additional second laws also hold for quantum systems and are, in fact, often more relevant in this regime. They place a restriction on how the probabilities of energy levels can evolve. Here, we consider additional restrictions on how the coherences between energy levels can evolve. Coherences can only go down, and we provide a set of restrictions which limit the extent to which they can be maintained. We find that coherences over energy levels must decay at rates that are suitably adapted to the transition rates between energy levels. We show that the limitations are matched in the case of a single qubit, in which case we obtain the full characterization of state-to-state transformations. For higher dimensions, we conjecture that more severe constraints exist. We also introduce a new class of thermodynamical operations which allow for greater manipulation of coherences and study its power with respect to a class of operations known as thermal operations.
Counterfactual quantum computation through quantum interrogation
NASA Astrophysics Data System (ADS)
Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.
2006-02-01
The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.
Who cares? A comparison of informal and formal care provision in Spain, England and the USA.
Solé-Auró, Aïda; Crimmins, Eileen M
2014-03-01
This paper investigates the prevalence of incapacity in performing daily activities and the associations between household composition and availability of family members and receipt of care among older adults with functioning problems in Spain, England and the United States of America (USA). We examine how living arrangements, marital status, child availability, limitations in functioning ability, age and gender affect the probability of receiving formal care and informal care from household members and from others in three countries with different family structures, living arrangements and policies supporting care of the incapacitated. Data sources include the 2006 Survey of Health, Ageing and Retirement in Europe for Spain, the third wave of the English Longitudinal Study of Ageing (2006), and the eighth wave of the USA Health and Retirement Study (2006). Logistic and multinomial logistic regressions are used to estimate the probability of receiving care and the sources of care among persons age 50 and older. The percentage of people with functional limitations receiving care is higher in Spain. More care comes from outside the household in the USA and England than in Spain. The use of formal care among the incapacitated is lowest in the USA and highest in Spain.
Exact one-sided confidence limits for the difference between two correlated proportions.
Lloyd, Chris J; Moldovan, Max V
2007-08-15
We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs, using the well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.
Empirical estimation of the conditional probability of natech events within the United States.
Santella, Nicholas; Steinberg, Laura J; Aguirra, Gloria Andrea
2011-06-01
Natural disasters are the cause of a sizeable number of hazmat releases, referred to as "natechs." An enhanced understanding of natech probability, allowing for predictions of natech occurrence, is an important step in determining how industry and government should mitigate natech risk. This study quantifies the conditional probabilities of natechs at TRI/RMP and SIC 1311 facilities given the occurrence of hurricanes, earthquakes, tornadoes, and floods. During hurricanes, a higher probability of releases was observed due to storm surge (7.3 releases per 100 TRI/RMP facilities exposed vs. 6.2 for SIC 1311) compared to category 1-2 hurricane winds (5.6 TRI, 2.6 SIC 1311). Logistic regression confirms the statistical significance of the greater propensity for releases at RMP/TRI facilities, and during some hurricanes, when controlling for hazard zone. The probability of natechs at TRI/RMP facilities during earthquakes increased from 0.1 releases per 100 facilities at MMI V to 21.4 at MMI IX. The probability of a natech at TRI/RMP facilities within 25 miles of a tornado was small (∼0.025 per 100 facilities), reflecting the limited area directly affected by tornadoes. Areas inundated during flood events had a probability of 1.1 releases per 100 facilities but demonstrated widely varying natech occurrence during individual events, indicating that factors not quantified in this study such as flood depth and speed are important for predicting flood natechs. These results can inform natech risk analysis, aid government agencies responsible for planning response and remediation after natural disasters, and should be useful in raising awareness of natech risk within industry. © 2011 Society for Risk Analysis.
Exact solutions for the selection-mutation equilibrium in the Crow-Kimura evolutionary model.
Semenov, Yuri S; Novozhilov, Artem S
2015-08-01
We reformulate the eigenvalue problem for the selection-mutation equilibrium distribution in the case of a haploid asexually reproduced population in the form of an equation for an unknown probability generating function of this distribution. The special form of this equation in the infinite sequence limit allows us to obtain analytically the steady state distributions for a number of particular cases of the fitness landscape. The general approach is illustrated by examples; theoretical findings are compared with numerical calculations. Copyright © 2015. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Iles, P. A.; Mclennan, H.
1975-01-01
Limitations in both space and terrestrial markets for solar cells are described. Based on knowledge of the state of the art, six cell options are discussed; as a result of this discussion, the three most promising options (involving high-, medium-, and low-efficiency cells, respectively) were selected and analyzed for their probable costs. The results showed that all three cell options gave promise of costs below $10 per watt in the near future. Before further cost reductions can be achieved, more R and D work is required; suggestions for suitable programs are given.
Experimental preparation and verification of quantum money
NASA Astrophysics Data System (ADS)
Guan, Jian-Yu; Arrazola, Juan Miguel; Amiri, Ryan; Zhang, Weijun; Li, Hao; You, Lixing; Wang, Zhen; Zhang, Qiang; Pan, Jian-Wei
2018-03-01
A quantum money scheme enables a trusted bank to provide untrusted users with verifiable quantum banknotes that cannot be forged. In this work, we report a proof-of-principle experimental demonstration of the preparation and verification of unforgeable quantum banknotes. We employ a security analysis that takes experimental imperfections fully into account. We measure a total of 3.6 × 10^6 states in one verification round, limiting the forging probability to 10^-7 based on the security analysis. Our results demonstrate the feasibility of preparing and verifying quantum banknotes using currently available experimental techniques.
Contextual Advantage for State Discrimination
NASA Astrophysics Data System (ADS)
Schmid, David; Spekkens, Robert W.
2018-02-01
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
Emergence of low noise frustrated states in E/I balanced neural networks.
Recio, I; Torres, J J
2016-12-01
We study emerging phenomena in binary neural networks where, with a probability c, synaptic intensities are chosen according to a Hebbian prescription, and with probability (1-c) there is an extra random contribution to synaptic weights. This new term, randomly taken from a Gaussian bimodal distribution, balances the synaptic population in the network so that one has an 80%-20% E/I population ratio, mimicking the balance observed in the mammalian cortex. For some regions of the relevant parameters, our system depicts standard memory (at low temperature) and non-memory attractors (at high temperature). However, as c decreases and the level of the underlying noise also decreases below a certain temperature T_t, a kind of memory-frustrated state, which resembles spin-glass behavior, sharply emerges. Contrary to what occurs in Hopfield-like neural networks, the frustrated state appears here even in the limit of the loading parameter α→0. Moreover, we observed that the frustrated state in fact corresponds to two states of non-vanishing activity uncorrelated with stored memories, associated, respectively, with a high-activity or Up state and with a low-activity or Down state. Using a linear stability analysis, we found regions in the space of relevant parameters for locally stable steady states and demonstrated that frustrated states coexist with memory attractors below T_t. Thus, multistability between memory and frustrated states is present for relatively small c, and metastability of memory attractors can emerge as c decreases even more. We studied our system using standard mean-field techniques and with Monte Carlo simulations, obtaining a perfect agreement between theory and simulations. Our study can be useful to explain the role of synapse heterogeneity in the emergence of stable Up and Down states not associated with memory attractors, and to explore the conditions to induce transitions among them, as in sleep-wake transitions. Copyright © 2016 Elsevier Ltd. All rights reserved.
What Can Quantum Optics Say about Computational Complexity Theory?
NASA Astrophysics Data System (ADS)
Rahimi-Keshari, Saleh; Lund, Austin P.; Ralph, Timothy C.
2015-02-01
Considering the problem of sampling from the output photon-counting probability distribution of a linear-optical network for input Gaussian states, we obtain results that are of interest from both quantum theory and the computational complexity theory point of view. We derive a general formula for calculating the output probabilities, and by considering input thermal states, we show that the output probabilities are proportional to permanents of positive-semidefinite Hermitian matrices. It is believed that approximating permanents of complex matrices in general is a #P-hard problem. However, we show that these permanents can be approximated with an algorithm in the BPP^NP complexity class, as there exists an efficient classical algorithm for sampling from the output probability distribution. We further consider input squeezed-vacuum states and discuss the complexity of sampling from the probability distribution at the output.
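To make the complexity claim concrete, the fastest known exact algorithm for the permanent, Ryser's inclusion-exclusion formula, still takes O(2^n · n) time, which is why output probabilities built from permanents are hard to evaluate directly for large networks. A minimal sketch (illustrative only, not tied to any specific optical network):

```python
from itertools import combinations
import numpy as np

def permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula, O(2^n * n)."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            rowsums = A[:, cols].sum(axis=1)   # row sums over the column subset
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

print(permanent(np.ones((4, 4))))  # permanent of the all-ones 4x4 matrix is 4! = 24
```

Even this exponential-time routine is only practical up to a few dozen modes, which motivates the approximate, sampling-based classification discussed in the abstract.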
Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states
NASA Astrophysics Data System (ADS)
de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.
2015-12-01
Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.
Mean-Potential Law in Evolutionary Games
NASA Astrophysics Data System (ADS)
Nałecz-Jawecki, Paweł; Miekisz, Jacek
2018-01-01
The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
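For orientation, the quantity the mean-potential method targets can be computed directly in the simplest setting: a birth-death chain on {0, ..., N} with absorbing endpoints has the classical fixation probability ρ_1 = 1 / (1 + Σ_j Π_{k≤j} q_k/p_k). The sketch below uses this standard formula (not the paper's potential construction), with schematic, hypothetical interior rates chosen so that the ratio q_k/p_k is constant, as in a Moran process with constant relative fitness r.

```python
import numpy as np

def fixation_probability(p_up, p_down):
    """Probability of absorption at N starting from one mutant, for a
    birth-death chain on {0,...,N} with absorbing endpoints 0 and N.
    p_up[k], p_down[k] are the transition probabilities at interior state k+1."""
    gammas = np.array(p_down) / np.array(p_up)       # gamma_k = q_k / p_k
    # rho_1 = 1 / (1 + sum_{j=1}^{N-1} prod_{k=1}^{j} gamma_k)
    return 1.0 / (1.0 + np.cumprod(gammas).sum())

# Schematic rates with constant ratio q/p = 1/r (hypothetical normalization;
# only the ratio matters for the fixation probability).
N, r = 10, 1.0                      # neutral case
p_up = [r / (r + 1)] * (N - 1)
p_down = [1 / (r + 1)] * (N - 1)
print(fixation_probability(p_up, p_down))   # neutral drift gives 1/N = 0.1
```

In the neutral case the formula reduces to 1/N, a standard sanity check for any fixation-probability computation.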
Recovery time in quantum dynamics of wave packets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strekalov, M. L., E-mail: strekalov@kinetics.nsc.ru
2017-01-15
A wave packet formed by a linear superposition of bound states with an arbitrary energy spectrum returns arbitrarily close to the initial state after a sufficiently long time. A method in which quantum recovery times are calculated exactly is developed. In particular, an exact analytic expression is derived for the recovery time in the limiting case of a two-level system. In the general case, the reciprocal recovery time is proportional to a Gaussian distribution that depends on two parameters (the mean value and variance of the return probability). The dependence of the recovery time on the mean excitation level of the system is established. The recovery time is longest for the maximal excitation level.
The Sznajd model with limited persuasion: competition between high-reputation and hesitant agents
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Murilo Castro de Oliveira, Paulo
2011-11-01
In this work we study a modified version of the two-dimensional Sznajd sociophysics model. In particular, we consider the effects of agents' reputations in the persuasion rules. In other words, a high-reputation group with a common opinion may convince its neighbors with probability p, which induces an increase of the group's reputation. On the other hand, there is always a probability q = 1 - p of the neighbors keeping their opinions, which induces a decrease of the group's reputation. These rules describe a competition between high-reputation groups and hesitant agents, which makes the full-consensus states (with all spins pointing in one direction) more difficult to reach. As a consequence, the usual phase transition does not occur for p < p_c ≈ 0.69, and the system presents realistic democracy-like situations, where the majority of spins are aligned in a certain direction, for a wide range of parameters.
NASA Astrophysics Data System (ADS)
Lépinoux, J.; Sigli, C.
2018-01-01
In a recent paper, the authors showed how the clusters' free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability appeared to be too complex a function to be predicted accurately knowing only the cluster distribution in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates, then to switch to CD, which is mean-field but, unlike MC, not limited by a box size. In this paper, we explain how to take into account the information available before the quasi-equilibrium state to establish guidelines for safely predicting the cluster free energies.
Min-entropy uncertainty relation for finite-size cryptography
NASA Astrophysics Data System (ADS)
Ng, Nelly Huei Ying; Berta, Mario; Wehner, Stephanie
2012-10-01
Apart from their foundational significance, entropic uncertainty relations play a central role in proving the security of quantum cryptographic protocols. Of particular interest are therefore relations in terms of the smooth min-entropy for Bennett-Brassard 1984 (BB84) and six-state encodings. The smooth min-entropy H_min^ε(X|B) quantifies the negative logarithm of the probability for an attacker B to guess X, except with a small failure probability ε. Previously, strong uncertainty relations were obtained which are valid in the limit of large block lengths. Here, we prove an alternative uncertainty relation in terms of the smooth min-entropy that is only marginally less strong but has the crucial property that it can be applied to rather small block lengths. This paves the way for a practical implementation of many cryptographic protocols. As part of our proof we show tight uncertainty relations for a family of Rényi entropies that may be of independent interest.
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
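The probability transformation at the heart of the analysis above is commonly implemented with the one-parameter weighting function of Tversky and Kahneman; the sketch below uses their 1992 gains estimate γ = 0.61 as an illustrative parameter value and applies the transform to a simple two-outcome gamble.

```python
def w(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function.
    gamma = 0.61 is their 1992 estimate for gains, used here illustratively."""
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Two-outcome gamble: utility 1 with probability p, else 0.
p = 0.05
expected = p * 1.0        # expected-utility valuation
weighted = w(p) * 1.0     # prospect-theory valuation with decision weights
print(expected, weighted) # small probabilities are overweighted (w(p) > p)
```

This inverse-S shape (overweighting small probabilities, underweighting large ones) is what makes transformed standard-gamble utilities and transition probabilities shift the apparent benefit of a strategy, as the abstract reports.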
Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams
NASA Technical Reports Server (NTRS)
Steely, Sidney L.
1993-01-01
The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite-Gaussian laser beams; Sturm's theorem provides a direct proof.
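The integration described above is easy to reproduce numerically: in dimensionless units the classical turning points of level n sit at x = ±√(2n+1), and the fraction of |ψ_n|² between them grows toward unity with n. A minimal sketch (a simple dense-grid quadrature, not the paper's Mathematica computation):

```python
import math
import numpy as np

def fraction_inside_classical_limits(n, pts=20001):
    """Fraction of |psi_n|^2 (dimensionless harmonic oscillator) lying
    between the classical turning points x = +/- sqrt(2n+1)."""
    xt = math.sqrt(2 * n + 1)
    x = np.linspace(-xt, xt, pts)
    # psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    Hn = np.polynomial.hermite.hermval(x, coeffs)   # physicists' Hermite H_n
    psi2 = Hn**2 * np.exp(-x**2) / (2**n * math.factorial(n) * math.sqrt(math.pi))
    dx = x[1] - x[0]
    return float(np.sum(psi2) * dx)   # simple Riemann sum on a dense grid

print(fraction_inside_classical_limits(0))    # ground state: erf(1) ~ 0.843
print(fraction_inside_classical_limits(20))   # higher modes approach unity
```

The n = 0 value is exactly erf(1), a convenient analytic check, and the monotone approach to 1 for larger n is the Correspondence Principle behavior the abstract describes.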
Relation between minimum-error discrimination and optimum unambiguous discrimination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun
2010-09-15
In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that, for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold again, but the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas of the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous discrimination and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m-1)) Q_E.
Pomes, M.L.; Thurman, E.M.; Aga, D.S.; Goolsby, D.A.
1998-01-01
Triazine and chloroacetanilide concentrations in rainfall samples collected from a 23-state region of the United States were analyzed with microtiter-plate enzyme-linked immunosorbent assay (ELISA). Thirty-six percent of rainfall samples (2072 out of 5691) were confirmed using gas chromatography/mass spectrometry (GC/MS) to evaluate the operating performance of ELISA as a screening test. Comparison of ELISA to GC/MS results showed that the two ELISA methods accurately reported GC/MS results (m = 1), but with more variability evident with the triazine than with the chloroacetanilide ELISA. Bayes's rule, a standardized method to report the results of screening tests, indicated that the two ELISA methods yielded comparable predictive values (80%), but the triazine ELISA yielded a false-positive rate of 11.8% and the chloroacetanilide ELISA yielded a false-negative rate of 23.1%. The false-positive rate for the triazine ELISA may arise from cross reactivity with an unknown triazine or metabolite. The false-negative rate of the chloroacetanilide ELISA probably resulted from a combination of low sensitivity at the reporting limit of 0.15 μg/L and a distribution characterized by 75% of the samples at or below the reporting limit of 0.15 μg/L.
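The Bayes's-rule calculation used to summarize screening-test performance can be sketched as follows; the sensitivity and prevalence values here are hypothetical placeholders, not the study's data (only the 11.8% false-positive rate is taken from the abstract):

```python
# Positive predictive value of a screening test via Bayes's rule:
# P(truly positive | test positive).
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    tp = sensitivity * prevalence                  # true-positive mass
    fp = false_positive_rate * (1.0 - prevalence)  # false-positive mass
    return tp / (tp + fp)

# Illustrative numbers only (sensitivity and prevalence are assumptions).
ppv = positive_predictive_value(sensitivity=0.9,
                                false_positive_rate=0.118,
                                prevalence=0.36)
print(f"predictive value: {ppv:.2f}")  # ~0.81 with these inputs
```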
True detection limits in an experimental linearly heteroscedastic system. Part 2
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Despite much different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental-domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance in regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.
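The Currie construction behind these numbers is easiest to see in the simplified homoscedastic Gaussian case with known blank noise; the paper treats the harder linearly heteroscedastic system, so this is only the textbook limiting form, with a hypothetical sigma:

```python
from statistics import NormalDist

def currie_limits(sigma0, alpha=0.05, beta=0.05):
    z = NormalDist().inv_cdf
    y_c = z(1.0 - alpha) * sigma0       # decision level: P(false positive) = alpha
    y_d = y_c + z(1.0 - beta) * sigma0  # detection limit: P(false negative) = beta
    return y_c, y_d

# sigma0 in mV is an assumed illustrative value, not the Part 1 estimate.
y_c, y_d = currie_limits(sigma0=34.0)
print(f"y_C = {y_c:.1f} mV, y_D = {y_d:.1f} mV")
```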
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.
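FORM's core computation, the reliability index and the implied failure probability, can be sketched for the simplest possible case of a linear performance function of standard-normal variables; the paper instead fits the performance function as a response surface to numerical slope-stability runs, so the coefficients below are hypothetical stand-ins:

```python
import numpy as np
from statistics import NormalDist

# Linear limit state g(x) = a.x + b in independent standard-normal space.
# For linear g, the Hasofer-Lind reliability index has a closed form:
# beta = b / ||a||, and P_f = Phi(-beta).
a = np.array([2.0, -1.0])   # hypothetical sensitivity coefficients
b = 6.0                     # hypothetical mean safety margin

beta = b / np.linalg.norm(a)      # distance from origin to limit state
p_f = NormalDist().cdf(-beta)     # first-order probability of failure
print(f"beta = {beta:.3f}, P_f = {p_f:.2e}")
```

For a nonlinear response-surface g, the same index is found iteratively (e.g. the HL-RF algorithm) rather than in closed form.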
The von Neumann model of measurement in quantum mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mello, Pier A.
2014-01-08
We describe how to obtain information on a quantum-mechanical system by coupling it to a probe and detecting some property of the latter, using a model introduced by von Neumann, which describes the interaction of the system proper with the probe in a dynamical way. We first discuss single measurements, where the system proper is coupled to one probe with arbitrary coupling strength. The goal is to obtain information on the system by detecting the probe position. We find the reduced density operator of the system, and show how Lüders rule emerges as the limiting case of strong coupling. The von Neumann model is then generalized to two probes that interact successively with the system proper. Now we find information on the system by detecting the position-position and momentum-position correlations of the two probes. The so-called 'Wigner's formula' emerges in the strong-coupling limit, while 'Kirkwood's quasi-probability distribution' is found as the weak-coupling limit of the above formalism. We show that successive measurements can be used to develop a state-reconstruction scheme. Finally, we find a generalized transform of the state and the observables based on the notion of successive measurements.
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
Theory of Stochastic Laplacian Growth
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2017-07-01
We generalize the diffusion-limited aggregation by issuing many randomly-walking particles, which stick to a cluster at the discrete time unit providing its growth. Using simple combinatorial arguments we determine probabilities of different growth scenarios and prove that the most probable evolution is governed by the deterministic Laplacian growth equation. A potential-theoretical analysis of the growth probabilities reveals connections with the tau-function of the integrable dispersionless limit of the two-dimensional Toda hierarchy, normal matrix ensembles, and the two-dimensional Dyson gas confined in a non-uniform magnetic field. We introduce the time-dependent Hamiltonian, which generates transitions between different classes of equivalence of closed curves, and prove the Hamiltonian structure of the interface dynamics. Finally, we propose a relation between probabilities of growth scenarios and the semi-classical limit of certain correlation functions of "light" exponential operators in the Liouville conformal field theory on a pseudosphere.
Microscopic observation of magnon bound states and their dynamics.
Fukuhara, Takeshi; Schauß, Peter; Endres, Manuel; Hild, Sebastian; Cheneau, Marc; Bloch, Immanuel; Gross, Christian
2013-10-03
The existence of bound states of elementary spin waves (magnons) in one-dimensional quantum magnets was predicted almost 80 years ago. Identifying signatures of magnon bound states has so far remained the subject of intense theoretical research, and their detection has proved challenging for experiments. Ultracold atoms offer an ideal setting in which to find such bound states by tracking the spin dynamics with single-spin and single-site resolution following a local excitation. Here we use in situ correlation measurements to observe two-magnon bound states directly in a one-dimensional Heisenberg spin chain comprising ultracold bosonic atoms in an optical lattice. We observe the quantum dynamics of free and bound magnon states through time-resolved measurements of two spin impurities. The increased effective mass of the compound magnon state results in slower spin dynamics as compared to single-magnon excitations. We also determine the decay time of bound magnons, which is probably limited by scattering on thermal fluctuations in the system. Our results provide a new way of studying fundamental properties of quantum magnets and, more generally, properties of interacting impurities in quantum many-body systems.
Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.
Chevallier, Maguelonne; Krauth, Werner
2007-11-01
We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
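The Landsberg recursion and the cycle probabilities derived from it can be sketched directly; the three-level single-particle spectrum here is a hypothetical example, not a box spectrum:

```python
import math

# Landsberg recursion for N ideal bosons:
#   Z_N = (1/N) * sum_{k=1}^{N} z(k*beta) * Z_{N-k},  Z_0 = 1,
# and the probability that a given particle lies on a cycle of length k:
#   pi_k = z(k*beta) * Z_{N-k} / (N * Z_N).
def z1(beta):
    levels = [0.0, 1.0, 2.0]  # hypothetical single-particle energies
    return sum(math.exp(-beta * e) for e in levels)

def cycle_probabilities(N, beta):
    Z = [1.0]
    for n in range(1, N + 1):
        Z.append(sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)
    return [z1(k * beta) * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)]

pi = cycle_probabilities(N=10, beta=1.0)
print(sum(pi))  # the pi_k sum to 1 by the recursion itself
```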
Teleportation of Three-Qubit State via Six-qubit Cluster State
NASA Astrophysics Data System (ADS)
Yu, Li-zhi; Sun, Shao-xin
2015-05-01
A scheme of probabilistic teleportation was proposed. In this scheme, we took a six-qubit non-maximally entangled cluster state as the quantum channel to teleport an unknown three-qubit entangled state. Conditioned on the results of the three Bell-state measurements (BSM), the receiver Bob can reconstruct the initial state with a certain probability by introducing an auxiliary particle and applying an appropriate unitary transformation. We found that the probability of successful transmission depends on the smallest absolute value among the coefficients of the six-particle cluster state.
Optimal causal inference: estimating stored information and approximating causal architecture.
Still, Susanne; Crutchfield, James P; Ellison, Christopher J
2010-09-01
We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
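The extreme-value idea, that the mission limit load is the maximum of many per-event loads and the design limit load is a chosen quantile of that maximum's distribution, can be sketched with simulated data; the load model and quantile level are hypothetical:

```python
import numpy as np

# Each "mission" experiences many load events; the limit load is the
# mission maximum, whose distribution approaches an extreme-value
# (Gumbel) form. Gaussian per-event loads are an assumption here.
rng = np.random.default_rng(1)
maxima = rng.standard_normal((5000, 1000)).max(axis=1)  # 5000 missions

# Design limit load at 99% non-exceedance, from the empirical distribution.
design_limit = np.quantile(maxima, 0.99)
print(f"99th-percentile mission limit load: {design_limit:.2f}")
```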
First Detected Arrival of a Quantum Walker on an Infinite Line
NASA Astrophysics Data System (ADS)
Thiel, Felix; Barkai, Eli; Kessler, David A.
2018-01-01
The first detection of a quantum particle on a graph is shown to depend sensitively on the distance ξ between the detector and initial location of the particle, and on the sampling time τ . Here, we use the recently introduced quantum renewal equation to investigate the statistics of first detection on an infinite line, using a tight-binding lattice Hamiltonian with nearest-neighbor hops. Universal features of the first detection probability are uncovered and simple limiting cases are analyzed. These include the large ξ limit, the small τ limit, and the power law decay with the attempt number of the detection probability over which quantum oscillations are superimposed. For large ξ the first detection probability assumes a scaling form and when the sampling time is equal to the inverse of the energy band width nonanalytical behaviors arise, accompanied by a transition in the statistics. The maximum total detection probability is found to occur for τ close to this transition point. When the initial location of the particle is far from the detection node we find that the total detection probability attains a finite value that is distance independent.
Maximum Relative Entropy of Coherence: An Operational Coherence Measure.
Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde
2017-10-13
The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
Distribution of Elevated Nitrate Concentrations in Ground Water in Washington State
Frans, Lonna
2008-01-01
More than 60 percent of the population of Washington State uses ground water for their drinking and cooking needs. Nitrate concentrations in ground water are elevated in parts of the State as a result of various land-use practices, including fertilizer application, dairy operations and ranching, and septic-system use. Shallow wells generally are more vulnerable to nitrate contamination than deeper wells (Williamson and others, 1998; Ebbert and others, 2000). In order to protect public health, the Washington State Department of Health requires that public water systems regularly measure nitrate in their wells. Public water systems serving more than 25 people collect water samples at least annually; systems serving from 2 to 14 people collect water samples at least every 3 years. Private well owners serving one residence may be required to sample when the well is first drilled, but are unregulated after that. As a result, limited information is available to citizens and public health officials about potential exposure to elevated nitrate concentrations for people whose primary drinking-water sources are private wells. The U.S. Geological Survey and Washington State Department of Health collaborated to examine water-quality data from public water systems and develop models that calculate the probability of detecting elevated nitrate concentrations in ground water. Maps were then developed to estimate ground water vulnerability to nitrate in areas where limited data are available.
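A model of the kind the report describes typically takes a logistic-regression form, mapping well and land-use predictors to a probability of elevated nitrate; everything below (predictors, coefficients, values) is hypothetical for illustration:

```python
import math

def p_elevated_nitrate(depth_m, fertilized_fraction,
                       b0=-1.0, b_depth=-0.02, b_fert=3.0):
    """P(nitrate exceeds threshold) via a logistic model (illustrative)."""
    logit = b0 + b_depth * depth_m + b_fert * fertilized_fraction
    return 1.0 / (1.0 + math.exp(-logit))

# Shallow well under heavy fertilizer use vs. deep well, little agriculture:
p_shallow = p_elevated_nitrate(15.0, 0.8)
p_deep = p_elevated_nitrate(120.0, 0.1)
print(p_shallow, p_deep)  # shallow wells come out more vulnerable
```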
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadyrbekov, M. S., E-mail: nodirbekov@inp.uz; Bozarov, O. A.
Reduced probabilities for intra- and interband E2 transitions in excited collective states of even-even lanthanide and actinide nuclei are analyzed on the basis of a model that admits an arbitrary triaxiality. They are studied in detail in the energy spectra of the ^154Sm, ^156Gd, ^158Dy, ^162,164Er, ^230,232Th, and ^232,234,236,238U even-even nuclei. Theoretical and experimental values of the reduced probabilities for the respective E2 transitions are compared. This comparison shows good agreement for all states, including high-spin ones. The ratios of the reduced probabilities for the E2 transitions in question are compared with results following from the Alaga rules. These comparisons make it possible to assess the sensitivity of the probabilities being considered to the presence of quadrupole deformations.
Use of uninformative priors to initialize state estimation for dynamical systems
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.
2017-10-01
The admissible region must be expressed probabilistically in order to be used in Bayesian estimation schemes. When treated as a probability density function (PDF), a uniform admissible region can be shown to have non-uniform probability density after a transformation. An alternative approach can be used to express the admissible region probabilistically according to the Principle of Transformation Groups. This paper uses a fundamental multivariate probability transformation theorem to show that regardless of which state space an admissible region is expressed in, the probability density must remain the same under the Principle of Transformation Groups. The admissible region can be shown to be analogous to an uninformative prior with a probability density that remains constant under reparameterization. This paper introduces requirements on how these uninformative priors may be transformed and used for state estimation and the difference in results when initializing an estimation scheme via a traditional transformation versus the alternative approach.
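The central point, that a density uniform in one parameterization is non-uniform after a nonlinear transformation because the Jacobian enters as p_y(y) = p_x(x(y))|dx/dy|, can be demonstrated with a toy reparameterization (the choice y = 1/x is a hypothetical example, not the paper's state mapping):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, size=200_000)  # uniform density p_x = 1 on [1, 2]
y = 1.0 / x                              # nonlinear reparameterization

# Change-of-variables prediction: p_y(y) = 1 / y**2 on [0.5, 1].
hist, edges = np.histogram(y, bins=20, range=(0.5, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
max_err = np.max(np.abs(hist - 1.0 / centers**2))
print(max_err)  # small sampling/binning error; density is clearly non-uniform
```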
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
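A generic Wald SPRT (not the paper's conjunction-specific likelihood ratio) accumulates a log-likelihood ratio and stops at Wald's two thresholds; the Gaussian hypotheses and parameters below are illustrative assumptions:

```python
import math

def wald_sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Sequentially test H1: N(mu1, sigma) vs H0: N(mu0, sigma)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for x in samples:
        # Per-sample Gaussian log-likelihood ratio increment.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2)
        if llr >= upper:
            return "accept H1"
        if llr <= lower:
            return "accept H0"
    return "undecided"

print(wald_sprt([1.0] * 50))  # data consistent with H1
```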
Intelligent Microscopes: Recent And Near-Future Advances
NASA Astrophysics Data System (ADS)
Prewitt, Judith M. S.
1980-02-01
Robert Hooke conjectured about fluid circulation in plants as well as in animals in Micrographia, in a passage that is equally important as a commentary on the dependence, not of technology on science, but of science on technology: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with better Microscopes, may in time detect. This paper, written in the form of a scientific poem, reviews the current and near-future state-of-the-art of automated intelligent microscopes based on computer science and technology. The basic concepts of computer intelligence for cytology and histology are presented and elaborated. Limitations of commercial devices and research prototypes are examined (Dx), and remedies are suggested (Rx). The course of action proposed and being undertaken constitutes an original contribution toward advancing the state-of-the-science, in the hope of advancing the state-of-the-art of medicine. With rapid, contemporary advances in both science and technology, it may now be appropriate to modify Hooke's passage: It seems very probable that Nature has ... very many appropriated instruments and contrivances, whereby to bring her designs and end to pass, which 'tis not improbable but that some diligent observer, if helped with Intelligent Microscopes, may in time detect.
Realistic Covariance Prediction for the Earth Science Constellation
NASA Technical Reports Server (NTRS)
Duncan, Matthew; Long, Anne
2006-01-01
Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
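The Monte Carlo side of the collision-probability computation can be sketched by sampling the relative state from the combined covariance at closest approach and counting samples inside the combined hard-body radius; the relative state, covariance, and radius below are hypothetical, not ESC values:

```python
import numpy as np

rng = np.random.default_rng(7)
rel_mean = np.array([120.0, -40.0, 60.0])  # m, relative position at TCA
cov = np.diag([80.0, 50.0, 60.0]) ** 2     # m^2, combined covariance
hard_body_radius = 20.0                    # m, combined object size

# Sample relative positions; a "collision" sample falls within the
# hard-body sphere centered on the origin.
samples = rng.multivariate_normal(rel_mean, cov, size=500_000)
p_c = np.mean(np.linalg.norm(samples, axis=1) < hard_body_radius)
print(f"Monte Carlo collision probability: {p_c:.2e}")
```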
Canine Visceral Leishmaniasis, United States and Canada, 2000–2003
Duprey, Zandra H.; Steurer, Francis J.; Rooney, Jane A.; Kirchhoff, Louis V.; Jackson, Joan E.; Rowton, Edgar D.
2006-01-01
Visceral leishmaniasis, caused by protozoa of the genus Leishmania donovani complex, is a vectorborne zoonotic infection that infects humans, dogs, and other mammals. In 2000, this infection was implicated as causing high rates of illness and death among foxhounds in a kennel in New York. A serosurvey of >12,000 foxhounds and other canids and 185 persons in 35 states and 4 Canadian provinces was performed to determine geographic extent, prevalence, host range, and modes of transmission within foxhounds, other dogs, and wild canids and to assess possible infections in humans. Foxhounds infected with Leishmania spp. were found in 18 states and 2 Canadian provinces. No evidence of infection was found in humans. The infection in North America appears to be widespread in foxhounds and limited to dog-to-dog mechanisms of transmission; however, if the organism becomes adapted for vector transmission by indigenous phlebotomines, the probability of human exposure will be greatly increased. PMID:16704782
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled through ordinary differential equations, which are piece-wise continuous, and switch between different states according to a Markov jump process known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ε^{-1}. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ε).
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
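The likelihood-based interval for a T-year flood can be sketched by profiling the Gumbel log-likelihood over the quantile x_T and keeping values within the chi-square cutoff of the maximum; synthetic data stand in for the Itajai-Acu records, and the simple grid-plus-Nelder-Mead scheme below is an illustrative choice, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import gumbel_r, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(42)
data = gumbel_r.rvs(loc=100.0, scale=20.0, size=60, random_state=rng)
T = 50
p = 1.0 - 1.0 / T

def negloglik(theta):
    loc, scale = theta
    return np.inf if scale <= 0 else -gumbel_r.logpdf(data, loc, scale).sum()

# Unconstrained MLE and the T-year quantile estimate.
res = minimize(negloglik, x0=[np.mean(data), np.std(data)], method="Nelder-Mead")
loc_hat, scale_hat = res.x
x_T = gumbel_r.ppf(p, loc_hat, scale_hat)

# Profile over x_T via the reparameterization loc = x_T - scale * q.
q = -np.log(-np.log(p))
cutoff = res.fun + 0.5 * chi2.ppf(0.95, df=1)
grid = np.linspace(x_T - 40.0, x_T + 60.0, 200)
profile = [minimize(lambda s: negloglik([xt - s[0] * q, s[0]]),
                    x0=[scale_hat], method="Nelder-Mead").fun for xt in grid]
inside = grid[np.array(profile) <= cutoff]
print(f"x_50 = {x_T:.1f}; 95% CI = [{inside.min():.1f}, {inside.max():.1f}]")
```

As the abstract notes, such intervals are generally asymmetric about the point estimate, unlike central-limit-theorem intervals.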
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
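The claim that Hebbian coincidence learning with pre-synaptic competition converges to conditional forward transition probabilities can be illustrated with a toy two-state Markov chain; the chain, the counting-plus-normalization update, and all rates are hypothetical simplifications of the paper's plasticity models:

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.1, 0.9],
              [0.7, 0.3]])  # forward transition probabilities (assumed)

# Generate a long state sequence from the chain.
states = [0]
for _ in range(50_000):
    states.append(rng.choice(2, p=P[states[-1]]))

# Hebbian coincidence counting: W[i, j] accumulates pre (state t) ->
# post (state t+1) pairings; pre-synaptic (row) normalization plays the
# role of competition among a pre-synaptic neuron's outputs.
W = np.zeros((2, 2))
for s, s_next in zip(states[:-1], states[1:]):
    W[s, s_next] += 1.0
W /= W.sum(axis=1, keepdims=True)
print(np.round(W, 2))  # close to the forward probabilities P
```

Normalizing columns instead of rows would instead recover the backward probabilities, mirroring the pre- vs post-synaptic competition distinction in the abstract.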
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
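The two-step construction, fix the threshold from the Type I error, then find the intensity whose Type II error meets the target, can be sketched for a Poisson counting observation; the background level and error rates below are hypothetical:

```python
from scipy.stats import poisson

b = 3.0       # expected background counts (assumed)
alpha = 0.01  # Type I error: P(counts >= threshold | background only)
beta = 0.5    # Type II error allowed at the upper limit

# Step 1: smallest threshold with false-positive probability <= alpha.
n_thresh = 0
while poisson.sf(n_thresh - 1, b) > alpha:
    n_thresh += 1

# Step 2: smallest source intensity s whose false-negative probability,
# P(counts < threshold | b + s), drops to beta. That s is the upper limit.
s = 0.0
while poisson.cdf(n_thresh - 1, b + s) > beta:
    s += 0.01
print(f"threshold = {n_thresh} counts, upper limit s = {s:.2f}")
```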
NASA Technical Reports Server (NTRS)
Soneira, R. M.; Bahcall, J. N.
1981-01-01
Probabilities are calculated for acquiring suitable guide stars (GS) with the fine guidance system (FGS) of the space telescope. A number of the considerations and techniques described are also relevant for other space astronomy missions. The constraints of the FGS are reviewed. The available data on bright star densities are summarized and a previous error in the literature is corrected. Separate analytic and Monte Carlo calculations of the probabilities are described. A simulation of space telescope pointing is carried out using the Weistrop north galactic pole catalog of bright stars. Sufficient information is presented so that the probabilities of acquisition can be estimated as a function of position in the sky. The probability of acquiring suitable guide stars is greatly increased if the FGS can allow an appreciable difference between the (bright) primary GS limiting magnitude and the (fainter) secondary GS limiting magnitude.
Bae, Sung-Heui; Brewer, Carol S; Kovner, Christine T
2012-01-01
Nurse overtime has been used to handle normal variations in patient census and to control chronic understaffing. By 2010, 16 states had regulations to limit nurse overtime. We examined mandatory overtime regulations and their association with mandatory and voluntary overtime and total hours worked by newly licensed registered nurses (NLRNs). For this secondary data analysis, we used a panel survey of NLRNs; the final dataset consisted of 1,706 NLRNs. Nurses working in states that instituted overtime regulations after 2003 or in states that restricted any type of mandatory overtime had a lower probability of experiencing mandatory overtime than those nurses working in states without regulations. Nurses who worked in states with mandatory overtime regulations reported fewer total hours worked per week. The findings of this study provided insight into how mandatory overtime regulations were related to nurse mandatory and voluntary overtime and the total number of hours worked. Future research should investigate institutions' compliance with regulations and the impact of regulations on nurse and patient outcomes. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
George, Russ
2005-03-01
Nano-lattices of deuterium-loving metals exhibit coherent behavior by populations of deuterons (d's) occupying a Bloch state. Therein, coherent d-overlap occurs wherein the Bloch condition reduces the Coulomb barrier. Overlap of dd pairs provides a high probability that fusion will/must occur. SEM photo evidence showing fusion events is now revealed by laboratories that load or flux d into metal nano-domains. Solid-state dd fusion creates an excited ^4He nucleus entangled in the large coherent population of d's. This contrasts with plasma dd fusion in collision space, where an isolated excited ^4He nucleus seeks the ground state via fast particle emission. In momentum-limited solid-state fusion, fast particle emission is effectively forbidden. Photographed nano-explosive events are beyond the scope of chemistry. Corroboration of the nuclear nature derives from photographic observation of similar events in spontaneous fission, e.g. Cf. We present predictive theory, heat production, and helium isotope data showing reproducible 10^14 to 10^16 solid-state fusion reactions.
Moran, Michael J.; Zogorski, John S.; Squillace, Paul J.
2004-01-01
The occurrence and implications of methyl tert-butyl ether (MTBE) and gasoline hydrocarbons were examined in three surveys of water quality conducted by the U.S. Geological Survey: one national-scale survey of ground water, one national-scale survey of source water from ground water, and one regional-scale survey of drinking water from ground water. The overall detection frequency of MTBE in all three surveys was similar to the detection frequencies of some other volatile organic compounds (VOCs) that have much longer production and use histories in the United States. The detection frequency of MTBE was higher in drinking water and lower in source water and ground water. However, when the data for ground water and source water were limited to the same geographic extent as drinking-water data, the detection frequencies of MTBE were comparable to the detection frequency of MTBE in drinking water. In all three surveys, the detection frequency of any gasoline hydrocarbon was less than the detection frequency of MTBE. No concentration of MTBE in source water exceeded the lower limit of the U.S. Environmental Protection Agency's Drinking-Water Advisory of 20 µg/L (micrograms per liter). One concentration of MTBE in ground water exceeded 20 µg/L, and 0.9 percent of drinking-water samples exceeded 20 µg/L. The overall detection frequency of MTBE relative to other widely used VOCs indicates that MTBE is an important concern with respect to ground-water management. The probability of detecting MTBE was strongly associated with population density, use of MTBE in gasoline, and recharge, and weakly associated with density of leaking underground storage tanks, soil permeability, and aquifer consolidation. Only concentrations of MTBE above 0.5 µg/L were associated with dissolved oxygen.
Ground water underlying areas with high population density, ground water underlying areas where MTBE is used as a gasoline oxygenate, and ground water underlying areas with high recharge have a greater probability of MTBE contamination. Ground water from public-supply wells and shallow ground water underlying urban land-use areas have a greater probability of MTBE contamination compared to ground water from domestic wells and ground water underlying rural land-use areas.
Probability Issues in without Replacement Sampling
ERIC Educational Resources Information Center
Joarder, A. H.; Al-Sabah, W. S.
2007-01-01
Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…
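The different calculation methods the abstract reviews can be compared on a concrete case. The sketch below contrasts the two standard routes to a without-replacement probability, sequential conditioning versus combinatorial counting, for a made-up urn composition (not an example from the article).

```python
from fractions import Fraction
from math import comb

def p_all_red(red, blue, draws):
    """Probability that `draws` successive draws without replacement are all red,
    computed by sequential conditioning: each draw shrinks the urn by one."""
    p = Fraction(1)
    total = red + blue
    for i in range(draws):
        p *= Fraction(red - i, total - i)
    return p

# Sequential conditioning: (4/10) * (3/9).
sequential = p_all_red(red=4, blue=6, draws=2)

# Combinatorial counting: favorable pairs over all pairs, C(4,2)/C(10,2).
counting = Fraction(comb(4, 2), comb(10, 2))
```

Exact rational arithmetic makes it easy to show students that the two representations agree (both give 2/15 here).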
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multiple Conditions
2009-03-01
United States Air Force, Department of Defense, or the United States Government. AFIT/GE/ENG/09-23 Low Probability of Intercept Waveforms via...21 D random variable governing the distribution of dither values 21 p (ct) D (t) probability density function of the...potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath. 1.3 Thesis
Low Probability of Intercept Waveforms via Intersymbol Dither Performance Under Multipath Conditions
2009-03-01
United States Air Force, Department of Defense, or the United States Government. AFIT/GE/ENG/09-23 Low Probability of Intercept Waveforms via...21 D random variable governing the distribution of dither values 21 p (ct) D (t) probability density function of the...potential performance loss of a non-cooperative receiver compared to a cooperative receiver designed to account for ISI and multipath. 1.3 Thesis
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
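The likelihood the authors maximize can be evaluated with the standard scaled forward algorithm. The sketch below assumes, purely for illustration, exponentially distributed interspike intervals and two hypothetical states ("bursting" at 50 Hz, "quiet" at 5 Hz); the paper's actual emission densities and fitted parameters are not reproduced here.

```python
import math

def forward_loglik(intervals, trans, rates, init):
    """Log-likelihood of an interspike-interval sequence under an HMM whose
    hidden states emit exponentially distributed intervals (rate per state)."""
    n_states = len(rates)
    # alpha[j]: joint probability of observations so far and being in state j
    alpha = [init[j] * rates[j] * math.exp(-rates[j] * intervals[0])
             for j in range(n_states)]
    loglik = 0.0
    for x in intervals[1:]:
        scale = sum(alpha)          # rescale to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n_states))
                 * rates[j] * math.exp(-rates[j] * x)
                 for j in range(n_states)]
    return loglik + math.log(sum(alpha))

# Hypothetical two-state model and a short interval series (in seconds).
trans = [[0.9, 0.1], [0.2, 0.8]]
rates = [50.0, 5.0]
ll = forward_loglik([0.02, 0.03, 0.25, 0.30, 0.02], trans, rates, [0.5, 0.5])
```

In the degenerate one-state case the recursion collapses to the plain exponential log-likelihood, which gives a quick correctness check.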
Teleportation of entangled states without Bell-state measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoso, Wesley B.; Baseia, B.; Avelar, A.T.
2005-10-15
In a recent paper [Phys. Rev. A 70, 025803 (2004)] we presented a scheme to teleport an entanglement of zero- and one-photon states from a bimodal cavity to another one, with 100% success probability. Here, inspired by recent results in the literature, we have modified our previous proposal to teleport the same entangled state without using Bell-state measurements. For comparison, the time spent, the fidelity, and the success probability for this teleportation are considered.
Self-imposed length limits in recreational fisheries
Chizinski, Christopher J.; Martin, Dustin R.; Hurley, Keith L.; Pope, Kevin L.
2014-01-01
A primary motivating factor in the decision to harvest a fish among consumptive-oriented anglers is the size of the fish. There is likely a cost-benefit trade-off for harvest of individual fish that is size and species dependent, which should produce a logistic-type response of fish fate (release or harvest) as a function of fish size and species. We define the self-imposed length limit as the length at which a captured fish had a 50% probability of being harvested, which was selected because it marks the length of the fish where the probability of harvest becomes greater than the probability of release. We assessed the influences of fish size, catch per unit effort, size distribution of caught fish, and creel limit on the self-imposed length limits for bluegill Lepomis macrochirus, channel catfish Ictalurus punctatus, black crappie Pomoxis nigromaculatus and white crappie Pomoxis annularis combined, white bass Morone chrysops, and yellow perch Perca flavescens at six lakes in Nebraska, USA. As we predicted, the probability of harvest increased with increasing size for all species harvested, which supported the concept of a size-dependent trade-off in costs and benefits of harvesting individual fish. It was also clear that probability of harvest was not simply defined by fish length, but rather was likely influenced to various degrees by interactions between species, catch rate, size distribution, creel-limit regulation and fish size. A greater understanding of harvest decisions within the context of perceived likelihood that a creel limit will be realized by a given angler party, which is a function of fish availability, harvest regulation and angler skill and orientation, is needed to predict the influence that anglers have on fish communities and to allow managers to sustainably manage exploited fish populations in recreational fisheries.
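The logistic-type response described above has a closed-form 50% point: if the harvest probability is p(L) = 1/(1 + e^-(a + bL)), the self-imposed length limit is L50 = -a/b. The coefficients below are hypothetical, not values fitted in the study.

```python
import math

def harvest_probability(length_mm, intercept, slope):
    """Logistic probability that a captured fish of a given length is harvested."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * length_mm)))

def self_imposed_length_limit(intercept, slope):
    """Length at which harvest probability crosses 0.5 (the logistic midpoint)."""
    return -intercept / slope

# Hypothetical coefficients for one species/lake combination.
a, b = -6.0, 0.04              # harvest probability rises with length
l50 = self_imposed_length_limit(a, b)
```

Above L50 harvest is more likely than release; below it, the reverse, which is exactly the threshold interpretation the abstract uses.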
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
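The extreme-value step can be sketched in closed form: if each of n loads in a mission is i.i.d. with CDF F, the mission maximum has CDF F(x)^n, so the design limit load at exceedance-complement probability p is the p^(1/n)-quantile of a single load. The Gaussian load model and the numbers below are assumptions for illustration, not the report's load spectra.

```python
from statistics import NormalDist

def design_limit_load(mu, sigma, loads_per_mission, percentile):
    """Percentile of the largest of n i.i.d. Gaussian loads in a mission.
    The p-quantile of the maximum equals the p**(1/n)-quantile of one load."""
    p_single = percentile ** (1.0 / loads_per_mission)
    return mu + sigma * NormalDist().inv_cdf(p_single)

# Hypothetical load model: mean 100 kN, std 10 kN, 1000 load cycles per mission.
dll = design_limit_load(mu=100.0, sigma=10.0, loads_per_mission=1000, percentile=0.99)
```

Note that more loads per mission pushes the design limit load upward, since the mission maximum is drawn from a tail that is sampled more often.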
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nair, Ranjith
2011-09-15
We consider the problem of distinguishing, with minimum probability of error, two optical beam-splitter channels with unequal complex-valued reflectivities using general quantum probe states entangled over M signal and M' idler mode pairs of which the signal modes are bounced off the beam splitter while the idler modes are retained losslessly. We obtain a lower bound on the output state fidelity valid for any pure input state. We define number-diagonal signal (NDS) states to be input states whose density operator in the signal modes is diagonal in the multimode number basis. For such input states, we derive series formulas for the optimal error probability, the output state fidelity, and the Chernoff-type upper bounds on the error probability. For the special cases of quantum reading of a classical digital memory and target detection (for which the reflectivities are real valued), we show that for a given input signal photon probability distribution, the fidelity is minimized by the NDS states with that distribution and that for a given average total signal energy N_s, the fidelity is minimized by any multimode Fock state with N_s total signal photons. For reading of an ideal memory, it is shown that Fock state inputs minimize the Chernoff bound. For target detection under high-loss conditions, a no-go result showing the lack of appreciable quantum advantage over coherent state transmitters is derived. A comparison of the error probability performance for quantum reading of number state and two-mode squeezed vacuum state (or EPR state) transmitters relative to coherent state transmitters is presented for various values of the reflectances. While the nonclassical states in general perform better than the coherent state, the quantitative performance gains differ depending on the values of the reflectances.
The experimental outlook for realizing nonclassical gains from number state transmitters with current technology at moderate to high values of the reflectances is argued to be good.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, O. (Departamento de Fisica, Facultad de Ciencias Basicas, Universidad de Antofagasta); Bergou, J.
We study the probabilistic cloning of three symmetric states. These states are defined by a single complex quantity, the inner product among them. We show that three different probabilistic cloning machines are necessary to optimally clone all possible families of three symmetric states. We also show that the optimal cloning probability of generating M copies out of one original can be cast as the quotient between the success probability of unambiguously discriminating one and M copies of symmetric states.
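The quotient structure mentioned above can be illustrated in the simplest setting of two equiprobable pure states with overlap s, where the optimal unambiguous-discrimination probability is 1 - s and M copies behave like states with overlap s^M. This two-state analogue is an assumption chosen for illustration; it is not the three-symmetric-state result of the paper.

```python
def discrimination_probability(s, copies):
    """Optimal unambiguous-discrimination success probability for two
    equiprobable pure states with overlap s, given `copies` copies each
    (the effective overlap of the copies is s**copies)."""
    return 1.0 - s ** copies

def cloning_probability(s, m):
    """1 -> m probabilistic-cloning success probability written as the quotient
    of one-copy and m-copy discrimination probabilities (two-state analogue)."""
    return discrimination_probability(s, 1) / discrimination_probability(s, m)

p = cloning_probability(s=0.5, m=2)    # (1 - 0.5) / (1 - 0.25) = 2/3
```

As m grows the quotient tends to 1 - s, i.e. cloning converges to state discrimination followed by preparation, consistent with the limit noted elsewhere in this listing.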
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, observing that the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.
Biased growth processes and the ``rich-get-richer'' principle
NASA Astrophysics Data System (ADS)
de Moura, Alessandro P.
2004-05-01
We study a simple stochastic system with a "rich-get-richer" behavior, in which there are two states, and N particles that are successively assigned to one of the states, with a probability p_i that depends on the states' occupations n_i as p_i = n_i^γ / (n_1^γ + n_2^γ). We show that there is a phase transition as γ crosses the critical value γ_c = 1. For γ < 1, in the thermodynamic limit the occupations are approximately the same, n_1 ≈ n_2. For γ > 1, however, a spontaneous symmetry breaking occurs, and the system goes to a highly clustered configuration, in which one of the states has almost all the particles. These results also hold for any finite number of states (not only two). We show that this "rich-get-richer" principle governs the growth dynamics in a simple model of gravitational aggregation, and we argue that the same is true in all growth processes mediated by long-range forces like gravity.
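The assignment rule p_i = n_i^γ / (n_1^γ + n_2^γ) is straightforward to simulate. The seeded sketch below (with made-up system sizes) exhibits the two regimes: near-equal occupations for γ < 1 and symmetry breaking for γ > 1.

```python
import random

def final_fraction(gamma, n_particles, seed=1):
    """Assign particles one by one to two states with p_i proportional to
    n_i**gamma; return the occupation fraction of the fuller state."""
    rng = random.Random(seed)
    n = [1, 1]                        # seed occupations so p_i is well defined
    for _ in range(n_particles):
        w1, w2 = n[0] ** gamma, n[1] ** gamma
        i = 0 if rng.random() < w1 / (w1 + w2) else 1
        n[i] += 1
    return max(n) / sum(n)

balanced = final_fraction(gamma=0.5, n_particles=10000)    # stays near 1/2
clustered = final_fraction(gamma=2.0, n_particles=10000)   # symmetry breaks
```

For γ > 1 an early random lead is amplified super-linearly, so one state ends up with nearly all particles; for γ < 1 the feedback is self-balancing.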
Yin, Yuan; Shi, Deheng; Sun, Jinfeng; Zhu, Zunlue
2017-11-02
This work investigates the spectroscopic parameters, vibrational levels, and transition probabilities of 12 low-lying states generated from the first dissociation limit, Br(²Pᵤ) + O⁻(²Pᵤ), of the BrO⁻ anion. The 12 states are X¹Σ⁺, 2¹Σ⁺, 1¹Σ⁻, 1¹Π, 2¹Π, 1¹Δ, a³Π, 1³Σ⁺, 2³Σ⁺, 1³Σ⁻, 2³Π, and 1³Δ. The potential energy curves are calculated with the complete active-space self-consistent field method, followed by the internally contracted multireference configuration interaction approach with Davidson modification. The dissociation energy D₀ of the X¹Σ⁺ state is determined to be approximately 26876.44 cm⁻¹, which agrees well with the experimental value of 26494.50 cm⁻¹. Of these 12 states, the 2¹Σ⁺, 1¹Σ⁻, 2¹Π, 1¹Δ, 1³Σ⁺, 2³Σ⁺, 2³Π, and 1³Δ states are very weakly bound, with well depths of only several hundred cm⁻¹. The a³Π, 2³Π, and 1³Δ states are inverted when the spin-orbit coupling effect is accounted for. No states are repulsive, regardless of whether the spin-orbit coupling effect is included. The spectroscopic parameters and vibrational levels are determined. The transition dipole moments of 12 pairs of electronic states are calculated. Franck-Condon factors for transitions of more than 20 pairs of electronic states are evaluated. The electronic transitions are discussed. The spin-orbit coupling effect on the spectroscopic parameters and vibrational properties is profound for all the states except X¹Σ⁺, a³Π, and 1¹Π. The spectroscopic parameters and transition probabilities obtained in this paper can provide powerful guidelines for observing these states in a proper spectroscopy experiment, in particular the states that have very shallow potential wells.
Statistical methods for incomplete data: Some results on model misspecification.
McIsaac, Michael; Cook, R J
2017-02-01
Inverse probability weighted estimating equations and multiple imputation are two of the most studied frameworks for dealing with incomplete data in clinical and epidemiological research. We examine the limiting behaviour of estimators arising from inverse probability weighted estimating equations, augmented inverse probability weighted estimating equations and multiple imputation when the requisite auxiliary models are misspecified. We compute limiting values for settings involving binary responses and covariates and illustrate the effects of model misspecification using simulations based on data from a breast cancer clinical trial. We demonstrate that, even when both auxiliary models are misspecified, the asymptotic biases of double-robust augmented inverse probability weighted estimators are often smaller than the asymptotic biases of estimators arising from complete-case analyses, inverse probability weighting or multiple imputation. We further demonstrate that use of inverse probability weighting or multiple imputation with slightly misspecified auxiliary models can actually result in greater asymptotic bias than the use of naïve complete-case analyses. These asymptotic results are shown to be consistent with empirical results from simulation studies.
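The basic weighting mechanics can be shown in a toy simulation: when the probability of observing the outcome depends on a covariate, the complete-case mean is biased, while inverse probability weighting with a correctly specified response model removes the bias. The data-generating process and response model below are invented for illustration; this is not the paper's augmented or doubly robust estimator.

```python
import random

def simulate(n=20000, seed=7):
    """Toy missing-data example: Y is observed with probability depending on X,
    so the complete-case mean is biased while IPW with the true weights is not."""
    rng = random.Random(seed)
    num_cc = den_cc = num_ipw = den_ipw = 0.0
    p_obs = {0: 0.5, 1: 0.9}          # response (observation) model
    for _ in range(n):
        x = rng.random() < 0.5
        y = 1.0 if x else 0.0          # true population mean of Y is 0.5
        if rng.random() < p_obs[int(x)]:
            w = 1.0 / p_obs[int(x)]    # inverse probability weight
            num_cc += y;      den_cc += 1.0
            num_ipw += w * y; den_ipw += w
    return num_cc / den_cc, num_ipw / den_ipw

cc_mean, ipw_mean = simulate()
```

Units with X = 1 are over-represented among the observed cases, pulling the complete-case mean well above 0.5; the weights restore the original covariate balance.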
Hansstein, Francesca V
2016-03-01
To investigate how breastfeeding initiation and duration affect the likelihood of being overweight and obese in children aged 2 to 5. Cross-sectional data from the 2003 National Survey of Children's Health. Rural and urban areas of the United States. Households where at least one member was between the ages of 2 and 5 (sample size 8207). Parent-reported body mass index, breastfeeding initiation and duration, covariates (gender, family income and education, ethnicity, child care attendance, maternal health and physical activity, residential area). Partial proportional odds models. In early childhood, breastfed children had 5.3% higher probability of being normal weight (p = .002) and 8.9% (p < .001) lower probability of being obese compared to children who had never been breastfed. Children who had been breastfed for less than 3 months had 3.1% lower probability of being normal weight (p = .013) and 4.7% higher probability of being obese (p = .013) with respect to children who had been breastfed for 3 months and above. Study findings suggest that length of breastfeeding, whether exclusive or not, may be associated with lower risk of obesity in early childhood. However, caution is needed in generalizing results because of the limitations of the analysis. Based on findings from this study and others, breastfeeding promotion policies can cite the potential protective effect that breastfeeding has on weight in early childhood. © The Author(s) 2016.
An historical perspective on variety in United States dining based on menus.
Meiselman, Herbert L
2017-11-01
While food variety continues to be of major interest to those studying eating and health, research has been mainly limited to laboratory research of simple meals. This paper seeks to enlarge the scope of eating research by examining the food offered in the earliest menus in United States restaurants and hotels of the early and mid-19th century, when restaurants began. This reveals a very large variety in what food was offered. The paper discusses why variety has declined in the US and probably elsewhere, including changes in the customer, changes in food service, changes of food availability, and the industrialization of the food supply. Menu analysis offers another approach to studying dietary variety across cultures and across time. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mean-Potential Law in Evolutionary Games.
Nałęcz-Jawecki, Paweł; Miękisz, Jacek
2018-01-12
The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
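Fixation probabilities in a discrete stochastic system with two absorbing states have a standard closed form for birth-death chains, which the sketch below implements. This is the textbook formula, not the Letter's mean-potential construction, and the parameters are illustrative.

```python
def fixation_probability(n_pop, gamma_ratios=None):
    """Fixation probability of a single mutant in a birth-death chain with two
    absorbing states (0 and n_pop): rho = 1 / (1 + sum_k prod_{j<=k} gamma_j),
    where gamma_j = p_j(-) / p_j(+) for j = 1..n_pop-1.
    The neutral case has all ratios equal to 1."""
    if gamma_ratios is None:
        gamma_ratios = [1.0] * (n_pop - 1)
    total, prod = 1.0, 1.0
    for g in gamma_ratios:
        prod *= g
        total += prod
    return 1.0 / total

rho_neutral = fixation_probability(100)     # neutral drift gives 1/N
```

The neutral value 1/N is the usual baseline against which criteria like the 1/3 law compare selection.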
Grid Resolution Study over Operability Space for a Mach 1.7 Low Boom External Compression Inlet
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.
2014-01-01
This paper presents a statistical methodology whereby the probability limits associated with CFD grid resolution of inlet flow analysis can be determined, providing quantitative information on the distribution of that error over the specified operability range. The objective of this investigation is to quantify the effects of both random (precision) and systemic (biasing) errors associated with grid resolution in the analysis of the Lockheed Martin Company (LMCO) N+2 Low Boom external compression supersonic inlet. The study covers the entire operability space as defined previously by the High Speed Civil Transport (HSCT) High Speed Research (HSR) program goals. The probability limits, in terms of a 95.0% confidence interval on the analysis data, were evaluated for four ARP1420 inlet metrics, namely (1) total pressure recovery (PFAIP), (2) radial hub distortion (DPH/P), (3) radial tip distortion (DPT/P), and (4) circumferential distortion (DPC/P). In general, the resulting +/-0.95 delta Y interval was unacceptably large in comparison to the stated goals of the HSCT program. Therefore, the conclusion was reached that the "standard grid" size was insufficient for this type of analysis. However, in examining the statistical data, it was determined that the CFD analysis results at the outer fringes of the operability space were the determining factor in the measure of statistical uncertainty. Adequate grids are grids that are free of biasing (systemic) errors and exhibit low random (precision) errors in comparison to their operability goals. In order to be 100% certain that the operability goals have indeed been achieved for each of the inlet metrics, the Y +/- 0.95 delta Y limit must fall inside the stated operability goals. For example, if the operability goal for DPC/P circumferential distortion is <= 0.06, then the forecast Y for DPC/P plus the 95% confidence interval on DPC/P, i.e. +/- 0.95 delta Y, must be less than or equal to 0.06.
Parallel Low-Loss Measurement of Multiple Atomic Qubits
NASA Astrophysics Data System (ADS)
Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.
2017-11-01
We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2% and the initial hyperfine state is preserved with >98% probability.
A tool for simulating collision probabilities of animals with marine renewable energy devices.
Schmitt, Pál; Culloch, Ross; Lieber, Lilian; Molander, Sverker; Hammar, Linus; Kregting, Louise
2017-01-01
The mathematical problem of establishing a collision probability distribution is often not trivial. The shape and motion of the animal as well as that of the device must be evaluated in a four-dimensional space (3D motion over time). Earlier work on wind and tidal turbines was limited to a simplified two-dimensional representation, which cannot be applied to many new structures. We present a numerical algorithm to obtain such probability distributions using transient, three-dimensional numerical simulations. The method is demonstrated using a sub-surface tidal kite as an example. Necessary pre- and post-processing of the data created by the model is explained, numerical details and potential issues and limitations in the application of resulting probability distributions are highlighted.
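As a toy stand-in for such transient 3D simulations, the sketch below uses Monte Carlo sampling of straight vertical transits of an "animal" through a unit cube containing a spherical "device". The geometry is deliberately simplified (far simpler than a moving tidal kite) so the estimate can be checked against the analytic value πr².

```python
import math
import random

def collision_probability(radius=0.3, n_paths=20000, seed=3):
    """Monte Carlo estimate of the probability that a straight vertical transit
    through a unit cube intersects a spherical 'device' at the centre."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x, y = rng.random(), rng.random()   # entry point on the bottom face
        # A vertical line hits the sphere iff its horizontal distance
        # from the centre is less than the sphere radius.
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 < radius ** 2:
            hits += 1
    return hits / n_paths

est = collision_probability()    # analytic value: pi * 0.3**2 ≈ 0.283
```

Real tools replace the straight-line paths with sampled animal trajectories and the sphere with the time-resolved swept volume of the device, but the counting structure is the same.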
Implications of Cognitive Load for Hypothesis Generation and Probability Judgment
Sprenger, Amber M.; Dougherty, Michael R.; Atkins, Sharona M.; Franco-Watkins, Ana M.; Thomas, Rick P.; Lange, Nicholas; Abbs, Brandon
2011-01-01
We tested the predictions of HyGene (Thomas et al., 2008) that both divided attention at encoding and judgment should affect the degree to which participants’ probability judgments violate the principle of additivity. In two experiments, we showed that divided attention during judgment leads to an increase in subadditivity, suggesting that the comparison process for probability judgments is capacity limited. Contrary to the predictions of HyGene, a third experiment revealed that divided attention during encoding leads to an increase in later probability judgment made under full attention. The effect of divided attention during encoding on judgment was completely mediated by the number of hypotheses participants generated, indicating that limitations in both encoding and recall can cascade into biases in judgments. PMID:21734897
Probabilistic finite elements for fatigue and fracture analysis
NASA Astrophysics Data System (ADS)
Belytschko, Ted; Liu, Wing Kam
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and its application to linear, nonlinear structural mechanics problems and fracture mechanics problems. The computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of a curvilinear fatigue crack growth. The existing PFEMs have been applied to solve for two types of problems: (1) determination of the response uncertainty in terms of the means, variance and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Complex Relationships Between Food, Diet, and the Microbiome.
Pace, Laura A; Crowe, Sheila E
2016-06-01
Diet is a risk factor in several medically important disease states, including obesity, celiac disease, and functional gastrointestinal disorders. Modification of diet can prevent, treat, or alleviate some of the symptoms associated with these diseases and improve general health. It is important to provide patients with simple dietary recommendations to increase the probability of successful implementation. These recommendations include increasing vegetable, fruit, and fiber intake, consuming lean protein sources to enhance satiety, avoiding or severely limiting highly processed foods, and reducing portion sizes for overweight and obese patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Probabilistic finite elements for fatigue and fracture analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Liu, Wing Kam
1992-01-01
Attention is focused on the development of the Probabilistic Finite Element Method (PFEM), which combines the finite element method with statistics and reliability methods, and its application to linear, nonlinear structural mechanics problems and fracture mechanics problems. The computational tool based on the Stochastic Boundary Element Method is also given for the reliability analysis of a curvilinear fatigue crack growth. The existing PFEMs have been applied to solve for two types of problems: (1) determination of the response uncertainty in terms of the means, variance and correlation coefficients; and (2) determination of the probability of failure associated with prescribed limit states.
Of Detection Limits and Effective Mitigation: The Use of Infrared Cameras for Methane Leak Detection
NASA Astrophysics Data System (ADS)
Ravikumar, A. P.; Wang, J.; McGuire, M.; Bell, C.; Brandt, A. R.
2017-12-01
Mitigating methane emissions, a short-lived and potent greenhouse gas, is critical to limiting global temperature rise to two degrees Celsius as outlined in the Paris Agreement. A major source of anthropogenic methane emissions in the United States is the oil and gas sector. To this end, state and federal governments have recommended the use of optical gas imaging systems in periodic leak detection and repair (LDAR) surveys to detect fugitive emissions or leaks. The most commonly used optical gas imaging (OGI) systems are infrared cameras. In this work, we systematically evaluate the limits of infrared (IR) camera based OGI systems for use in methane leak detection programs. We analyze the effect of various parameters that influence the minimum detectable leak rates of infrared cameras. Blind leak detection tests were carried out at the Department of Energy's MONITOR natural gas test-facility in Fort Collins, CO. Leak sources included natural gas wellheads, separators, and tanks. With an EPA mandated 60 g/hr leak detection threshold for IR cameras, we test leak rates ranging from 4 g/hr to over 350 g/hr at imaging distances between 5 ft and 70 ft from the leak source. We perform these experiments over the course of a week, encompassing a wide range of wind and weather conditions. Using repeated measurements at a given leak rate and imaging distance, we generate detection probability curves as a function of leak size for various imaging distances and measurement conditions. In addition, we estimate the median detection threshold, the leak size at which the probability of detection is 50%, under various scenarios to reduce uncertainty in mitigation effectiveness. Preliminary analysis shows that the median detection threshold varies from 3 g/hr at an imaging distance of 5 ft to over 150 g/hr at 50 ft (ambient temperature: 80 F, winds < 4 m/s).
Results from this study can be directly used to improve OGI based LDAR protocols and reduce uncertainty in estimated mitigation effectiveness. Furthermore, detection limits determined in this study can be used as standards to compare new detection technologies.
Probabilistically Perfect Cloning of Two Pure States: Geometric Approach.
Yerokhin, V; Shehu, A; Feldman, E; Bagan, E; Bergou, J A
2016-05-20
We solve the long-standing problem of making n perfect clones from m copies of one of two known pure states with minimum failure probability in the general case where the known states have arbitrary a priori probabilities. The solution emerges from a geometric formulation of the problem. This formulation reveals that cloning converges to state discrimination followed by state preparation as the number of clones goes to infinity. The convergence exhibits a phenomenon analogous to a second-order symmetry-breaking phase transition.
Preformation probability inside α emitters around the shell closures Z = 50 and N = 82
NASA Astrophysics Data System (ADS)
Seif, W. M.; Ismail, M.; Zeini, E. T.
2017-05-01
The preformation of an α-particle as a distinct entity inside the α-emitter is the first move towards α-decay. We investigate the α-particle preformation probability (Sα) in ordinary and exotic α-decays. We consider favored and unfavored decays in which the α-emitters and the produced daughter nuclides are in their ground or isomeric states. The study of 244 α-decay modes with 52 ≤ Z ≤ 81 and 53 ≤ N ≤ 112 is accomplished using the preformed cluster model. The preformation probabilities were estimated from the experimental half-lives and the computed decay widths based on the Wentzel-Kramers-Brillouin tunneling penetrability and knocking frequency, and the Skyrme-SLy4 interaction potential. We found that the favored α-decay mode from a ground state to an isomeric state shows a larger α-preformation probability than the favored and unfavored decays of the same isotope from isomeric to ground states. The favored decay mode from an isomeric to a ground state exhibits rather less Sα relative to the other decay modes from the same nuclide. The favored decay modes between two isomeric states tend to yield larger Sα and smaller partial half-lives compared with the favored and unfavored decays from the same nuclides between two ground states. For the decays involving two ground states, the preformation probability is larger for the favored decay modes than for the unfavored ones. The unfavored α-decay modes from ground to isomeric states are rare. The unfavored decay modes from isomeric to ground states show less Sα than the favored decays from the ground states of the same emitters. The unfavored α-decay modes between two isomeric states exhibit larger Sα than the other α-decay modes from the same isomers.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-01-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing. PMID:8913581
ERIC Educational Resources Information Center
Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy
2006-01-01
We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…
NASA Astrophysics Data System (ADS)
Fuchs, Christopher A.; Schack, Rüdiger
2013-10-01
In the quantum-Bayesian interpretation of quantum theory (or QBism), the Born rule cannot be interpreted as a rule for setting measurement-outcome probabilities from an objective quantum state. But if not, what is the role of the rule? In this paper, the argument is given that it should be seen as an empirical addition to Bayesian reasoning itself. Particularly, it is shown how to view the Born rule as a normative rule in addition to usual Dutch-book coherence. It is a rule that takes into account how one should assign probabilities to the consequences of various intended measurements on a physical system, but explicitly in terms of prior probabilities for and conditional probabilities consequent upon the imagined outcomes of a special counterfactual reference measurement. This interpretation is exemplified by representing quantum states in terms of probabilities for the outcomes of a fixed, fiducial symmetric informationally complete measurement. The extent to which the general form of the new normative rule implies the full state-space structure of quantum mechanics is explored.
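The reformulated rule can be checked numerically for a qubit. The sketch below assumes the standard tetrahedral qubit SIC and verifies that the Born probability of a projective outcome agrees with the SIC-based expression q(j) = Σ_k [(d+1)p(k) − 1/d] r(j|k) for d = 2, where r(j|k) = Tr(Π_k D_j); the matrix helpers and the particular state are ad hoc.

```python
import math

# 2x2 complex matrices as nested tuples; enough for a one-qubit check
I2 = ((1, 0), (0, 1))
SX = ((0, 1), (1, 0))
SY = ((0, -1j), (1j, 0))
SZ = ((1, 0), (0, -1))

def lin(*terms):
    """Linear combination of 2x2 matrices: lin((c1, M1), (c2, M2), ...)."""
    return tuple(tuple(sum(c * m[r][k] for c, m in terms) for k in range(2))
                 for r in range(2))

def tr_prod(a, b):
    """Tr(a b) for 2x2 matrices."""
    return sum(a[r][k] * b[k][r] for r in range(2) for k in range(2))

def bloch_proj(n):
    """Rank-1 projector (I + n.sigma)/2 for a unit Bloch vector n."""
    nx, ny, nz = n
    return lin((0.5, I2), (0.5 * nx, SX), (0.5 * ny, SY), (0.5 * nz, SZ))

# tetrahedral Bloch vectors -> the qubit SIC projectors Pi_k (POVM = Pi_k/2)
s = math.sqrt
tetra = [(0, 0, 1), (2 * s(2) / 3, 0, -1 / 3),
         (-s(2) / 3, s(2 / 3), -1 / 3), (-s(2) / 3, -s(2 / 3), -1 / 3)]
pis = [bloch_proj(n) for n in tetra]

rho = lin((0.5, I2), (0.15, SX), (0.2, SY), (0.25, SZ))  # Bloch (0.3, 0.4, 0.5)
D0 = ((1, 0), (0, 0))                                    # projector |0><0|

p = [tr_prod(rho, pi).real / 2 for pi in pis]   # SIC outcome probabilities
q_direct = tr_prod(rho, D0).real                # Born-rule probability
q_urgl = sum((3 * p[k] - 0.5) * tr_prod(pis[k], D0).real for k in range(4))
```

For this state `q_direct` and `q_urgl` coincide, as they must for any qubit state and any projective outcome.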
Gonzalez-Gutierrez, Giovanni; Lukk, Tiit; Agarwal, Vinayak; Papke, David; Nair, Satish K.; Grosman, Claudio
2012-01-01
The determination of structural models of the various stable states of an ion channel is a key step toward the characterization of its conformational dynamics. In the case of nicotinic-type receptors, different structures have been solved but, thus far, these different models have been obtained from different members of the superfamily. In the case of the bacterial member ELIC, a cysteamine-gated channel from Erwinia chrysanthemi, a structural model of the protein in the absence of activating ligand (and thus, conceivably corresponding to the closed state of this channel) has been previously generated. In this article, electrophysiological characterization of ELIC mutants allowed us to identify pore mutations that slow down the time course of desensitization to the extent that the channel seems not to desensitize at all for the duration of the agonist applications (>20 min). Thus, it seems reasonable to conclude that the probability of ELIC occupying the closed state is much lower for the ligand-bound mutants than for the unliganded wild-type channel. To gain insight into the conformation adopted by ELIC under these conditions, we solved the crystal structures of two of these mutants in the presence of a concentration of cysteamine that elicits an intracluster open probability of >0.9. Curiously, the obtained structural models turned out to be nearly indistinguishable from the model of the wild-type channel in the absence of bound agonist. Overall, our findings bring to light the limited power of functional studies in intact membranes when it comes to inferring the functional state of a channel in a crystal, at least in the case of the nicotinic-receptor superfamily. PMID:22474383
Surveillance for Q Fever Endocarditis in the United States, 1999-2015.
Straily, Anne; Dahlgren, F Scott; Peterson, Amy; Paddock, Christopher D
2017-11-13
Q fever is a worldwide zoonosis caused by Coxiella burnetii. In some persons, particularly those with cardiac valve disease, infection with C. burnetii can cause a life-threatening infective endocarditis. There are few descriptive analyses of Q fever endocarditis in the United States. Q fever case report forms submitted during 1999-2015 were reviewed to identify reports describing endocarditis. Cases were categorized as confirmed or probable using criteria defined by the Council for State and Territorial Epidemiologists (CSTE). Demographic, laboratory, and clinical data were analyzed. Of 140 case report forms reporting endocarditis, 49 met the confirmed definition and 36 met the probable definition. Eighty-two percent were male and the median age was 57 years (range, 16-87 years). Sixty-seven patients (78.8%) were hospitalized, and 5 deaths (5.9%) were reported. Forty-five patients (52.9%) had a preexisting valvulopathy. Eight patients with endocarditis had phase I immunoglobulin G antibody titers >800 but did not meet the CSTE case definition for Q fever endocarditis. These data summarize a limited set of clinical and epidemiological features of Q fever endocarditis collected through passive surveillance in the United States. Some cases of apparent Q fever endocarditis could not be classified by CSTE laboratory criteria, suggesting that comparison of phase I and phase II titers could be reexamined as a surveillance criterion. Prospective analyses of culture-negative endocarditis are needed to better assess the clinical spectrum and magnitude of Q fever endocarditis in the United States.
Bayesian inference of nonlinear unsteady aerodynamics from aeroelastic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Sandhu, Rimple; Poirel, Dominique; Pettit, Chris; Khalil, Mohammad; Sarkar, Abhijit
2016-07-01
A Bayesian model selection and parameter estimation algorithm is applied to investigate the influence of nonlinear and unsteady aerodynamic loads on the limit cycle oscillation (LCO) of a pitching airfoil in the transitional Reynolds number regime. At small angles of attack, laminar boundary layer trailing edge separation causes negative aerodynamic damping leading to the LCO. The fluid-structure interaction of the rigid, but elastically mounted, airfoil and nonlinear unsteady aerodynamics is represented by two coupled nonlinear stochastic ordinary differential equations containing uncertain parameters and model approximation errors. Several plausible aerodynamic models with increasing complexity are proposed to describe the aeroelastic system leading to LCO. The likelihood in the posterior parameter probability density function (pdf) is available semi-analytically using the extended Kalman filter for the state estimation of the coupled nonlinear structural and unsteady aerodynamic model. The posterior parameter pdf is sampled using a parallel and adaptive Markov Chain Monte Carlo (MCMC) algorithm. The posterior probability of each model is estimated using the Chib-Jeliazkov method that directly uses the posterior MCMC samples for evidence (marginal likelihood) computation. The Bayesian algorithm is validated through a numerical study and then applied to model the nonlinear unsteady aerodynamic loads using wind-tunnel test data at various Reynolds numbers.
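The posterior-sampling step described above can be illustrated with a bare-bones random-walk Metropolis sampler on a toy one-parameter Gaussian model; this is a hypothetical stand-in for the parallel adaptive MCMC and the Kalman-filter likelihood of the aeroelastic problem, not the authors' algorithm.

```python
import math
import random

def metropolis(logpost, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis sampler over a scalar parameter:
    propose x' = x + N(0, step), accept with probability
    min(1, exp(logpost(x') - logpost(x)))."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# toy posterior: Gaussian likelihood (unit noise) for an unknown mean, flat prior
data = [1.8, 2.2, 1.9, 2.1, 2.0]
logpost = lambda m: -0.5 * sum((d - m) ** 2 for d in data)

samples = metropolis(logpost, x0=0.0, n_samples=20000)
burned = samples[5000:]                      # discard burn-in
posterior_mean = sum(burned) / len(burned)   # should be near the data mean, 2.0
```

Model evidence (as in the Chib-Jeliazkov step) would be computed on top of such chains; that step is omitted here.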
Who cares? A comparison of informal and formal care provision in Spain, England and the USA
SOLÉ-AURÓ, AÏDA; CRIMMINS, EILEEN M.
2013-01-01
This paper investigates the prevalence of incapacity in performing daily activities and the associations between household composition and availability of family members and receipt of care among older adults with functioning problems in Spain, England and the United States of America (USA). We examine how living arrangements, marital status, child availability, limitations in functioning ability, age and gender affect the probability of receiving formal care and informal care from household members and from others in three countries with different family structures, living arrangements and policies supporting care of the incapacitated. Data sources include the 2006 Survey of Health, Ageing and Retirement in Europe for Spain, the third wave of the English Longitudinal Study of Ageing (2006), and the eighth wave of the USA Health and Retirement Study (2006). Logistic and multinomial logistic regressions are used to estimate the probability of receiving care and the sources of care among persons age 50 and older. The percentage of people with functional limitations receiving care is higher in Spain. More care comes from outside the household in the USA and England than in Spain. The use of formal care among the incapacitated is lowest in the USA and highest in Spain. PMID:24550574
Search for b→u transitions in B ±→[K ∓π ±π⁰] DK ± decays
Lees, J. P.; Poireau, V.; Tisserand, V.; ...
2011-07-06
We present a study of the decays B±→DK± with D mesons reconstructed in the K⁺π⁻π⁰ or K⁻π⁺π⁰ final states, where D indicates a D⁰ or a D̄⁰ meson. Using a sample of 474×10⁶ BB̄ pairs collected with the BABAR detector at the PEP-II asymmetric-energy e⁺e⁻ collider at SLAC, we measure the ratios R± ≡ Γ(B±→[K∓π±π⁰]DK±)/Γ(B±→[K±π∓π⁰]DK±). We obtain R⁺=(5 ⁺12 ⁻10(stat) ⁺2 ⁻4(syst))×10⁻³ and R⁻=(12 ⁺12 ⁻10(stat) ⁺3 ⁻5(syst))×10⁻³, from which we extract the upper limits at 90% probability: R⁺<23×10⁻³ and R⁻<29×10⁻³. Using these measurements, we obtain an upper limit for the ratio r_B of the magnitudes of the b→u and b→c amplitudes: r_B<0.13 at 90% probability.
Wen, Hefei; Hockenberry, Jason M; Druss, Benjamin G
2018-05-16
Marijuana liberalization policies are gaining momentum in the USA, coupled with limited federal interference and a growing dispensary industry. This evolving regulatory landscape underscores the importance of understanding the attitudinal/perceptual pathways from marijuana policy to marijuana use behavior, especially for adolescents and young adults. Our study uses the restricted-access National Survey on Drug Use and Health (NSDUH) 2004-2012 data and a difference-in-differences design to compare the pre-policy and post-policy changes in marijuana-related attitudes/perceptions between adolescents and young adults from the ten states that implemented medical marijuana laws during the study period and those from the remaining states. We examined four attitudinal/perceptual pathways that may play a role in adolescent and young adult marijuana use behavior: (1) perceived availability of marijuana, (2) perceived acceptance of marijuana use, (3) perceived wrongfulness of recreational marijuana use, and (4) perceived harmfulness of marijuana use. We found that state implementation of medical marijuana laws between 2004 and 2012 was associated with a 4.72-percentage-point increase (95% CI 0.15, 9.28) in the probability that young adults perceived no/low health risk related to marijuana use. Medical marijuana law implementation is also associated with a 0.37-percentage-point decrease (95% CI - 0.72, - 0.03) in the probability that adolescents perceived parental acceptance of marijuana use. As more states permit medical marijuana use, marijuana-related attitudes/perceptions need to be closely monitored, especially perceived harmfulness. The physical and psychological effects of marijuana use should be carefully investigated and clearly conveyed to the public.
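In its simplest two-period form, the difference-in-differences estimator used here reduces to a double subtraction of group means. The sketch below uses invented illustrative numbers, not NSDUH data, and omits the covariate adjustment and state/year fixed effects of the actual analysis.

```python
def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Two-period difference-in-differences: the change in the treated
    states' mean outcome minus the change in the control states' mean
    outcome, which nets out common time trends."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(post_treat) - mean(pre_treat))
            - (mean(post_ctrl) - mean(pre_ctrl)))

# hypothetical shares perceiving no/low risk, before/after a law change
effect = did_estimate(pre_treat=[0.30, 0.32], post_treat=[0.38, 0.40],
                      pre_ctrl=[0.31, 0.29], post_ctrl=[0.33, 0.31])
# treated change 0.08, control change 0.02, so the estimate is 0.06
```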
Capture-recapture analysis for estimating manatee reproductive rates
Kendall, W.L.; Langtimm, C.A.; Beck, C.A.; Runge, M.C.
2004-01-01
Modeling the life history of the endangered Florida manatee (Trichechus manatus latirostris) is an important step toward understanding its population dynamics and predicting its response to management actions. We developed a multi-state mark-resighting model for data collected under Pollock's robust design. This model estimates breeding probability conditional on a female's breeding state in the previous year; assumes sighting probability depends on breeding state; and corrects for misclassification of a cow with first-year calf, by estimating conditional sighting probability for the calf. The model is also appropriate for estimating survival and unconditional breeding probabilities when the study area is closed to temporary emigration across years. We applied this model to photo-identification data for the Northwest and Atlantic Coast populations of manatees, for years 1982–2000. With rare exceptions, manatees do not reproduce in two consecutive years. For those without a first-year calf in the previous year, the best-fitting model included constant probabilities of producing a calf for the Northwest (0.43, SE = 0.057) and Atlantic (0.38, SE = 0.045) populations. The approach we present to adjust for misclassification of breeding state could be applicable to a large number of marine mammal populations.
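The conditional breeding probability has a simple demographic consequence that a toy simulation can illustrate: if a female essentially never breeds in consecutive years and otherwise breeds with probability γ, her long-run fraction of breeding years is γ/(1+γ). This sketch only checks that arithmetic; it is not the mark-resighting estimator, and it ignores detection and misclassification entirely.

```python
import random

def breeding_fraction(gamma, years, seed=1):
    """Two-state chain for one female: a cow that bred last year skips
    this year; otherwise she breeds with probability gamma.  Returns the
    simulated long-run fraction of years in which she breeds."""
    rng = random.Random(seed)
    bred_last, count = False, 0
    for _ in range(years):
        breeds = (not bred_last) and rng.random() < gamma
        count += breeds
        bred_last = breeds
    return count / years

# with the Northwest point estimate gamma = 0.43, the stationary breeding
# fraction is 0.43 / 1.43, roughly 0.30 of years
rate = breeding_fraction(gamma=0.43, years=200000)
```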
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated failure probabilities of jack-up units in the framework of time-dependent reliability analysis, using uncertainty from different sea states representing different return periods of the design wave. Surface elevation for each sea state was represented by the Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated through a simplified jack-up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability for excessive deck displacement in the framework of time-dependent reliability analysis was performed with Matlab codes developed on a personal computer. Results from the study indicate that the failure probability increases with the severity of the sea state representing a longer return period. Although these results agree with those of a study of a similar jack-up model using a time-independent method at higher values of maximum allowable deck displacement, they contrast at lower values of the criterion, where that study reported that failure probability decreases with increasing severity of the sea state.
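The time-domain failure probability can be caricatured by crude Monte Carlo on a stand-in response process. The per-step Gaussian response below is a hypothetical simplification of the Karhunen-Loeve wave-load model, meant only to show why a more severe sea state (larger response spread) raises the probability that the peak deck displacement exceeds the allowable value.

```python
import random

def failure_probability(sigma, threshold, n_sim=5000, n_steps=100, seed=2):
    """Crude time-domain Monte Carlo: probability that the peak of a
    zero-mean Gaussian response (std sigma per step) exceeds the
    allowable deck displacement within the storm duration."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sim):
        peak = max(abs(rng.gauss(0.0, sigma)) for _ in range(n_steps))
        failures += peak > threshold
    return failures / n_sim

p_mild = failure_probability(sigma=1.0, threshold=4.0)    # short return period
p_severe = failure_probability(sigma=1.5, threshold=4.0)  # longer return period
```

With everything else fixed, `p_severe` exceeds `p_mild`, mirroring the trend reported for longer return periods.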
NASA Astrophysics Data System (ADS)
Herzog, Ulrike; Bergou, János A.
2006-04-01
Based on our previous publication [U. Herzog and J. A. Bergou, Phys. Rev. A 71, 050301(R)(2005)] we investigate the optimum measurement for the unambiguous discrimination of two mixed quantum states that occur with given prior probabilities. Unambiguous discrimination of nonorthogonal states is possible in a probabilistic way, at the expense of a nonzero probability of inconclusive results, where the measurement fails. Along with a discussion of the general problem, we give an example illustrating our method of solution. We also provide general inequalities for the minimum achievable failure probability and discuss in more detail the necessary conditions that must be fulfilled when its absolute lower bound, proportional to the fidelity of the states, can be reached.
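For pure states the fidelity-proportional lower bound mentioned above takes a simple closed form: the failure probability obeys Q >= 2*sqrt(eta1*eta2)*|<psi1|psi2>|, which for equal priors reduces to the overlap itself. A small numeric check with illustrative states (the angle t is arbitrary):

```python
import math

def failure_bound(eta1, eta2, overlap):
    """Absolute lower bound on the failure probability of unambiguous
    discrimination of two pure states with priors eta1, eta2; for pure
    states the fidelity is just the overlap |<psi1|psi2>|.  The bound is
    attainable only when the priors are not too asymmetric."""
    return 2 * math.sqrt(eta1 * eta2) * overlap

# two real qubit states: |psi1> = (1, 0), |psi2> = (cos t, sin t)
t = math.pi / 6
overlap = abs(math.cos(t))

q_equal = failure_bound(0.5, 0.5, overlap)   # equal priors: bound = overlap
q_skew = failure_bound(0.8, 0.2, overlap)    # skewed priors give a lower bound
```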
NASA Astrophysics Data System (ADS)
Wei, Jiahua; Shi, Lei; Luo, Junwen; Zhu, Yu; Kang, Qiaoyan; Yu, Longqiang; Wu, Hao; Jiang, Jun; Zhao, Boxin
2018-06-01
In this paper, we present an efficient scheme for remote state preparation of arbitrary n-qubit states with real coefficients. The quantum channel is composed of n maximally entangled two-qubit states, and several appropriate mutually orthogonal bases including the real parameters of the prepared states are delicately constructed without the introduction of auxiliary particles. It is noted that the success probability is 100% using our proposal under the condition that the parameters of the prepared states are all real. Compared to general states, the probability of our protocol is improved at the cost of the information reduction in the transmitted state.
Making great leaps forward: Accounting for detectability in herpetological field studies
Mazerolle, Marc J.; Bailey, Larissa L.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Nichols, James D.
2007-01-01
Detecting individuals of amphibian and reptile species can be a daunting task. Detection can be hindered by various factors such as cryptic behavior, color patterns, or observer experience. These factors complicate the estimation of state variables of interest (e.g., abundance, occupancy, species richness) as well as the vital rates that induce changes in these state variables (e.g., survival probabilities for abundance; extinction probabilities for occupancy). Although ad hoc methods (e.g., counts uncorrected for detection, return rates) typically perform poorly in the face of nondetection, they continue to be used extensively in various fields, including herpetology. However, formal approaches that estimate and account for the probability of detection, such as capture-mark-recapture (CMR) methods and distance sampling, are available. In this paper, we present classical approaches and recent advances in methods accounting for detectability that are particularly pertinent for herpetological data sets. Through examples, we illustrate the use of several methods, discuss their performance compared to that of ad hoc methods, and suggest available software to perform these analyses. The methods we discuss control for imperfect detection and reduce bias in estimates of demographic parameters such as population size, survival, or, at other levels of biological organization, species occurrence. Among these methods, recently developed approaches that no longer require marked or resighted individuals should be of particular interest to field herpetologists. We hope that our effort will encourage practitioners to implement some of the estimation methods presented herein instead of relying on ad hoc methods that make more limiting assumptions.
H theorem for generalized entropic forms within a master-equation framework
NASA Astrophysics Data System (ADS)
Casas, Gabriela A.; Nobre, Fernando D.; Curado, Evaldo M. F.
2016-03-01
The H theorem is proven for generalized entropic forms, in the case of a discrete set of states. The associated probability distributions evolve in time according to a master equation, for which the corresponding transition rates depend on these entropic forms. An important equation describing the time evolution of the transition rates and probabilities in such a way as to drive the system towards an equilibrium state is found. In the particular case of Boltzmann-Gibbs entropy, it is shown that this equation is satisfied in the microcanonical ensemble only for symmetric probability transition rates, characterizing a single path to the equilibrium state. This equation completes the proof of the H theorem for generalized entropic forms, associated with systems characterized by complex dynamics, e.g., presenting nonsymmetric probability transition rates and more than one path towards the same equilibrium state. Some examples considering generalized entropies from the literature are discussed, showing that they should be applicable to a wide range of natural phenomena, mainly those within the realm of complex systems.
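For the Boltzmann-Gibbs case with symmetric rates, the statement can be checked directly: integrate the master equation dp_i/dt = Σ_j (w_ji p_j − w_ij p_i) and watch H = Σ_i p_i ln p_i decrease toward the uniform (microcanonical) equilibrium. The three-state rate matrix and initial distribution below are arbitrary illustrative values.

```python
import math

def evolve(p, w, dt, steps):
    """Forward-Euler integration of the master equation
    dp_i/dt = sum_j (w[j][i] p_j - w[i][j] p_i)."""
    n = len(p)
    for _ in range(steps):
        flux = [sum(w[j][i] * p[j] - w[i][j] * p[i] for j in range(n))
                for i in range(n)]
        p = [p[i] + dt * flux[i] for i in range(n)]
    return p

def H(p):
    """Boltzmann-Gibbs H functional, sum_i p_i ln p_i (non-increasing)."""
    return sum(pi * math.log(pi) for pi in p if pi > 0)

# symmetric transition rates w[i][j] = w[j][i] -> uniform equilibrium
w = [[0, 1.0, 0.5],
     [1.0, 0, 0.3],
     [0.5, 0.3, 0]]
p0 = [0.9, 0.08, 0.02]
p1 = evolve(p0, w, dt=0.01, steps=50)    # early time
p2 = evolve(p1, w, dt=0.01, steps=500)   # near equilibrium
```

H decreases monotonically along this trajectory, and `p2` is already close to the uniform distribution (1/3, 1/3, 1/3).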
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
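The Monte Carlo technique described above can be sketched in a few lines. The 440-launch mission and the 20-consecutive-launch retirement rule follow the abstract; the 5% attrition rate per launch is an illustrative value, not one taken from the study.

```python
import random

def boosters_needed(launches, attrition, max_uses=20, seed=3):
    """One Monte Carlo mission: each launch loses the current booster
    with probability `attrition`; a surviving booster is retired after
    max_uses consecutive launches.  Returns the number of boosters built."""
    rng = random.Random(seed)
    boosters, uses = 1, 0
    for _ in range(launches):
        uses += 1
        if rng.random() < attrition or uses >= max_uses:
            boosters += 1   # booster lost or retired; bring in a new one
            uses = 0
    return boosters

# average over repeated missions; with 5% attrition a booster lasts about
# (1 - 0.95**20) / 0.05, roughly 12.8 launches, so 440 launches need ~35
runs = [boosters_needed(440, attrition=0.05, seed=s) for s in range(200)]
expected = sum(runs) / len(runs)
```

The state-probability method mentioned in the abstract would instead track the exact distribution over (uses-so-far) states, trading formulation effort for accuracy and speed.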
Two statistical mechanics aspects of complex networks
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Biely, Christoly
2006-12-01
By adopting an ensemble interpretation of non-growing rewiring networks, network theory can be reduced to a counting problem of possible network states and an identification of their associated probabilities. We present two scenarios of how different rewirement schemes can be used to control the state probabilities of the system. In particular, we review how, by generalizing the linking rules of random graphs in combination with superstatistics and quantum mechanical concepts, one can establish an exact relation between the degree distribution of any given network and the nodes’ linking probability distributions. In a second approach, we control state probabilities by a network Hamiltonian, whose characteristics are motivated by biological and socio-economical statistical systems. We demonstrate that a thermodynamics of networks becomes a fully consistent concept, allowing one to study e.g. ‘phase transitions’ and to compute entropies through thermodynamic relations.
Tsirelson's bound and supersymmetric entangled states
Borsten, L.; Brádler, K.; Duff, M. J.
2014-01-01
A superqubit, belonging to a (2|1)-dimensional super-Hilbert space, constitutes the minimal supersymmetric extension of the conventional qubit. In order to see whether superqubits are more non-local than ordinary qubits, we construct a class of two-superqubit entangled states as a non-local resource in the CHSH game. Since super Hilbert space amplitudes are Grassmann numbers, the result depends on how we extract real probabilities and we examine three choices of map: (1) DeWitt (2) Trigonometric and (3) Modified Rogers. In cases (1) and (2), the winning probability reaches the Tsirelson bound pwin=cos2π/8≃0.8536 of standard quantum mechanics. Case (3) crosses Tsirelson's bound with pwin≃0.9265. Although all states used in the game involve probabilities lying between 0 and 1, case (3) permits other changes of basis inducing negative transition probabilities. PMID:25294964
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 yields H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented in this study. The cumulative reaction probability (CRP) is obtained by summing over the initial-state-selected reaction probabilities. The energy-shift approximation, which accounts for the contribution of degrees of freedom missing in the 6D calculation, is employed to obtain an approximate full-dimensional CRP. The thermal rate constant is compared with various experimental results.
NASA Astrophysics Data System (ADS)
Khandkar, Mahendra D.; Stinchcombe, Robin; Barma, Mustansir
2017-01-01
We demonstrate the large-scale effects of the interplay between shape and hard-core interactions in a system with left- and right-pointing arrowheads <> on a line, with reorientation dynamics. This interplay leads to the formation of two types of domain walls, >< (A) and <> (B). The correlation length in the equilibrium state diverges exponentially with increasing arrowhead density, with an ordered state of like orientations arising in the limit. In this high-density limit, the A domain walls diffuse, while the B walls are static. In time, the approach to the ordered state is described by a coarsening process governed by the kinetics of domain-wall annihilation A + B → 0, quite different from the A + A → 0 kinetics pertinent to the Glauber-Ising model. The survival probability of a finite set of walls is shown to decay exponentially with time, in contrast to the power-law decay known for A + A → 0. In the thermodynamic limit with a finite density of walls, coarsening as a function of time t is studied by simulation. While the number of walls falls as t^{-1/2}, the fraction of persistent arrowheads decays as t^{-θ} where θ is close to 1/4, quite different from the Ising value. The global persistence too has θ = 1/4, as follows from a heuristic argument. In a generalization where the B walls diffuse slowly, θ varies continuously, increasing with increasing diffusion constant.
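The A + B → 0 bookkeeping can be mimicked by a toy lattice simulation with diffusing A walls and static B walls. This sketch only illustrates the annihilation kinetics (each event removes one A and one B), not the full arrowhead reorientation dynamics; the initial spacing and all parameters are invented.

```python
import random

def surviving_walls(n_pairs, steps, seed=4):
    """Toy A + B -> 0 coarsening on a ring: A domain walls random-walk one
    site per step, B walls are static, and an A annihilates with a B the
    moment it hops onto one.  Returns (surviving A, surviving B)."""
    rng = random.Random(seed)
    length = 8 * n_pairs
    a = list(range(0, length, 8))     # mobile A walls, evenly spaced
    b = {x + 4 for x in a}            # a static B wall between neighbors
    for _ in range(steps):
        nxt = []
        for x in a:
            y = (x + rng.choice((-1, 1))) % length
            if y in b:
                b.discard(y)          # A + B -> 0 annihilation
            else:
                nxt.append(y)
        a = nxt
    return len(a), len(b)

a_left, b_left = surviving_walls(n_pairs=50, steps=2000)
```

Because every annihilation removes exactly one wall of each type, the surviving A and B counts stay equal, and the wall density falls steadily with time.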
Behavioral connectivity among bighorn sheep suggests potential for disease spread
Borg, Nathan J.; Mitchell, Michael S.; Lukacs, Paul M.; Mack, Curt M.; Waits, Lisette P.; Krausman, Paul R.
2017-01-01
Connectivity is important for population persistence and can reduce the potential for inbreeding depression. Connectivity between populations can also facilitate disease transmission; respiratory diseases are one of the most important factors affecting populations of bighorn sheep (Ovis canadensis). The mechanisms of connectivity in populations of bighorn sheep likely have implications for spread of disease, but the behaviors leading to connectivity between bighorn sheep groups are not well understood. From 2007–2012, we radio-collared and monitored 56 bighorn sheep in the Salmon River canyon in central Idaho. We used cluster analysis to define social groups of bighorn sheep and then estimated connectivity between these groups using a multi-state mark-recapture model. Social groups of bighorn sheep were spatially segregated and linearly distributed along the Salmon River canyon. Monthly probabilities of movement between adjacent male and female groups ranged from 0.08 (±0.004 SE) to 0.76 (±0.068) for males and 0.05 (±0.132) to 0.24 (±0.034) for females. Movements of males were extensive and probabilities of movement were considerably higher during the rut. Probabilities of movement for females were typically smaller than those of males and did not change seasonally. Whereas adjacent groups of bighorn sheep along the Salmon River canyon were well connected, connectivity between groups north and south of the Salmon River was limited. The novel application of a multi-state model to a population of bighorn sheep allowed us to estimate the probability of movement between adjacent social groups and approximate the level of connectivity across the population. Our results suggest high movement rates of males during the rut are the most likely to result in transmission of pathogens among both male and female groups. Potential for disease spread among female groups was smaller but non-trivial. 
Land managers can plan grazing of domestic sheep for spring and summer months when males are relatively inactive. Removal or quarantine of social groups may reduce probability of disease transmission in populations of bighorn sheep consisting of linearly distributed social groups.
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. 
We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
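The W-S effective degrees of freedom follow from a short formula. A minimal sketch (component uncertainties are hypothetical, and the Bayesian alternative is assumed here to amount to scaling each u_i by sqrt(nu_i/(nu_i-2)) before combining, which is one common reading of that proposal):

```python
import math

def ws_effective_dof(u, nu):
    """Welch-Satterthwaite effective degrees of freedom for the combined
    standard uncertainty u_c = sqrt(sum u_i^2) with component dof nu_i."""
    uc2 = sum(ui ** 2 for ui in u)
    return uc2 ** 2 / sum(ui ** 4 / ni for ui, ni in zip(u, nu))

# two hypothetical Type A uncertainty components
u = [1.0, 2.0]
nu = [5, 10]
uc = math.sqrt(sum(ui ** 2 for ui in u))   # combined standard uncertainty
nu_eff = ws_effective_dof(u, nu)           # 25 / 1.8, about 13.9

# assumed Bayesian variant: inflate each u_i (valid for nu_i > 2),
# then use a normal distribution -- no nu_eff needed at all
u_bayes = [ui * math.sqrt(ni / (ni - 2)) for ui, ni in zip(u, nu)]
uc_bayes = math.sqrt(sum(x ** 2 for x in u_bayes))
```

The simplification claimed in the abstract is visible here: the normal-based route never touches `ws_effective_dof`.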
Reactive Resonances in N+N2 Exchange Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Huo, Winifred M.; Dateo, Christopher E.; Schwenke, David W.; Stallcop, James R.
2003-01-01
Rich reactive resonances are found in a 3D quantum dynamics study of the N + N2 exchange reaction using a recently developed ab initio potential energy surface. This surface is characterized by a feature in the interaction region called "Lake Eyring," that is, two symmetric transition states with a shallow minimum between them. An L2 analysis of the quasibound states associated with the shallow minimum confirms that the quasibound states associated with oscillations in all three degrees of freedom in Lake Eyring are responsible for the reactive resonances in the state-to-state reaction probabilities. The quasibound states, mostly the bending motions, give rise to strong resonance peaks, whereas other motions contribute to the bumps and shoulders in the resonance structure. The initial-state reaction probability further shows that the bending motions are the dominant contributors to the reaction probability and have longer lifetimes than the stretching motions. This is the first observation of reactive resonances from a "Lake Eyring" feature in a potential energy surface.
NASA Astrophysics Data System (ADS)
Endo, Takako; Konno, Norio; Obuse, Hideaki; Segawa, Etsuo
2017-11-01
In this paper, we treat quantum walks in a two-dimensional lattice with cutting edges along a straight boundary introduced by Asboth and Edge (2015 Phys. Rev. A 91 022324) in order to study one-dimensional edge states originating from topological phases of matter and to obtain collateral evidence of how a quantum walker reacts to the boundary. Firstly, we connect this model to the CMV matrix, which provides a 5-term recursion relation of the Laurent polynomial associated with spectral measure on the unit circle. Secondly, we explicitly derive the spectra of bulk and edge states of the quantum walk with the boundary using spectral analysis of the CMV matrix. Thirdly, while topological numbers of the model studied so far are well-defined only when gaps in the bulk spectrum exist, we find a new topological number defined only when there are no gaps in the bulk spectrum. We confirm that the existence of the spectrum for edge states derived from the CMV matrix is consistent with the prediction from a bulk-edge correspondence using topological numbers calculated in the cases where gaps in the bulk spectrum do or do not exist. Finally, we show how the edge states contribute to the asymptotic behavior of the quantum walk through limit theorems of the finding probability. Conversely, we also propose a differential equation using this limit distribution whose solution is the underlying edge state.
Ultimate fate of constrained voters
NASA Astrophysics Data System (ADS)
Vazquez, F.; Redner, S.
2004-09-01
We examine the ultimate fate of individual opinions in a socially interacting population of leftists, centrists, and rightists. In an elemental interaction between agents, a centrist and a leftist can both become centrists or both become leftists with equal rates (and similarly for a centrist and a rightist). However, leftists and rightists do not interact. This interaction step between pairs of agents is applied repeatedly until the system can no longer evolve. In the mean-field limit, we determine the exact probability that the system reaches consensus (either leftist, rightist, or centrist) or a frozen mixture of leftists and rightists as a function of the initial composition of the population. We also determine the mean time until the final state is reached. Some implications of our results for the ultimate fate in a limit of the Axelrod model are discussed.
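A minimal mean-field simulation of the interaction rule above (invented population sizes; not from the paper) shows the system ending in consensus or a frozen leftist-rightist mixture:

```python
import random

def constrained_voter(n_left=30, n_center=40, n_right=30,
                      seed=2, max_steps=10 ** 6):
    """Mean-field constrained voter model: pick random pairs; a centrist
    paired with a leftist both become 'L' or both 'C' with equal
    probability (likewise for 'R'); L-R pairs never interact."""
    random.seed(seed)
    pop = ['L'] * n_left + ['C'] * n_center + ['R'] * n_right
    for _ in range(max_steps):
        kinds = set(pop)
        # absorbing states: consensus, or a frozen L/R mixture (no centrists)
        if len(kinds) == 1 or 'C' not in kinds:
            break
        i, j = random.sample(range(len(pop)), 2)
        a, b = pop[i], pop[j]
        if 'C' in (a, b) and a != b:      # only C-L and C-R pairs evolve
            pop[i] = pop[j] = random.choice((a, b))
    return pop

final = constrained_voter()
```

Running this over many seeds and initial compositions would trace out the consensus-versus-frozen-mixture probabilities computed exactly in the paper.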
Dixit, Sumita; Das, Mukul
2012-10-01
The health risks associated with trans-fat prompted the Food and Agriculture Organization (FAO) and World Health Organization (WHO) to prepare regulations or compulsory claims for trans-fatty acids (TFA) in edible oils and fats. In this study, analysis of the fatty acid composition and TFA content in edible oils and fats, along with the probable intake of trans-fat in the Indian population, was carried out. The analysis was carried out as per the Association of Official Analytical Chemists (AOAC) methodology and the results were statistically analyzed. The average TFA content in nonrefined mustard and refined soybean oils exceeded the Denmark limit of 2% TFA in fats and oils destined for human consumption by 1.16- to 1.64-fold. In branded/nonbranded butter and butter oil samples, the average TFA content exceeded the limit by 4.2- to 9.5-fold, whereas hydrogenated vegetable oil (HVO) samples exceeded the limit by 9.8-fold, when compared with Denmark standards. The probable TFA intake per day through different oils in the Indian population was found to be less than the WHO recommendation. However, Punjab, which has the highest consumption of HVO (~15 g/d), showed a 1.09-fold higher TFA intake than the WHO recommendation; this is alarming and may be one of the factors behind the high cardiovascular disease mortality rate, which needs further elucidation. Thus there is a need to prescribe a TFA limit for edible oil, butter, and butter oil in India and to reduce the already proposed TFA levels in HVO to safeguard the health of consumers. The probable daily intake of trans-fatty acids (TFA), especially through hydrogenated vegetable oil (HVO), was assessed. In the absence of any specification for TFA and fatty acid composition for edible oil, butter, and butter oil samples, a pressing need was felt to prescribe a TFA limit in India. The study indicates that TFA intake through HVO consumption is higher in states like Punjab than the recommended daily intake prescribed by WHO.
Hence, strategies should be adopted to either decrease the consumption of HVO or to modify the industrial processing method of HVO with less content of TFA to safeguard the health of consumers. © 2012 Institute of Food Technologists®
Influence of level of education on disability free life expectancy by sex: the ILSA study.
Minicuci, N; Noale, M
2005-12-01
To assess the effect of education on disability-free life expectancy among older Italians, using a hierarchical model as an indicator of disability, with estimates based on the multistate life table method and IMaCh software. Data were obtained from the Italian Longitudinal Study on Aging, which considered a random sample of 5632 individuals. Total life expectancy ranged from 16.5 years for men aged 65 years to 6 years for men aged 80; the corresponding figures for women were 19.6 and 8.4 years. For both sexes, increasing age was associated with a lower probability of recovery from a mild state of disability, with a greater probability of worsening for all individuals presenting an independent state at baseline, and with a greater probability of dying, except for women in a mild state of disability. A medium/high educational level was associated with a greater probability of recovery only in men with a mild state of disability at baseline, and with a lower probability of worsening in both sexes, except for men with a mild state of disability at baseline. The positive effects of high education are well established in most research work and, education being a modifiable factor, strategies focused on increasing its level, and hence strengthening access to information and use of health services, would produce significant benefits.
Oyeflaten, Irene; Lie, Stein Atle; Ihlebæk, Camilla M; Eriksen, Hege R
2012-09-06
Return to work (RTW) after long-term sick leave can be a long-lasting process in which the individual may shift between work and receiving different social security benefits, as well as between part-time and full-time work. This is a challenge in the assessment of RTW outcomes after rehabilitation interventions. The aim of this study was to analyse the probability of RTW, and the probabilities of transitions between different benefits, during a 4-year follow-up after participation in a work-related rehabilitation program. The sample consisted of 584 patients (66% females), mean age 44 years (sd = 9.3). Mean duration on various types of sick leave benefits at entry to the rehabilitation program was 9.3 months (sd = 3.4). The patients had mental (47%), musculoskeletal (46%), or other diagnoses (7%). Official national register data over a 4-year follow-up period were analysed. Extended statistical tools for multistate models were used to calculate transition probabilities between the following eight states: working, partial sick leave, full-time sick leave, medical rehabilitation, vocational rehabilitation, and disability pension (partial, permanent, and time-limited). During the follow-up there was an increased probability of working, a decreased probability of being on sick leave, and an increased probability of being on disability pension. The probability of RTW was not related to the work and benefit status at departure from the rehabilitation clinic. The patients had an average of 3.7 (range 0-18) transitions between work and the different benefits. The process of RTW or of receiving disability pension was complex and may take several years, with multiple transitions between work and different benefits. Access to reliable register data and the use of a multistate RTW model make it possible to describe the developmental nature and the different levels of the recovery and disability process.
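The multistate transition probabilities described above can be illustrated with a toy estimator (hypothetical three-state monthly sequences, not the study's eight states or register data):

```python
from collections import Counter

def transition_probabilities(sequences):
    """Estimate a discrete-time multistate transition matrix by counting
    observed month-to-month transitions and normalising each row."""
    counts = Counter()
    totals = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {(a, b): n / totals[a] for (a, b), n in counts.items()}

# hypothetical monthly trajectories: W = work, S = sick leave, D = disability
seqs = [list("WWSSW"), list("SWWWD"), list("SSWWW")]
P = transition_probabilities(seqs)   # e.g. P[('S', 'W')] = 3/5
```

Real multistate packages additionally handle censoring and continuous time; this sketch only shows the counting idea behind the transition probabilities.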
Specific cationic emission of cisplatin following ionization by swift protons
NASA Astrophysics Data System (ADS)
Moretto-Capelle, Patrick; Champeaux, Jean-Philippe; Deville, Charlotte; Sence, Martine; Cafarelli, Pierre
2016-05-01
We have investigated collision-induced ionization and fragmentation by 100 keV protons of the radiosensitizing molecule cisplatin, which is used in cancer treatments. A large emission of HCl+ and NH2+ is observed but, surprisingly, no cationic fragments containing platinum are detected, in contrast to ionization-dissociation induced by electron collisions. Theoretical investigations show that the ionization processes take place on the platinum and chlorine atoms. We propose new ionization potentials for cisplatin. Dissociation limits corresponding to the measured fragmentation mass spectrum have been evaluated, and the theoretical results show that the unobserved cationic fragments containing platinum are mostly associated with low dissociation energies. We have also investigated the reaction path for hydrogen transfer from the NH3 group to the Cl atom, as well as the corresponding dissociation limits from this tautomeric form. Here again, the cations containing platinum correspond to lower dissociation limits. Thus, the experimental results suggest that excited states, probably formed via inner-shell ionization of the platinum atom of the molecule and correlated with higher dissociation limits, are favored.
Cost-effectiveness analysis of lapatinib in HER-2-positive advanced breast cancer.
Le, Quang A; Hay, Joel W
2009-02-01
A recent clinical trial demonstrated that the addition of lapatinib to capecitabine in the treatment of HER-2-positive advanced breast cancer (ABC) significantly increases median time to progression. The objective of the current analysis was to assess the cost-effectiveness of this therapy from the US societal perspective. A Markov model comprising 4 health states (stable disease, response to therapy, disease progression, and death) was developed to estimate the projected lifetime clinical and economic implications of this therapy. The model used Monte Carlo simulation to imitate the clinical course of a typical patient with ABC and was updated with response rates and major adverse effects. Transition probabilities were estimated based on the results of the EGF100151 and EGF20002 clinical trials of lapatinib. Health state utilities, direct and indirect costs of the therapy, major adverse events, laboratory tests, and costs of disease progression were obtained from published sources. The model used a 3% discount rate, with results reported in 2007 US dollars. Over a lifetime, the addition of lapatinib to capecitabine as combination therapy was estimated to cost an additional $19,630, with an expected gain of 0.12 quality-adjusted life years (QALY), for an incremental cost-effectiveness ratio (ICER) of $166,113 per QALY gained. The 95% confidence limits of the ICER ranged from $158,000 to $215,000/QALY. A cost-effectiveness acceptability curve indicated a less than 1% probability that the ICER would be lower than $100,000/QALY. Compared with commonly accepted willingness-to-pay thresholds in oncology treatment, the addition of lapatinib to capecitabine is not clearly cost-effective and is most likely to result in an ICER somewhat higher than societal willingness-to-pay threshold limits. (c) 2008 American Cancer Society.
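A Markov cohort model of the kind described can be sketched as follows; every transition probability, cost, and utility here is an invented placeholder, not a value from the EGF trials:

```python
def expected_outcomes(P, costs, utils, start, cycles=120,
                      disc=0.03, cycles_per_year=12):
    """Run a cohort through a 4-state Markov model and return
    discounted total cost and QALYs (hypothetical inputs throughout)."""
    states = list(P)
    dist = {s: 1.0 if s == start else 0.0 for s in states}
    cost = qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** (t / cycles_per_year)  # annual discounting
        for s in states:
            cost += d * dist[s] * costs[s]
            qaly += d * dist[s] * utils[s] / cycles_per_year
        dist = {s2: sum(dist[s1] * P[s1][s2] for s1 in states)
                for s2 in states}
    return cost, qaly

# placeholder monthly transition matrix over the four abstract states
P = {
    "stable":   {"stable": 0.80, "response": 0.10, "progress": 0.08, "death": 0.02},
    "response": {"stable": 0.05, "response": 0.85, "progress": 0.08, "death": 0.02},
    "progress": {"stable": 0.00, "response": 0.00, "progress": 0.90, "death": 0.10},
    "death":    {"stable": 0.00, "response": 0.00, "progress": 0.00, "death": 1.00},
}
costs = {"stable": 3000, "response": 3500, "progress": 2000, "death": 0}
utils = {"stable": 0.70, "response": 0.80, "progress": 0.50, "death": 0.0}

c, q = expected_outcomes(P, costs, utils, start="stable")
# an ICER compares two strategies: (cost_new - cost_old) / (qaly_new - qaly_old)
```

Running the model once per strategy (with and without the add-on therapy) and dividing the incremental cost by the incremental QALYs reproduces the ICER calculation in the abstract.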
Diederich, Adele
2008-02-01
Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
Risk and Vulnerability Analysis of Satellites Due to MM/SD with PIRAT
NASA Astrophysics Data System (ADS)
Kempf, Scott; Schäfer, Frank; Rudolph, Martin; Welty, Nathan; Donath, Therese; Destefanis, Roberto; Grassi, Lilith; Janovsky, Rolf; Evans, Leanne; Winterboer, Arne
2013-08-01
Until recently, the state-of-the-art assessment of the threat posed to spacecraft by micrometeoroids and space debris was limited to the application of ballistic limit equations to the outer hull of a spacecraft. The probability of no penetration (PNP) is acceptable for assessing the risk and vulnerability of manned space missions; however, for unmanned missions, where penetrations of the spacecraft exterior do not necessarily constitute satellite or mission failure, these values are overly conservative. The software tool PIRAT (Particle Impact Risk and Vulnerability Analysis Tool) was developed based on the Schäfer-Ryan-Lambert (SRL) triple-wall ballistic limit equation (BLE), which is applicable to various satellite components. As a result, it has become possible to assess the individual failure rates of satellite components. This paper demonstrates the modeling of an example satellite, the performance of a PIRAT analysis, and the potential for subsequent design optimization with respect to micrometeoroid and space debris (MM/SD) impact risk.
NASA Technical Reports Server (NTRS)
Stupl, Jan Michael; Faber, Nicolas; Foster, Cyrus; Yang Yang, Fan; Levit, Creon
2013-01-01
The potential to perturb debris orbits using photon pressure from ground-based lasers has been confirmed by independent research teams. Two useful applications of this scheme are protecting space assets from impacts with debris and stabilizing the orbital debris environment, both relying on collision avoidance rather than de-orbiting debris. This paper presents the results of a new assessment method to analyze the efficiency of the concept for collision avoidance. Earlier research concluded that one ground-based system consisting of a 10 kW class laser, directed by a 1.5 m telescope with adaptive optics, can prevent a significant fraction of debris-debris collisions in low Earth orbit. That research used in-track displacement to measure efficiency and restricted itself to an analysis of a limited number of objects. As orbit prediction error is dependent on debris object properties, a static displacement threshold should be complemented with another measure to assess the efficiency of the scheme. In this paper we present the results of an approach using probability of collision. Using a least-squares fitting method, we improve the quality of the original TLE catalogue in terms of state and co-state accuracy. We then calculate collision probabilities for all the objects in the catalogue. The conjunctions with the highest risk of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the collision probability in a 20-minute window around the original conjunction. We then use different criteria to evaluate the utility of the laser-based collision avoidance scheme and assess the number of baseline ground stations needed to mitigate a significant number of high-probability conjunctions. Finally, we give an account of how a laser ground station can be used for both orbit deflection and debris tracking.
Limited family structure and BRCA gene mutation status in single cases of breast cancer.
Weitzel, Jeffrey N; Lagos, Veronica I; Cullinane, Carey A; Gambol, Patricia J; Culver, Julie O; Blazer, Kathleen R; Palomares, Melanie R; Lowstuter, Katrina J; MacDonald, Deborah J
2007-06-20
An autosomal dominant pattern of hereditary breast cancer may be masked by small family size or transmission through males given sex-limited expression. To determine if BRCA gene mutations are more prevalent among single cases of early onset breast cancer in families with limited vs adequate family structure than would be predicted by currently available probability models. A total of 1543 women seen at US high-risk clinics for genetic cancer risk assessment and BRCA gene testing were enrolled in a prospective registry study between April 1997 and February 2007. Three hundred six of these women had breast cancer before age 50 years and no first- or second-degree relatives with breast or ovarian cancers. The main outcome measure was whether family structure, assessed from multigenerational pedigrees, predicts BRCA gene mutation status. Limited family structure was defined as fewer than 2 first- or second-degree female relatives surviving beyond age 45 years in either lineage. Family structure effect and mutation probability by the Couch, Myriad, and BRCAPRO models were assessed with stepwise multiple logistic regression. Model sensitivity and specificity were determined and receiver operating characteristic curves were generated. Family structure was limited in 153 cases (50%). BRCA gene mutations were detected in 13.7% of participants with limited vs 5.2% with adequate family structure. Family structure was a significant predictor of mutation status (odds ratio, 2.8; 95% confidence interval, 1.19-6.73; P = .02). Although none of the models performed well, receiver operating characteristic analysis indicated that modification of BRCAPRO output by a corrective probability index accounting for family structure was the most accurate BRCA gene mutation status predictor (area under the curve, 0.72; 95% confidence interval, 0.63-0.81; P<.001) for single cases of breast cancer. Family structure can affect the accuracy of mutation probability models. 
Genetic testing guidelines may need to be more inclusive for single cases of breast cancer when the family structure is limited and probability models need to be recreated using limited family history as an actual variable.
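The receiver operating characteristic analysis above reduces, for the area under the curve, to a rank comparison; a minimal sketch with hypothetical model probabilities (not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a mutation carrier receives a
    higher model probability than a non-carrier (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# hypothetical model probabilities for carriers vs non-carriers
carriers = [0.9, 0.6, 0.4]
noncarriers = [0.5, 0.3, 0.2, 0.1]
auc = roc_auc(carriers, noncarriers)   # 11 of 12 pairs ranked correctly
```

An AUC of 0.72, as reported for the corrected BRCAPRO output, means a randomly chosen carrier outranks a randomly chosen non-carrier 72% of the time.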
van der Wel, Kjetil A; Dahl, Espen; Thielen, Karsten
2012-01-01
In comparative studies of health inequalities, public health researchers have usually studied only disease and illness. Recent studies have also examined the sickness dimension of health, that is, the extent to which ill health is accompanied by joblessness, and how this association varies by education within different welfare contexts. This research has used either a limited number of countries or quantitative welfare state measures in studies of many countries. In this study, the authors expand on this knowledge by investigating whether a regime approach to the welfare state produces consistent results. They analyze data from the European Union Statistics on Income and Living Conditions (EU-SILC); health was measured by limiting longstanding illness (LLSI). Results show that for both men and women reporting LLSI in combination with low educational level, the probabilities of non-employment were particularly high in the Anglo-Saxon and Eastern welfare regimes, and lowest in the Scandinavian regime. For men, absolute and relative social inequalities in sickness were lowest in the Southern regime; for women, inequalities were lowest in the Scandinavian regime. The authors conclude that the Scandinavian welfare regime is more able than other regimes to protect against non-employment in the face of illness, especially for individuals with low educational level.
Landau-Zener transitions and Dykhne formula in a simple continuum model
NASA Astrophysics Data System (ADS)
Dunham, Yujin; Garmon, Savannah
The Landau-Zener model describing the interaction between two linearly driven discrete levels is useful in describing many simple dynamical systems; however, no system is completely isolated from its surrounding environment. Here we examine a generalization of the original Landau-Zener model to study simple environmental influences. We consider a model in which one of the discrete levels is replaced with an energy continuum, and we find that the survival probability for the initially occupied diabatic level is unaffected by the presence of the continuum. This result can be predicted by assuming that each step in the evolution of the diabatic state evolves independently according to the Landau-Zener formula, even in the continuum limit. We also show that, at least for the simplest model, this result can be predicted with the natural generalization of the Dykhne formula for open systems. We also observe dissipation, as the non-escape probability from the discrete levels is no longer equal to one.
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
Lizárraga-Mendiola, L; González-Sandoval, M R; Durán-Domínguez, M C; Márquez-Herrera, C
2009-08-01
The geochemical behavior of zinc, lead, and copper from sulfidic tailings in a mine site with the potential to generate acidic drainage (pyrite (55%) and sphalerite (2%)) is reported in this paper. The mining area is divided into two zones, considering the topographic location of sampling points with respect to the tailings pile: (a) the outer zone, outside the probable influence of acid mine drainage (AMD) pollution, and (b) the inner zone, probably influenced by AMD pollution. Maximum total ion concentrations (mg/L) measured in surface waters were, in the outer zone: As (0.2), Cd (0.9), Fe (19), Mn (39), Pb (5.02), SO4(2-) (4650), Zn (107.67); and in the inner zone: As (0.1), Cd (0.2), Fe (88), Mn (13), Pb (6), SO4(2-) (4880), Zn (46). The presence of these ions, which exceed the maximum permissible limits for human consumption, could be associated with the tailings mineralogy and the acid leachates generated in the tailings pile.
Quantum-tunneling isotope-exchange reaction H2 + D- → HD + H-
NASA Astrophysics Data System (ADS)
Yuen, Chi Hong; Ayouz, Mehdi; Endres, Eric S.; Lakhamanskaya, Olga; Wester, Roland; Kokoouline, Viatcheslav
2018-02-01
The tunneling reaction H2 + D- → HD + H- was studied in a recent experimental work at low temperatures (10, 19, and 23 K) by Endres et al. [Phys. Rev. A 95, 022706 (2017), 10.1103/PhysRevA.95.022706]. An upper limit of the rate coefficient was found to be about 10^(-18) cm^3/s. In the present study, reaction probabilities are determined using the ABC program developed by Skouteris et al. [Comput. Phys. Commun. 133, 128 (2000), 10.1016/S0010-4655(00)00167-3]. The probabilities for ortho-H2 and para-H2 in their ground rovibrational states are obtained numerically at collision energies above 50 meV with total angular momentum J = 0-15 and extrapolated below 50 meV using a WKB approach. Thermally averaged rate coefficients for ortho- and para-H2 are obtained; the largest one, for ortho-H2, is about 3.1 × 10^(-20) cm^3/s, which agrees with the experimental results.
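The thermal averaging step described above can be sketched as follows: energy-resolved rate coefficients k(E) are weighted by a Maxwell-Boltzmann collision-energy distribution. This is a minimal illustration, not the authors' code; the energy grid and k(E) values are placeholders.

```python
import numpy as np

def thermal_rate(energies_eV, k_of_E, T_K):
    """Maxwell-Boltzmann average of energy-resolved rate coefficients:
    weight ~ sqrt(E) * exp(-E / kB T) for the 3D relative-motion
    collision-energy distribution."""
    kB = 8.617333262e-5  # Boltzmann constant in eV/K
    w = np.sqrt(energies_eV) * np.exp(-energies_eV / (kB * T_K))
    return np.trapz(w * k_of_E, energies_eV) / np.trapz(w, energies_eV)

# Sanity check: an energy-independent k(E) is returned unchanged.
E = np.linspace(1e-4, 0.05, 500)   # collision energies, eV (placeholder grid)
k = np.full_like(E, 3.1e-20)       # constant k(E), cm^3/s (placeholder value)
k_T = thermal_rate(E, k, 10.0)
```

For a k(E) that rises with energy, the same routine gives a larger thermal average at higher temperature, as expected for a tunneling-dominated reaction probed at 10-23 K.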
Immunomodulation of Tumor Growth
Prehn, Richmond T.
1974-01-01
Most and perhaps all neoplasms arouse an immune response in their hosts. Unfortunately, this response is seldom effective in limiting tumor growth. Immunologic surveillance, as originally conceived, probably does not exist. The early weak response to nascent tumors stimulates rather than inhibits their growth. A truly tumor-limiting reaction occurs only in exceptional tumor systems, and then it is relatively late and ineffectual. Immunity may be of great importance in limiting the activity of oncogenic viruses, but is probably seldom the determiner of whether or not an already transformed cell gives rise to a lethal cancer. PMID:4548632
Continuous-Time Classical and Quantum Random Walk on Direct Product of Cayley Graphs
NASA Astrophysics Data System (ADS)
Salimi, S.; Jafarizadeh, M. A.
2009-06-01
In this paper we define the direct product of graphs and give a recipe for obtaining the probability of observing the particle on the vertices in continuous-time classical and quantum random walks. In the recipe, the probability of observing the particle on the direct product graph is obtained by multiplying the probabilities on the corresponding sub-graphs, which makes the method useful for determining probabilities of walks on complicated graphs. Using this method, we calculate the probability of continuous-time classical and quantum random walks on many finite direct-product Cayley graphs (complete cycle, complete Kn, charter and n-cube). We also show that for the classical walk the stationary uniform distribution is reached as t → ∞, whereas for the quantum walk this is not always the case.
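The multiplication recipe can be illustrated for the classical walk. The sketch below (not the authors' code) uses the Cartesian-product convention, where the generator of the product walk is L1 ⊗ I + I ⊗ L2; because the two Kronecker terms commute, exp(-tL) factorizes and each vertex probability is the product of the sub-graph probabilities:

```python
import numpy as np
from scipy.linalg import expm

def ctrw_prob(L, t, start):
    """Continuous-time classical random walk: apply P(t) = exp(-t L)
    to the initial distribution concentrated on vertex `start`."""
    p0 = np.zeros(L.shape[0])
    p0[start] = 1.0
    return expm(-t * L) @ p0

def cycle_laplacian(n):
    """Graph Laplacian D - A of the cycle C_n."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

L1, L2 = cycle_laplacian(3), cycle_laplacian(4)
t = 0.7
# Generator of the product walk (Cartesian-product convention):
Lprod = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)
p_prod = ctrw_prob(Lprod, t, 0)  # start at vertex (0, 0)
p_fact = np.kron(ctrw_prob(L1, t, 0), ctrw_prob(L2, t, 0))
assert np.allclose(p_prod, p_fact)  # vertex probabilities factorize
```

The quantum walk factorizes analogously at the amplitude level, exp(-itA1⊗I - itI⊗A2) = exp(-itA1) ⊗ exp(-itA2), so the observation probabilities |amplitude|² are again products.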
Stabilizing multicellularity through ratcheting
Libby, Eric; Conlin, Peter L.; Kerr, Ben; Ratcliff, William C.
2016-01-01
The evolutionary transition to multicellularity probably began with the formation of simple undifferentiated cellular groups. Such groups evolve readily in diverse lineages of extant unicellular taxa, suggesting that there are few genetic barriers to this first key step. This may act as a double-edged sword: labile transitions between unicellular and multicellular states may facilitate the evolution of simple multicellularity, but reversion to a unicellular state may inhibit the evolution of increased complexity. In this paper, we examine how multicellular adaptations can act as evolutionary ‘ratchets’, limiting the potential for reversion to unicellularity. We consider a nascent multicellular lineage growing in an environment that varies between favouring multicellularity and favouring unicellularity. The first type of ratcheting mutation increases cell-level fitness in a multicellular context but is costly in a single-celled context, reducing the fitness of revertants. The second type of ratcheting mutation directly decreases the probability that a mutation will result in reversion (either as a pleiotropic consequence or via direct modification of switch rates). We show that both types of ratcheting mutations act to stabilize the multicellular state. We also identify synergistic effects between the two types of ratcheting mutations in which the presence of one creates the selective conditions favouring the other. Ratcheting mutations may play a key role in diverse evolutionary transitions in individuality, sustaining selection on the new higher-level organism by constraining evolutionary reversion. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431522
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (= 0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. 
By emphasizing and modelling detection probabilities, we now know: (i) that eradication of this species by searching is possible, (ii) how much searching effort would be required, (iii) under what environmental conditions searching would be most efficient, and (iv) several factors that are likely to modulate this quantification when searching is applied to new areas. The same approach can be used for evaluation of any control technology or population monitoring programme. © 2009 The Authors. Journal compilation © 2009 British Ecological Society.
Lost in search: (Mal-)adaptation to probabilistic decision environments in children and adults.
Betsch, Tilmann; Lehmann, Anne; Lindow, Stefanie; Lang, Anna; Schoemann, Martin
2016-02-01
Adaptive decision making in probabilistic environments requires individuals to use probabilities as weights in predecisional information searches and/or when making subsequent choices. Within a child-friendly computerized environment (Mousekids), we tracked 205 children's (105 children 5-6 years of age and 100 children 9-10 years of age) and 103 adults' (age range: 21-22 years) search behaviors and decisions under different probability dispersions (.17, .33, .83 vs. .50, .67, .83) and constraint conditions (instructions to limit search: yes vs. no). All age groups limited their depth of search when instructed to do so and when probability dispersion was high (range: .17-.83). Unlike adults, children failed to use probabilities as weights for their searches, which were largely not systematic. When examining choices, however, elementary school children (unlike preschoolers) systematically used probabilities as weights in their decisions. This suggests that an intuitive understanding of probabilities and the capacity to use them as weights during integration is not a sufficient condition for applying simple selective search strategies that place one's focus on weight distributions. PsycINFO Database Record (c) 2016 APA, all rights reserved.
Probability theory for 3-layer remote sensing radiative transfer model: univariate case.
Ben-David, Avishai; Davidson, Charles E
2012-04-23
A probability model for a 3-layer radiative transfer model (foreground layer, cloud layer, background layer, and an external source at the end of line of sight) has been developed. The 3-layer model is fundamentally important as the primary physical model in passive infrared remote sensing. The probability model is described by the Johnson family of distributions that are used as a fit for theoretically computed moments of the radiative transfer model. From the Johnson family we use the SU distribution that can address a wide range of skewness and kurtosis values (in addition to addressing the first two moments, mean and variance). In the limit, SU can also describe lognormal and normal distributions. With the probability model one can evaluate the potential for detecting a target (vapor cloud layer), the probability of observing thermal contrast, and evaluate performance (receiver operating characteristics curves) in clutter-noise limited scenarios. This is (to our knowledge) the first probability model for the 3-layer remote sensing geometry that treats all parameters as random variables and includes higher-order statistics. © 2012 Optical Society of America
Communication: Reactivity borrowing in the mode selective chemistry of H + CHD3 → H2 + CD3
NASA Astrophysics Data System (ADS)
Ellerbrock, Roman; Manthe, Uwe
2017-12-01
Quantum state-resolved reaction probabilities for the H + CHD3 → H2 + CD3 reaction are calculated by accurate full-dimensional quantum dynamics calculations using the multi-layer multi-configurational time-dependent Hartree approach and the quantum transition state concept. Reaction probabilities of various ro-vibrational states of the CHD3 reactant are investigated for vanishing total angular momentum. While the reactivity of the different vibrational states of CHD3 mostly follows intuitive patterns, an unusually large reaction probability is found for CHD3 molecules triply excited in the CD3 umbrella-bending vibration. This surprising reactivity can be explained by a Fermi resonance-type mixing of the single CH-stretch excited and the triple CD3 umbrella-bend excited vibrational states of CHD3. These findings show that resonant energy transfer can significantly affect the mode-selective chemistry of CHD3 and result in counter-intuitive reactivity patterns.
Serrano-Alarcón, Manuel; Perelman, Julian
2017-10-03
In a context of population ageing, it is a priority for planning and prevention to understand the socioeconomic (SE) patterning of functional limitations and its consequences on healthcare needs. This paper aims at measuring the gender and SE inequalities in functional limitations and their age of onset among the Southern European elderly; then, we evaluate how functional status is linked to formal and informal care use. We used Portuguese, Italian and Spanish data from the Survey of Health, Ageing and Retirement in Europe (SHARE) of 2011 (n = 9233). We constructed a summary functional limitation score as the sum of two variables: i) Activities of Daily Living (ADL) and ii) Instrumental Activities of Daily Living (IADL). We modelled functional limitation as a function of age, gender, education, subjective poverty, employment and marital status using multinomial logit models. We then estimated how functional limitation affected informal and formal care demand using negative binomial and logistic models. Women were 2.3 percentage points (pp) more likely to experience severe functional limitation than men, and crossed a 10% probability threshold of suffering from severe limitation around 5 years earlier. Subjective poverty was associated with a 3.1 pp higher probability of severe functional limitation. Having a university degree reduced the probability of severe functional limitation by 3.5 pp compared with having no education. Discrepancies were wider for the oldest old: women aged 65-79 years were 3.3 pp more likely to suffer severe limitations, the excess risk increasing to 15.5 pp among those older than 80. Similarly, educational inequalities in functional limitation were wider at older ages. Being severely limited was associated with a 32.1 pp higher probability of receiving any informal care, as compared to those moderately limited. 
Finally, those severely limited had, on average, 3.2 more hospitalization days and 4.6 more doctor consultations per year than those without limitations. Functional limitations are unequally distributed, hitting women and the worse-off earlier and more severely, with consequences for care needs. Considering the burden on healthcare systems and families, public health policies should seek to reduce current inequalities in functional limitations.
Lepak, Jesse M.; Hooten, Mevin B.; Eagles-Smith, Collin A.; Tate, Michael T.; Lutz, Michelle A.; Ackerman, Joshua T.; Willacker, James J.; Jackson, Allyson K.; Evers, David C.; Wiener, James G.; Pritz, Colleen Flanagan; Davis, Jay
2016-01-01
Fish represent high quality protein and nutrient sources, but Hg contamination is ubiquitous in aquatic ecosystems and can pose health risks to fish and their consumers. Potential health risks posed to fish and humans by Hg contamination in fish were assessed in western Canada and the United States. A large compilation of inland fish Hg concentrations was evaluated in terms of potential health risk to the fish themselves, health risk to predatory fish that consume Hg contaminated fish, and to humans that consume Hg contaminated fish. The probability that a fish collected from a given location would exceed a Hg concentration benchmark relevant to a health risk was calculated. These exceedance probabilities and their associated uncertainties were characterized for fish of multiple size classes at multiple health-relevant benchmarks. The approach was novel and allowed for the assessment of the potential for deleterious health effects in fish and humans associated with Hg contamination in fish across this broad study area. Exceedance probabilities were relatively common at low Hg concentration benchmarks, particularly for fish in larger size classes. Specifically, median exceedances for the largest size classes of fish evaluated at the lowest Hg concentration benchmarks were 0.73 (potential health risks to fish themselves), 0.90 (potential health risk to predatory fish that consume Hg contaminated fish), and 0.97 (potential for restricted fish consumption by humans), but diminished to essentially zero at the highest benchmarks and smallest fish size classes. Exceedances of benchmarks are likely to have deleterious health effects on fish and limit recommended amounts of fish humans consume in western Canada and the United States. Results presented here are not intended to subvert or replace local fish Hg data or consumption advice, but provide a basis for identifying areas of potential health risk and developing more focused future research and monitoring efforts.
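The central quantity above, the probability that a fish exceeds a health-relevant Hg benchmark, can be sketched under a simple lognormal tissue-concentration model. This is illustrative only; the numbers below are placeholders, not values from the study:

```python
import math

def exceedance_probability(log_mean, log_sd, benchmark):
    """P(concentration > benchmark) under a lognormal model for
    fish-tissue Hg; log_mean/log_sd describe ln(concentration)."""
    z = (math.log(benchmark) - log_mean) / log_sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - Phi(z)

# Hypothetical numbers: median concentration 0.30 mg/kg, benchmark at the
# median -> exceedance probability is exactly 0.5 by symmetry.
p_at_median = exceedance_probability(math.log(0.30), 0.6, 0.30)
p_high_benchmark = exceedance_probability(math.log(0.30), 0.6, 0.77)
```

Raising the benchmark drives the exceedance probability toward zero, mirroring the pattern reported above (high exceedance at low benchmarks, essentially zero at the highest).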
Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth
2007-05-21
The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
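A multiplicative-update density-of-states estimate in the spirit of the self-consistent method described above can be sketched for a toy system with a known answer. This is a generic Wang-Landau-style illustration, not the paper's algorithm (which also combines importance sampling in a landscape setting):

```python
import math
import random

def dos_estimate(N=8, n_steps=20000, flatness=0.8, lnf_final=1e-6, seed=1):
    """Estimate ln g(E) for N independent up/down spins, where E is the
    number of 'up' spins, so the exact density of states is C(N, E).
    Sampling with weight 1/g(E) flattens the energy histogram; lnf is
    the (log of the) multiplicative update factor."""
    random.seed(seed)
    lng = [0.0] * (N + 1)   # running estimate of ln g(E)
    lnf = 1.0               # multiplicative update, halved when flat
    state, E = [0] * N, 0
    while lnf > lnf_final:
        hist = [0] * (N + 1)
        for _ in range(n_steps):
            i = random.randrange(N)
            E_new = E + (1 if state[i] == 0 else -1)
            # accept spin flip with probability min(1, g(E)/g(E_new))
            if math.log(random.random()) < lng[E] - lng[E_new]:
                state[i] ^= 1
                E = E_new
            lng[E] += lnf   # multiplicative update of g(E)
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            lnf /= 2.0      # histogram flat enough: tighten the update
    return lng

lng = dos_estimate()
exact = [math.log(math.comb(8, k)) for k in range(9)]  # ln C(8, k)
```

Only ratios g(E)/g(E') are determined, so the estimate should be compared to the exact binomial coefficients after anchoring one bin.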
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
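A prediction-interval funnel limit of the kind compared above can be sketched for Binomial outcomes. Note that the benchmark p0 is treated as known here, which is exactly the simplification the paper examines; this is an illustrative sketch, not the authors' code:

```python
from scipy.stats import binom

def funnel_limits(p0, n, level=0.95):
    """Binomial prediction-interval control limits for a funnel plot:
    a provider with n cases is flagged if its observed proportion falls
    outside [lo, hi]. Discreteness means the achieved coverage is only
    approximately `level`, even before benchmark uncertainty is added."""
    alpha = 1.0 - level
    lo = binom.ppf(alpha / 2, n, p0) / n
    hi = binom.ppf(1 - alpha / 2, n, p0) / n
    return lo, hi

lo_small, hi_small = funnel_limits(0.10, 100)    # small provider
lo_large, hi_large = funnel_limits(0.10, 1000)   # large provider
```

The limits narrow around the benchmark as provider volume grows, producing the characteristic funnel shape.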
A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter
2016-06-01
Excerpt from the report's figure list: a seakeeping index polar plot for Sea State 4 (all headings relative to the wave motion; velocity given in meters per second), and Figure 15, probability and cumulative density functions of annual sea state occurrences in the open ocean, North Pacific. Probability distribution functions are available that describe the likelihood that an operational area will experience a given sea state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Zhong-Xiao, E-mail: zxman@mail.qfnu.edu.cn; An, Nguyen Ba, E-mail: nban@iop.vast.ac.vn; Xia, Yun-Jie, E-mail: yjxia@mail.qfnu.edu.cn
In combination with the theories of open system and quantum recovering measurement, we propose a quantum state transfer scheme using spin chains by performing two sequential operations: a projective measurement on the spins of ‘environment’ followed by suitably designed quantum recovering measurements on the spins of interest. The scheme allows perfect transfer of arbitrary multispin states through multiple parallel spin chains with finite probability. Our scheme is universal in the sense that it is state-independent and applicable to any model possessing spin–spin interactions. We also present possible methods to implement the required measurements taking into account the current experimental technologies. As applications, we consider two typical models for which the probabilities of perfect state transfer are found to be reasonably high at optimally chosen moments during the time evolution. - Highlights: • Scheme that can achieve perfect quantum state transfer is devised. • The scheme is state-independent and applicable to any spin-interaction models. • The scheme allows perfect transfer of arbitrary multispin states. • Applications to two typical models are considered in detail.
Homodyning and heterodyning the quantum phase
NASA Technical Reports Server (NTRS)
Dariano, Giacomo M.; Macchiavello, C.; Paris, M. G. A.
1994-01-01
The double-homodyne and the heterodyne detection schemes for phase shifts between two synchronous modes of the electromagnetic field are analyzed in the framework of quantum estimation theory. The probability operator-valued measures (POM's) of the detectors are evaluated and compared with the ideal one in the limit of a strong local reference oscillator. The present operational approach leads to a reasonable definition of phase measurement, whose sensitivity is actually related to the output r.m.s. noise of the photodetector. We emphasize that the simple-homodyne scheme does not correspond to a proper phase-shift measurement, as it is just a zero-point detector. The sensitivities of all detection schemes are optimized at fixed energy with respect to the input state of radiation. It is shown that the optimal sensitivity can actually be achieved using suitable squeezed states.
Study of nonequilibrium work distributions from a fluctuating lattice Boltzmann model.
Nasarayya Chari, S Siva; Murthy, K P N; Inguva, Ramarao
2012-04-01
A system of ideal gas is switched from an initial equilibrium state to a final state not necessarily in equilibrium, by varying a macroscopic control variable according to a well-defined protocol. The distribution of work performed during the switching process is obtained. The equilibrium free energy difference, ΔF, is determined from the work fluctuation relation. Some of the work values in the ensemble shall be less than ΔF. We term these as ones that "violate" the second law of thermodynamics. A fluctuating lattice Boltzmann model has been employed to carry out the simulation of the switching experiment. Our results show that the probability of violation of the second law increases with the increase of switching time (τ) and tends to one-half in the reversible limit of τ→∞.
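The fraction of "second-law-violating" work values can be illustrated with a Gaussian work distribution, for which the Jarzynski equality gives ΔF in closed form. This is a generic sketch in reduced units, not the fluctuating lattice Boltzmann simulation itself:

```python
import math
import random

random.seed(0)
beta = 1.0                  # 1/kT in reduced units
mu, sigma = 2.0, 1.5        # mean and spread of the work distribution
works = [random.gauss(mu, sigma) for _ in range(200000)]

# Jarzynski equality: exp(-beta dF) = <exp(-beta W)>
dF = -math.log(sum(math.exp(-beta * w) for w in works) / len(works)) / beta

# For Gaussian work, dF = mu - beta sigma^2 / 2 exactly, and the fraction
# of trajectories with W < dF ("violations") is Phi(-beta sigma / 2).
dF_exact = mu - beta * sigma ** 2 / 2
frac = sum(w < dF_exact for w in works) / len(works)
frac_exact = 0.5 * math.erfc(beta * sigma / (2 * math.sqrt(2)))
```

Since the violating fraction is Phi(-beta sigma/2), it tends to one-half as sigma → 0, mirroring the reversible-limit (τ → ∞) behaviour reported above.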
NASA Astrophysics Data System (ADS)
Naine, Tarun Bharath; Gundawar, Manoj Kumar
2017-09-01
We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. Because no analytical solution exists and the differences in ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulation. However, such simulations, which are based on Monte Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems, differing in the value of the shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from a discrete probability between two limits agree excellently with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations: the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.
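The gamma model for neighbor distances can be sketched directly; the shaping parameter controls how sharply spacings cluster about the mean. This illustrates the distribution only, not the paper's propagation-rate construction or its specific limits:

```python
from scipy.stats import gamma

def prob_distance_between(shape, mean_dist, lower, upper):
    """Discrete probability that the distance to the neighboring cell
    falls between `lower` and `upper`, with distances gamma-distributed
    (shape parameter k, scale chosen so the mean equals mean_dist)."""
    scale = mean_dist / shape
    return (gamma.cdf(upper, shape, scale=scale)
            - gamma.cdf(lower, shape, scale=scale))

# Larger shape parameter -> more regular spacing -> more probability
# mass in a window around the mean distance.
p_broad = prob_distance_between(1.0, 1.0, 0.8, 1.2)    # exponential-like
p_narrow = prob_distance_between(20.0, 1.0, 0.8, 1.2)  # near-regular
```

In the paper's approach, a probability of this kind evaluated between a non-dimensional lower limit (set by the ignition temperature) and an upper limit near 1.3 stands in for the full Monte Carlo average.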
Nash Equilibrium of Social-Learning Agents in a Restless Multiarmed Bandit Game.
Nakayama, Kazuaki; Hisakado, Masato; Mori, Shintaro
2017-05-16
We study a simple model for social-learning agents in a restless multiarmed bandit (rMAB). The bandit has one good arm that changes to a bad one with a certain probability. Each agent stochastically selects one of the two methods, random search (individual learning) or copying information from other agents (social learning), using which he/she seeks the good arm. Fitness of an agent is the probability to know the good arm in the steady state of the agent system. In this model, we explicitly construct the unique Nash equilibrium state and show that the corresponding strategy for each agent is an evolutionarily stable strategy (ESS) in the sense of Thomas. It is shown that the fitness of an agent with ESS is superior to that of an asocial learner when the success probability of social learning is greater than a threshold determined from the probability of success of individual learning, the probability of change of state of the rMAB, and the number of agents. The ESS Nash equilibrium is a solution to Rogers' paradox.
Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane
2011-12-01
The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.
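The hidden-Markov machinery underlying this approach can be sketched with the forward recursion, which yields both the likelihood of an observation sequence and the probability of occupying each latent state at each time point. This is a generic two-state toy with made-up parameters, not the study's four-class model:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm for a hidden Markov chain: returns the sequence
    likelihood P(obs) and the filtered state probabilities at each time."""
    alpha = pi * B[:, obs[0]]            # unnormalized forward variables
    filtered = [alpha / alpha.sum()]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
        filtered.append(alpha / alpha.sum())
    return alpha.sum(), np.array(filtered)

# Hypothetical 2-state example ("healthier" / "sicker"):
pi = np.array([0.7, 0.3])                 # initial state probabilities
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # state transition probabilities
B = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(observed index category | state)
lik, filt = forward(pi, A, B, [0, 0, 1])
```

In the study's setting, the observations would be the repeated Delirium Index measurements at 0, 2, and 6 months, and the most probable state sequences would define the clinical courses.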
Rodriguez-Sanchez, B; Alessie, R J M; Feenstra, T L; Angelini, V
2018-06-01
To assess the impact of diabetes and diabetes-related complications on two measures of productivity for people in the labour force and out of it, namely "being afraid health limits ability to work before retirement" and "volunteering". Logistic regressions were run to test the impact of diabetes and its complications on the probability of being afraid health limits work and of being a formal volunteer. The longitudinal sample for the former outcome includes 53,631 observations, clustered in 34,393 individuals aged 50-65 years, whereas the latter consists of 45,384 observations, grouped in 29,104 individuals aged 65 and above, across twelve European countries taken from the Survey of Health, Ageing and Retirement in Europe, from 2006 to 2013. Diabetes increased the probability of being afraid health limited work by nearly 11 percentage points, adjusted for clinical complications, and reduced the likelihood of being a formal volunteer by 2.7 percentage points, additionally adjusted for mobility problems. We also found that both the probability of being afraid health limits work and the probability of being a formal volunteer increased during and after the crisis. Moreover, having diabetes had a larger effect on being afraid health limits work during the year 2010, possibly related to the financial crisis. Our findings show that diabetes significantly affects people's perception of the effects of their condition on work, increasing the fear that health limits their ability to work, especially during the crisis year 2010, as well as participation in volunteering among retired people.
Decay modes of the Hoyle state in 12C
NASA Astrophysics Data System (ADS)
Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.
2018-04-01
Recent experimental results give an upper limit of less than 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α relative to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate this ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within high-statistics experimental capabilities. Our results can also be tested by measuring the decay modes of high-excitation-energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would mainly decay sequentially with 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it exists, is at least 8 orders of magnitude less probable than the Hoyle state, thus below the sensitivity of recent and past experiments.
Capitani, Paolo; Cerri, Matteo; Amici, Roberto; Baracchi, Francesca; Jones, Christine Ann; Luppi, Marco; Perez, Emanuele; Parmeggiani, Pier Luigi; Zamboni, Giovanni
A shift of physiological regulations from a homeostatic to a non-homeostatic modality characterizes the passage from non-REM sleep (NREMS) to REM sleep (REMS). In the rat, an EEG index which allows the automatic scoring of transitions from NREMS to REMS has been proposed: the NREMS-to-REMS transition indicator value, NIV [J.H. Benington et al., Sleep 17 (1994) 28-36]. However, such transitions are not always followed by a REMS episode, but are often followed by an awakening. In the present study, the relationship between changes in EEG activity and hypothalamic temperature (Thy), taken as an index of autonomic activity, was studied within a window consisting of the 60 s preceding a state change from a consolidated NREMS episode. Furthermore, the probability that a transition would lead to REMS or wake was analysed. The results showed that, within this time window, both a modified NIV (NIV(60)) and the difference between Thy values at the limits of the window (Thy(D)) were related to the probability of REMS onset. The relationship between each index and the probability of REMS onset was sigmoid, saturating at a probability level of around 50-60%. The efficacy of Thy(D) as a predictor of successful transitions from NREMS to REMS supports the view that such a transition is a dynamic process in which the physiological risk of entering REMS is weighted at a central level.
A theory of stationarity and asymptotic approach in dissipative systems
NASA Astrophysics Data System (ADS)
Rubel, Michael Thomas
2007-05-01
The approximate dynamics of many physical phenomena, including turbulence, can be represented by dissipative systems of ordinary differential equations. One often turns to numerical integration to solve them. There is an incompatibility, however, between the answers it can produce (i.e., specific solution trajectories) and the questions one might wish to ask (e.g., what behavior would be typical in the laboratory?) To determine its outcome, numerical integration requires more detailed initial conditions than a laboratory could normally provide. In place of initial conditions, experiments stipulate how tests should be carried out: only under statistically stationary conditions, for example, or only during asymptotic approach to a final state. Stipulations such as these, rather than initial conditions, are what determine outcomes in the laboratory.This theoretical study examines whether the points of view can be reconciled: What is the relationship between one's statistical stipulations for how an experiment should be carried out--stationarity or asymptotic approach--and the expected results? How might those results be determined without invoking initial conditions explicitly?To answer these questions, stationarity and asymptotic approach conditions are analyzed in detail. Each condition is treated as a statistical constraint on the system--a restriction on the probability density of states that might be occupied when measurements take place. For stationarity, this reasoning leads to a singular, invariant probability density which is already familiar from dynamical systems theory. For asymptotic approach, it leads to a new, more regular probability density field. 
A conjecture regarding what appears to be a limit relationship between the two densities is presented. By making use of the new probability densities, one can derive output statistics directly, avoiding the need to create or manipulate initial data, and thereby avoiding the conceptual incompatibility mentioned above. This approach also provides a clean way to derive reduced-order models, complete with local and global error estimates, as well as a way to compare existing reduced-order models objectively. The new approach is explored in the context of five separate test problems: a trivial one-dimensional linear system, a damped unforced linear oscillator in two dimensions, the isothermal Rayleigh-Plesset equation, Lorenz's equations, and the Stokes limit of Burgers' equation in one space dimension. In each case, various output statistics are deduced without recourse to initial conditions. Further, reduced-order models are constructed for asymptotic approach of the damped unforced linear oscillator, the isothermal Rayleigh-Plesset system, and Lorenz's equations, and for stationarity of Lorenz's equations.
Propensity, Probability, and Quantum Theory
NASA Astrophysics Data System (ADS)
Ballentine, Leslie E.
2016-08-01
Quantum mechanics and probability theory share one peculiarity. Both have well-established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.
Individual-tree probability of survival model for the Northeastern United States
Richard M. Teck; Donald E. Hilt
1990-01-01
Describes a distance-independent individual-tree probability of survival model for the Northeastern United States. Survival is predicted using a six-parameter logistic function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for variability in annual survival due to species, tree size, site quality, and the tree...
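The logistic form described in this record can be sketched as follows. The abstract specifies a six-parameter logistic function with species-specific coefficients but not the exact covariates, so the predictors and coefficient values below are illustrative assumptions, not the published model:

```python
import math

def annual_survival_prob(dbh, site_index, basal_area, b):
    """Hypothetical six-parameter logistic survival model,
    P = 1 / (1 + exp(-z)). The covariate form and the coefficient
    vector b = (b0..b5) are illustrative; the abstract does not
    give the exact specification."""
    z = (b[0] + b[1] * dbh + b[2] * dbh ** 2
         + b[3] * site_index + b[4] * basal_area
         + b[5] * dbh / basal_area)
    return 1.0 / (1.0 + math.exp(-z))

# Made-up coefficients for a single species group
p = annual_survival_prob(dbh=25.0, site_index=60.0, basal_area=30.0,
                         b=(2.0, 0.05, -0.0005, 0.01, -0.02, 0.1))
```

The logistic link guarantees a probability in (0, 1) regardless of coefficient values, which is why it is a common choice for survival models of this kind.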
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet propagation calculations are reported for the isotopic reactions HD + CH3 and D2 + CH3, in six degrees of freedom and for zero total angular momentum. Initial-state-selected reaction probabilities for different initial rotational-vibrational states are presented. This study shows that excitation of HD (D2) enhances reactivity, whereas excitation of the CH3 umbrella mode has the opposite effect; this is consistent with the H2 + CH3 reaction. Comparison of the three isotopic reactions also shows isotopic effects in the initial-state-selected reaction probabilities. Cumulative reaction probabilities (CRPs) are obtained by summing over the initial-state-selected reaction probabilities. An energy-shift approximation, accounting for the contribution of the degrees of freedom missing from the six-dimensional calculation, is employed to obtain approximate full-dimensional CRPs. A rate-constant comparison shows that H2 + CH3 is the most reactive, followed by HD + CH3, with D2 + CH3 the least reactive.
Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, J.; Bowman, J.M.
1996-05-01
The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground-state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground-state adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.
Average fidelity between random quantum states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5
2005-03-01
We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
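The Hilbert-Schmidt ensemble mentioned in this record can be sampled directly as ρ = GG†/Tr(GG†) with G a complex Ginibre matrix, and the Uhlmann fidelity F(ρ,σ) = (Tr √(√ρ σ √ρ))² then averaged by Monte Carlo. A minimal sketch; the dimension and sample count are illustrative, and the paper derives the corresponding means analytically rather than by simulation:

```python
import numpy as np
from scipy.linalg import sqrtm

def random_hs_state(n, rng):
    """Random density matrix w.r.t. the Hilbert-Schmidt measure:
    rho = G G^dag / Tr(G G^dag), G a complex Ginibre matrix."""
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

rng = np.random.default_rng(0)
samples = [fidelity(random_hs_state(2, rng), random_hs_state(2, rng))
           for _ in range(200)]
mean_f = float(np.mean(samples))
```

Swapping `random_hs_state` for a sampler of another measure (Bures, induced, or pure states) changes the ensemble while the fidelity computation stays the same.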
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mumpower, J.L.
There are strong structural similarities between risks from technological hazards and big-purse state lottery games. Risks from technological hazards are often described as low-probability, high-consequence negative events. State lotteries could be equally well characterized as low-probability, high-consequence positive events. Typical communications about state lotteries provide a virtual strategic textbook for opponents of risky technologies. The same techniques can be used to sell lottery tickets or sell opposition to risky technologies. Eight basic principles are enumerated.
The Role of Attention in Conscious Recollection
De Brigard, Felipe
2012-01-01
Most research on the relationship between attention and consciousness has been limited to perception. However, perceptions are not the only kinds of mental contents of which we can be conscious. An important set of conscious states that has not received proper treatment within this discussion is that of memories. This paper reviews compelling evidence indicating that attention may be necessary, but probably not sufficient, for conscious recollection. However, it is argued that unlike the case of conscious perception, the kind of attention required during recollection is internal, as opposed to external, attention. As such, the surveyed empirical evidence is interpreted as suggesting that internal attention is necessary, but probably not sufficient, for conscious recollection. The paper begins by justifying the need for clear distinctions among different kinds of attention, and then emphasizes the difference between internal and external attention. Next, evidence from behavioral, neuropsychological, and neuroimaging studies suggesting that internal attention is required for the successful retrieval of memorial contents is reviewed. In turn, it is argued that internal attention during recollection is what makes us conscious of the contents of retrieved memories; further evidence in support of this claim is also provided. Finally, it is suggested that internal attention is probably not sufficient for conscious recollection. Open questions and possible avenues for future research are also mentioned. PMID:22363305
A smooth mixture of Tobits model for healthcare expenditure.
Keane, Michael; Stavrunova, Olena
2011-09-01
This paper develops a smooth mixture of Tobits (SMTobit) model for healthcare expenditure. The model is a generalization of the smoothly mixing regressions framework of Geweke and Keane (J Econometrics 2007; 138: 257-290) to the case of a Tobit-type limited dependent variable. A Markov chain Monte Carlo algorithm with data augmentation is developed to obtain the posterior distribution of model parameters. The model is applied to the US Medicare Current Beneficiary Survey data on total medical expenditure. The results suggest that the model can capture the overall shape of the expenditure distribution very well, and also provide a good fit to a number of characteristics of the conditional (on covariates) distribution of expenditure, such as the conditional mean, variance, and probability of extreme outcomes, as well as the 50th, 90th, and 95th percentiles. We find that healthier individuals face an expenditure distribution with lower mean, variance, and probability of extreme outcomes, compared with their counterparts in a worse state of health. Males have an expenditure distribution with higher mean, variance, and probability of an extreme outcome, compared with their female counterparts. The results also suggest that heart and cardiovascular diseases affect the expenditure of males more than that of females. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Perrin, Jérôme; Takeda, Yoshihiko; Hirano, Naoto; Takeuchi, Yoshiaki; Matsuda, Akihisa
1989-03-01
The deposition rate of hydrogenated amorphous silicon films in SiH4 glow discharge is drastically enhanced upon addition of B2H6 when the gas-phase concentration exceeds 10^-4. This cannot be attributed to gas-phase reactions and must be interpreted as an increase of the sticking probability of the dominant SiH3 radical. However, the total surface loss probability (β) of SiH3, which includes both sticking (s) and recombination (γ), increases only above a 10^-2 B2H6 concentration, which reveals that between 10^-4 and 10^-2 the ratio s/β increases. A precursor-state model is proposed in which SiH3 first physisorbs on the H-covered surface and migrates until it recombines or chemisorbs on a free dangling-bond site. At a typical deposition temperature of 200°C, the only mechanism creating dangling bonds in the absence of B2H6 is precisely the recombination of SiH3 as SiH4 by H abstraction, which limits the sticking probability to a fraction of β. This restriction is overcome with the help of hydroboron radicals, presumably BH3, which catalyze H2 desorption.
Fixation probabilities of evolutionary coordination games on two coupled populations
NASA Astrophysics Data System (ADS)
Zhang, Liye; Ying, Limin; Zhou, Jie; Guan, Shuguang; Zou, Yong
2016-09-01
Evolutionary forces resulting from competition between different populations are common and change the evolutionary behavior of a single population. In an isolated population playing a coordination game with two strategies (e.g., s1 and s2), previous studies focused on determining the fixation probability that the system becomes occupied by only one strategy (s1), and the expected fixation times, given an initial mixture of the two strategies. In this work, we propose a model of two interdependent populations, disclosing the effects of the interaction strength on fixation probabilities. In the well-mixed limit, a detailed linear stability analysis is performed, which allows us to find and classify the different equilibria, yielding a clear picture of the bifurcation patterns in phase space. We demonstrate that the interactions between populations crucially alter the dynamical behavior. More specifically, if the coupling strength is larger than some threshold value, the critical initial density of strategy s1 required for its fixation is significantly raised; instead, the two populations evolve to the opposite all-s2 state, in favor of the red queen hypothesis. We delineate the extinction time of strategy s1 explicitly, which takes an exponential form. These results are validated by systematic numerical simulations.
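The single-population baseline that this record builds on can be sketched with a standard Moran birth-death process for a 2x2 coordination game. The coupled two-population model and its interaction strength are beyond this sketch; the payoff values, selection intensity, and population size below are illustrative assumptions:

```python
import random

def fixation_probability(N, i0, a=2.0, b=0.0, c=0.0, d=1.0,
                         w=0.5, trials=500, seed=1):
    """Monte Carlo estimate of the probability that strategy s1 takes
    over a single well-mixed population playing a 2x2 coordination game
    (payoff a: s1 vs s1, b: s1 vs s2, c: s2 vs s1, d: s2 vs s2),
    under a Moran process with selection intensity w."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = i0  # current number of s1 players
        while 0 < i < N:
            # average payoff against a random co-player (excluding self)
            pi1 = (a * (i - 1) + b * (N - i)) / (N - 1)
            pi2 = (c * i + d * (N - i - 1)) / (N - 1)
            f1 = 1.0 - w + w * pi1
            f2 = 1.0 - w + w * pi2
            # birth chosen proportionally to fitness, death uniformly
            birth_is_s1 = rng.random() < i * f1 / (i * f1 + (N - i) * f2)
            death_is_s1 = rng.random() < i / N
            if birth_is_s1 and not death_is_s1:
                i += 1
            elif death_is_s1 and not birth_is_s1:
                i -= 1
        fixed += (i == N)
    return fixed / trials

# Starting above the unstable mixed equilibrium, s1 usually fixates
p_fix = fixation_probability(N=20, i0=15)
```

For this coordination game the unstable interior equilibrium sits at a fraction (d - b)/(a - b - c + d) = 1/3 of s1 players; initial densities above it favor fixation of s1, which is the critical-density effect the paper shows the coupling strength can shift.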
Till Porn Do Us Part? A Longitudinal Examination of Pornography Use and Divorce.
Perry, Samuel L; Schleifer, Cyrus
2018-01-01
As pornography use becomes more commonplace in the United States, and increasingly so among younger cohorts, a growing literature is considering its potential connection to key social and cultural institutions. The current study examined the relationship between pornography use and one such institution: marriage. We drew on three-wave longitudinal data from 2006 to 2014 General Social Survey panel studies to determine whether married Americans' pornography use predicted their likelihood of divorce over time and under what social conditions. We employed a doubly robust strategy that combines entropy balancing with logistic regression models. We found that the probability of divorce roughly doubled for married Americans who began pornography use between survey waves (N = 2,120; odds ratio = 2.19), and that this relationship held for both women and men. Conversely, discontinuing pornography use between survey waves was associated with a lower probability of divorce, but only for women. Additional analyses also showed that the association between beginning pornography use and the probability of divorce was particularly strong among younger Americans, those who were less religious, and those who reported greater initial marital happiness. We conclude by discussing data limitations, considering potential intervening mechanisms and the possibility of reverse causation, and outlining implications for future research.
A mathematical model for evolution and SETI.
Maccone, Claudio
2011-12-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
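The CLT argument in this abstract is easy to check numerically: the log of a product of independent positive factors is a sum of independent terms, so it tends to a Gaussian, making the product itself approximately lognormal. A minimal sketch; the uniform factor distribution and sample sizes are arbitrary illustrations, not taken from the paper:

```python
import numpy as np

# Product of many independent positive random variables -> lognormal:
# log(product) is a sum of independent terms, so the CLT applies in
# log space. Uniform(0.5, 1.5) factors are an arbitrary illustration.
rng = np.random.default_rng(42)
n_factors, n_samples = 50, 100_000
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
products = factors.prod(axis=1)

log_p = np.log(products)
# Near-zero skewness of log(product) indicates approximate lognormality
skew = float(((log_p - log_p.mean()) ** 3).mean() / log_p.std() ** 3)
```

Any other positive factor distributions with finite log-variance would give the same qualitative result, which is what lets the paper leave the individual Drake-equation factor distributions unspecified.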
Modelling Evolution and SETI Mathematically
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2012-05-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions constrained between the time axis and the exponential growth curve. Finally, since each lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
A Mathematical Model for Evolution and SETI
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2011-12-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor fl in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation in which the factor fl is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, which states that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
Optimizing one-shot learning with binary synapses.
Romani, Sandro; Amit, Daniel J; Amit, Yali
2008-08-01
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
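The fast- versus slow-learning contrast described in this record can be illustrated with a toy stochastic binary-synapse model in the spirit of Amit and Fusi. The network sizes, coding level, and the particular depression rule below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_syn, n_patterns, coding = 5000, 200, 0.1

def familiarity_signal(q):
    """Mean synaptic weight read out on the oldest and newest of a
    sequence of sparse binary patterns stored with stochastic binary
    synapses: synapses active in a pattern potentiate (0 -> 1) with
    probability q; inactive ones depress with a smaller probability
    (the balancing rule here is chosen for illustration)."""
    w = np.zeros(n_syn)
    patterns = rng.random((n_patterns, n_syn)) < coding
    for p in patterns:
        w[p & (rng.random(n_syn) < q)] = 1.0            # potentiation
        w[~p & (rng.random(n_syn) < q * coding)] = 0.0  # depression
    first, last = patterns[0], patterns[-1]
    return w[first].mean(), w[last].mean()

old_slow, recent_slow = familiarity_signal(q=0.05)  # slow learning
old_fast, recent_fast = familiarity_signal(q=1.0)   # fast learning
```

With q near 1 the most recent pattern leaves a strong trace while older ones are quickly overwritten toward the background level; with q much less than 1 each trace is weaker but decays more slowly, the palimpsest trade-off the abstract analyzes.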
The present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Pope, S. B.
1992-01-01
The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and future direction of the various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to affirm that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the joint development of the turbulence model, chemical kinetics, and numerical method. Turbulent combustion models typically used in industry include the k-epsilon turbulence model, equilibrium/mixing-limited combustion, and finite-volume codes.
Design rules for quasi-linear nonlinear optical structures
NASA Astrophysics Data System (ADS)
Lytel, Richard; Mossman, Sean M.; Kuzyk, Mark G.
2015-09-01
The maximization of the intrinsic optical nonlinearities of quantum structures for ultrafast applications requires a spectrum scaling as the square of the energy eigenstate number or faster. This is a necessary condition for an intrinsic response approaching the fundamental limits. A second condition is a design generating eigenstates whose ground and lowest excited state probability densities are spatially separated to produce large differences in dipole moments while maintaining a reasonable spatial overlap to produce large off-diagonal transition moments. A structure whose design meets both conditions will necessarily have large first or second hyperpolarizabilities. These two conditions are fundamental heuristics for the design of any nonlinear optical structure.
The nuclear size and mass effects on muonic hydrogen-like atoms embedded in Debye plasma
NASA Astrophysics Data System (ADS)
Poszwa, A.; Bahar, M. K.; Soylu, A.
2016-10-01
Effects of finite nuclear size and finite nuclear mass are investigated for muonic atoms and muonic ions embedded in the Debye plasma. Both nuclear charge radii and nuclear masses are taken into account with experimentally determined values. In particular, isotope shifts of bound-state energies, radial probability densities, transition energies, and binding energies for several atoms are studied as functions of Debye length. A theoretical model based on semianalytical calculations, the Sturmian expansion method, and a perturbative approach has been constructed in the nonrelativistic framework. For some limiting cases, comparison with the most accurate previous literature results has been made.
Abazov, Victor Mukhamedovich
2011-10-11
We present a search for the pair production of first-generation scalar leptoquarks (LQ) in data corresponding to an integrated luminosity of 5.4 fb^-1 collected with the D0 detector at the Fermilab Tevatron Collider in pp̄ collisions at √s = 1.96 TeV. In the channel LQLQ → eqνq, where q denotes a u or d quark, no significant excess of data over background is observed, and we set a 95% C.L. lower limit of 326 GeV on the leptoquark mass, assuming equal probabilities for leptoquark decays to eq and νq.
Sexual differentiation in the distribution potential of northern jaguars (Panthera onca)
Boydston, Erin E.; Lopez Gonzalez, Carlos A.
2005-01-01
We estimated the potential geographic distribution of jaguars in the southwestern United States and northwestern Mexico by modeling the jaguar ecological niche from occurrence records. We modeled the distributions of males and females separately, assuming that records of females probably represented established home ranges while male records likely included dispersal movements. The predicted distribution for males was larger than that for females. Eastern Sonora appeared capable of supporting male and female jaguars, with potential range expansion into southeastern Arizona. New Mexico and Chihuahua contained environmental characteristics primarily limited to the male niche and thus may be areas into which males occasionally disperse.
Coulomb Impurity Problem of Graphene in Strong Coupling Regime in Magnetic Fields.
Kim, S C; Yang, S-R Eric
2015-10-01
We investigate the Coulomb impurity problem of graphene in the strong-coupling limit in the presence of magnetic fields. When the strength of the Coulomb potential is sufficiently strong, the electron in the lowest-energy bound state of the n = 0 Landau level may fall to the center of the potential. To prevent this spurious effect, the Coulomb potential must be regularized. The scaling function for the inverse probability density of this state at the center of the impurity potential is computed in the strong-coupling regime. The dependence of the computed scaling function on the regularization parameter changes significantly as the strong-coupling regime is approached.
Chen, Chunyi; Yang, Huamin
2016-08-22
The changes in the radial content of orbital-angular-momentum (OAM) photonic states described by Laguerre-Gaussian (LG) modes with a radial index of zero, suffering from turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend only on two nondimensional parameters: the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes radially-adjacent-mode mixing, in addition to azimuthally-adjacent-mode mixing, in the propagated photonic states; the former is weaker than the latter. For a given Fresnel ratio, the probability that a photon is found in the zero-index radial mode of the intended OAM state behaves very similarly as a function of relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in this probability as the relative turbulence strength increases. A photon can be found in the various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on which digit is sent.
Breininger, David R; Breininger, Robert D; Hall, Carlton R
2017-02-01
Seagrasses are the foundation of many coastal ecosystems and are in global decline because of anthropogenic impacts. For the Indian River Lagoon (Florida, U.S.A.), we developed competing multistate statistical models to quantify how environmental factors (surrounding land use, water depth, and time [year]) influenced the variability of seagrass state dynamics from 2003 to 2014 while accounting for time-specific detection probabilities that quantified our ability to determine seagrass state at particular locations and times. We classified seagrass states (presence or absence) at 764 points with geographic information system maps for years when seagrass maps were available and with aerial photographs when seagrass maps were not available. We used 4 categories (all conservation, mostly conservation, mostly urban, urban) to describe surrounding land use within sections of lagoonal waters, usually demarcated by land features that constricted these waters. The best models predicted that surrounding land use, depth, and year would affect transition and detection probabilities. Sections of the lagoon bordered by urban areas had the least stable seagrass beds and lowest detection probabilities, especially after a catastrophic seagrass die-off linked to an algal bloom. Sections of the lagoon bordered by conservation lands had the most stable seagrass beds, which supports watershed conservation efforts. Our results show that a multistate approach can empirically estimate state-transition probabilities as functions of environmental factors while accounting for state-dependent differences in seagrass detection probabilities as part of the overall statistical inference procedure. © 2016 Society for Conservation Biology.
Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli
2013-01-01
M/G/C/C state-dependent queuing networks model service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete-event simulation (DES) software. We designed an approach to overcome this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates where the simulation results fluctuate drastically across replications, causing the simulation and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation results tally with the analytical results, in both tabular and graphical forms, are documented and discussed together with scientific justifications.
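The analytical side of such a comparison can be sketched with the product-form solution commonly used for M/G/C/C state-dependent queues (the Yuhaski-Smith type formula). The capacity, arrival rate, and the linear speed-density relation below are illustrative assumptions, not values from the paper:

```python
from math import factorial

def mgcc_probs(lam, ET1, C, f):
    """Steady-state probabilities of an M/G/C/C state-dependent queue
    via the product-form solution (Yuhaski-Smith type):
      P(n) proportional to (lam*ET1)**n / (n! * f(1)*...*f(n)),
    for n = 0..C, where f(n) is the relative service rate with n
    entities present (f(1) = 1) and ET1 is the mean service time
    of a lone entity."""
    weights = []
    for n in range(C + 1):
        prod_f = 1.0
        for i in range(1, n + 1):
            prod_f *= f(i)
        weights.append((lam * ET1) ** n / (factorial(n) * prod_f))
    z = sum(weights)  # normalizing constant
    return [wt / z for wt in weights]

C = 20
walk_speed = lambda n: 1.0 - (n - 1) / C  # crowding slows service (illustrative)
probs = mgcc_probs(lam=3.0, ET1=1.0, C=C, f=walk_speed)
blocking = probs[C]                # probability an arrival is blocked
throughput = 3.0 * (1 - blocking)  # effective entry rate
```

Sweeping `lam` and recomputing `blocking` and `throughput` reproduces the kind of analytical curves against which the simulation results would be compared.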
Simple scheme for encoding and decoding a qubit in unknown state for various topological codes
Łodyga, Justyna; Mazurek, Paweł; Grudka, Andrzej; Horodecki, Michał
2015-01-01
We present a scheme for encoding and decoding an unknown state for CSS codes, based on syndrome measurements. We illustrate our method by means of the Kitaev toric code, the defected-lattice code, the topological subsystem code, and the 3D Haah code. The protocol is local whenever, in a given code, the crossings between the logical operators consist of next-neighbour pairs, which holds for the above codes. For the subsystem code we also present a scheme for the noisy case, where we allow for bit- and phase-flip errors on qubits as well as state preparation and syndrome measurement errors. A similar scheme can be built for two of the other codes. We show the fidelity of the protected qubit in the noisy scenario in the large-code-size limit, where p is the probability of error on a single qubit per time step. For the Haah code we provide a noiseless scheme, leaving the noisy case as an open problem. PMID:25754905
Inferring the post-merger gravitational wave emission from binary neutron star coalescences
NASA Astrophysics Data System (ADS)
Chatziioannou, Katerina; Clark, James Alexander; Bauswein, Andreas; Millhouse, Margaret; Littenberg, Tyson B.; Cornish, Neil
2017-12-01
We present a robust method to characterize the gravitational wave emission from the remnant of a neutron star coalescence. Our approach makes only minimal assumptions about the morphology of the signal and provides a full posterior probability distribution of the underlying waveform. We apply our method on simulated data from a network of advanced ground-based detectors and demonstrate the gravitational wave signal reconstruction. We study the reconstruction quality for different binary configurations and equations of state for the colliding neutron stars. We show how our method can be used to constrain the yet-uncertain equation of state of neutron star matter. The constraints on the equation of state we derive are complementary to measurements of the tidal deformation of the colliding neutron stars during the late inspiral phase. In the case of nondetection of a post-merger signal following a binary neutron star inspiral, we show that we can place upper limits on the energy emitted.
Robustness of multidimensional Brownian ratchets as directed transport mechanisms.
González-Candela, Ernesto; Romero-Rochín, Víctor; Del Río, Fernando
2011-08-07
Brownian ratchets have recently been considered as models to describe the ability of certain systems to locate very specific states in multidimensional configuration spaces. This directional process has in particular been proposed as an alternative explanation for the protein folding problem, in which the polypeptide is driven toward the native state by a multidimensional Brownian ratchet. Recognizing the relevance of robustness in biological systems, in this work we analyze that property of Brownian ratchets by pushing to their limits all the properties considered essential to produce directed transport. Based on the results presented here, we can state that Brownian ratchets are able to deliver current and locate funnel structures under a wide range of conditions. As a result, they represent a simple model that solves Levinthal's paradox with great robustness and flexibility, without requiring any ad hoc biased transition probability. The behavior of Brownian ratchets shown in this article considerably enhances the plausibility of the model for at least part of the structural mechanism behind the protein-folding process.
Spectral and Timing States in Black Hole Binaries
NASA Astrophysics Data System (ADS)
Wilms, J.
Results on the long-term variability of galactic black hole candidates are reviewed. I mainly present the results of a >2-year-long campaign with RXTE to monitor the canonical soft-state black hole candidates LMC X-1 and LMC X-3 using monthly observations. These observations are presented within the context of the RXTE-ASM long-term quasi-periodic variability on timescales of about 150 d. For LMC X-3, times of low ASM count rate are correlated with a significant hardening of the X-ray spectrum. The observation with the lowest flux during the whole monitoring campaign can be modeled with a simple γ=1.7 power law -- a hard-state spectrum. Since these spectral hardenings occur on the 150 d timescale, it is probable that they are associated with periodic changes in the accretion rate. Possible causes for this behavior are discussed, e.g. a wind-driven limit cycle or long-term variability of the donor star.
Optimal minimal measurements of mixed states
NASA Astrophysics Data System (ADS)
Vidal, G.; Latorre, J. I.; Pascual, P.; Tarrach, R.
1999-07-01
The optimal and minimal measuring strategy is obtained for a two-state system prepared in a mixed state with a probability given by any isotropic a priori distribution. We explicitly construct the specific optimal and minimal generalized measurements, which turn out to be independent of the a priori probability distribution, obtaining the best guesses for the unknown state as well as a closed expression for the maximal mean-average fidelity. We do this for up to three copies of the unknown state in a way that leads to the generalization to any number of copies, which we then present and prove.
Total Charge Movement per Channel
Sigg, Daniel; Bezanilla, Francisco
1997-01-01
One measure of the voltage dependence of ion channel conductance is the amount of gating charge that moves during activation and vice versa. The limiting slope method, introduced by Almers (Almers, W. 1978. Rev. Physiol. Biochem. Pharmacol. 82:96–190), exploits the relationship of charge movement and voltage sensitivity, yielding a lower limit to the range of single channel gating charge displacement. In practice, the technique is plagued by low experimental resolution due to the requirement that the logarithmic voltage sensitivity of activation be measured at very low probabilities of opening. In addition, the linear sequential models to which the original theory was restricted needed to be expanded to accommodate the complexity of mechanisms available for the activation of channels. In this communication, we refine the theory by developing a relationship between the mean activation charge displacement (a measure of the voltage sensitivity of activation) and the gating charge displacement (the integral of gating current). We demonstrate that recording the equilibrium gating charge displacement as an adjunct to the limiting slope technique greatly improves accuracy under conditions where the plots of mean activation charge displacement and gross gating charge displacement versus voltage can be superimposed. We explore this relationship for a wide variety of channel models, which include those having a continuous density of states, nonsequential activation pathways, and subconductance states. We introduce new criteria for the appropriate use of the limiting slope procedure and provide a practical example of the theory applied to low resolution simulation data. PMID:8997663
Last-position elimination-based learning automata.
Zhang, Junqi; Wang, Cheng; Zhou, MengChu
2014-12-01
An update scheme for the state probability vector of actions is critical for learning automata (LA). The most popular is the pursuit scheme, which pursues the estimated optimal action and penalizes the others. This paper proposes a reverse philosophy that leads to last-position elimination-based learning automata (LELA). The action graded last in terms of estimated performance is penalized by decreasing its state probability, and is eliminated when its state probability becomes zero. All active actions, that is, actions with nonzero state probability, equally share the penalized state probability from the last-position action at each iteration. The proposed LELA is characterized by a relaxed convergence condition for the optimal action, an accelerated step size of the state-probability update scheme for the estimated optimal action, and enriched sampling for the estimated nonoptimal actions. A proof of the ϵ-optimal property of the proposed algorithm is presented. Last-position elimination is a widespread philosophy in the real world, and simulations of well-known benchmark environments show that it is also helpful for the update scheme of the learning automaton. In the simulations, two versions of LELA, using different strategies for selecting the last action, are compared with the classical pursuit algorithms Discretized Pursuit Reward-Inaction (DP(RI)) and Discretized Generalized Pursuit Algorithm (DGPA). Simulation results show that the proposed schemes achieve significantly faster convergence and higher accuracy than the classical ones. In particular, the proposed schemes narrow the interval needed to find the best parameter for a specific environment compared with the classical pursuit algorithms, making parameter tuning easier to perform and saving considerable time in practical applications. Furthermore, the convergence curves and the corresponding variance-coefficient curves of the contenders are illustrated to characterize their essential differences and to verify the analysis results of the proposed algorithms.
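The last-position elimination update described in this abstract can be sketched in a few lines. This is a hypothetical minimal version: the step size delta, the fixed performance estimates, and the four-action setup are assumed for illustration and are not taken from the paper (in the real algorithm the estimates are updated from observed rewards).

```python
def lela_step(p, estimates, delta=0.01):
    """One LELA-style iteration: penalize the action graded last and share
    the freed probability equally among the remaining active actions."""
    active = [i for i, pi in enumerate(p) if pi > 0]
    if len(active) < 2:
        return p                          # a single surviving action keeps all mass
    last = min(active, key=lambda i: estimates[i])
    cut = min(delta, p[last])             # state probability cannot go below zero
    p[last] -= cut
    others = [i for i in active if i != last]
    for i in others:                      # equal sharing of the penalty
        p[i] += cut / len(others)
    return p

p = [0.25, 0.25, 0.25, 0.25]              # four actions, uniform start
estimates = [0.1, 0.9, 0.5, 0.3]          # illustrative fixed performance estimates
for _ in range(200):
    p = lela_step(p, estimates)
# the state probability concentrates on the best-estimated action (index 1)
```

Note how the scheme eliminates actions one at a time from the bottom, which is the "reverse philosophy" relative to pursuing the single estimated best action.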
Generating probabilistic Boolean networks from a prescribed transition probability matrix.
Ching, W-K; Chen, X; Tsing, N-K
2009-11-01
Probabilistic Boolean networks (PBNs) have received much attention in modeling genetic regulatory networks. A PBN can be regarded as a Markov chain process and is characterised by a transition probability matrix. In this study, the authors propose efficient algorithms for constructing a PBN when its transition probability matrix is given. The complexities of the algorithms are also analysed. This is an interesting inverse problem in network inference using steady-state data. The problem is important as most microarray data sets are assumed to be obtained from sampling the steady-state.
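As a reminder of the forward direction of this inverse problem (a generic sketch, not the authors' construction algorithm), the steady-state distribution implied by a given transition probability matrix can be obtained by power iteration; the example matrix is assumed for illustration:

```python
import numpy as np

def steady_state(P, iters=5000):
    """Stationary distribution pi satisfying pi @ P = pi, by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# two-state example chain (values assumed for illustration)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = steady_state(P)   # -> approximately [0.8, 0.2]
```

The inverse problem treated in the paper runs the other way: given (estimates of) such steady-state or transition data, construct a PBN consistent with them.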
The impossibility of probabilities
NASA Astrophysics Data System (ADS)
Zimmerman, Peter D.
2017-11-01
This paper discusses the problem of assigning probabilities to the likelihood of nuclear terrorism events, in particular examining the limitations of using Bayesian priors for this purpose. It suggests an alternate approach to analyzing the threat of nuclear terrorism.
Study of the regional air quality south of Mexico City (Morelos state).
Salcedo, D; Castro, T; Ruiz-Suárez, L G; García-Reynoso, A; Torres-Jardón, R; Torres-Jaramillo, A; Mar-Morales, B E; Salcido, A; Celada, A T; Carreón-Sierra, S; Martínez, A P; Fentanes-Arriaga, O A; Deustúa, E; Ramos-Villegas, R; Retama-Hernández, A; Saavedra, M I; Suárez-Lastra, M
2012-01-01
Results from the first study of the regional air quality in Morelos state (located south of Mexico City) are presented. Criteria pollutants concentrations were measured at several sites within Morelos in February and March of 2007 and 2009; meteorological data was also collected along the state for the same time periods; additionally, a coupled meteorology-chemistry model (Mesoscale Climate Chemistry Model, MCCM) was used to gain understanding on the atmospheric processes occurring in the region. In general, concentrations of almost all the monitored pollutants (O(3), NO(x), CO, SO(2), PM) remained below the Mexican air quality standards during the campaign; however, relatively high concentrations of ozone (8-hour average concentrations above the 60 ppb level several times during the campaigns, i.e. exceeding the World Health Organization and the European Union maximum levels) were observed even at sites with very low reported local emissions. In fact, there is evidence that a large percentage of Morelos vegetation was probably exposed to unhealthy ozone levels (estimated AOT40 levels above the 3 ppm h critical limit). The MCCM qualitatively reproduced ozone daily variations in the sites with an urban component; though it consistently overestimated the ozone concentration in all the sites in Morelos. This is probably because the lack of an updated and detailed emission inventory for the state. The main wind patterns in the region corresponded to the mountain-valley system (downslope flows at night and during the first hours of the day, and upslope flows in the afternoon). At times, Morelos was affected by emissions from surrounding states (Distrito Federal or Puebla). The results are indicative of an efficient transport of ozone and its precursors at a regional level. They also suggest that the state is divided in two atmospheric basins by the Sierras de Tepoztlán, Texcal and Monte Negro. Copyright © 2011 Elsevier B.V. All rights reserved.
Method and device for landing aircraft dependent on runway occupancy time
NASA Technical Reports Server (NTRS)
Ghalebsaz Jeddi, Babak (Inventor)
2012-01-01
A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
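The role of the probability distribution functions in setting a target separation can be illustrated with a small Monte Carlo estimate. All numbers and the Gaussian separation model here are assumed for illustration; they are not from the patent:

```python
import random

def go_around_fraction(target_sep, safe_limit, sigma, n=100_000, seed=0):
    """Fraction of approaches whose actual separation (modeled as Gaussian
    around the target) falls below the safe lower limit, triggering a
    go-around maneuver."""
    rng = random.Random(seed)
    below = sum(1 for _ in range(n) if rng.gauss(target_sep, sigma) < safe_limit)
    return below / n

# e.g. 4 nm target separation, 3 nm safe lower limit, 0.5 nm spread
frac = go_around_fraction(target_sep=4.0, safe_limit=3.0, sigma=0.5)
```

Raising the target separation lowers this fraction but also lowers the attempt-to-land rate, which is the trade-off the device balances.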
The condition of a finite Markov chain and perturbation bounds for the limiting probabilities
NASA Technical Reports Server (NTRS)
Meyer, C. D., Jr.
1979-01-01
Let T denote the transition matrix of an ergodic chain C and let A = I − T. Let E be a perturbation matrix such that T̃ = T − E is also the transition matrix of an ergodic chain C̃, and let w and w̃ denote the limiting probability (row) vectors for C and C̃. Inequalities bounding the relative error ‖w − w̃‖/‖w‖ by a very simple function of E and A are exhibited, and the inequality is the best one possible. This bound can be significant in the numerical determination of the limiting probabilities for an ergodic chain. In addition to the sharp bound for ‖w − w̃‖/‖w‖, an explicit expression for w̃ will be derived in which w̃ is given as a function of E, A, w and some other related terms.
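The quantities in this abstract are easy to reproduce numerically. Below is a sketch with an assumed two-state chain; the eigenvector route to the limiting vector is one standard method, not necessarily the paper's:

```python
import numpy as np

def limiting_vector(T):
    """Limiting probability row vector w with w @ T = w and sum(w) = 1."""
    vals, vecs = np.linalg.eig(T.T)
    k = np.argmin(np.abs(vals - 1.0))   # eigenvalue 1 of an ergodic chain
    w = np.real(vecs[:, k])
    return w / w.sum()

T = np.array([[0.7, 0.3],
              [0.2, 0.8]])
E = np.array([[0.05, -0.05],
              [0.00,  0.00]])           # rows of E sum to zero, so T - E is stochastic
w, w_tilde = limiting_vector(T), limiting_vector(T - E)
rel_err = np.linalg.norm(w - w_tilde, 1) / np.linalg.norm(w, 1)
```

The paper's contribution is a sharp a priori bound on `rel_err` in terms of E and A = I − T alone, without having to compute w̃.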
Invasion resistance arises in strongly interacting species-rich model competition communities.
Case, T J
1990-01-01
I assemble stable multispecies Lotka-Volterra competition communities that differ in resident species number and average strength (and variance) of species interactions. These are then invaded with randomly constructed invaders drawn from the same distribution as the residents. The invasion success rate and the fate of the residents are determined as a function of community- and species-level properties. I show that the probability of colonization success for an invader decreases with community size and the average strength of competition (alpha). Communities composed of many strongly interacting species limit the invasion possibilities of most similar species. These communities, even for a superior invading competitor, set up a sort of "activation barrier" that repels invaders when they invade at low numbers. This "priority effect" for residents is not assumed a priori in my description of the individual population dynamics of these species; rather, it emerges because species-rich and strongly interacting species sets have alternative stable states that tend to disfavor species at low densities. These models point to community-level rather than invader-level properties as the strongest determinant of differences in invasion success. The probability of extinction for a resident species increases with community size, and the probability of successful colonization by the invader decreases. Thus an equilibrium community size results wherein the probability of a resident species' extinction just balances the probability of an invader's addition. Given the distribution of alpha, it is now possible to predict the equilibrium species number. The results provide a logical framework for an island-biogeographic theory in which species turnover is low even in the face of persistent invasions, and for the protection of fragile native species from invading exotics. PMID:11607132
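The assembly-and-invasion experiment can be caricatured in a few lines. The parameter choices below (five residents, uniform alpha on [0, 0.6], r = K = 1) are assumed purely for illustration; the paper's communities and alpha distributions differ:

```python
import numpy as np

def lv_step(x, alpha, dt=0.01):
    """Euler step of Lotka-Volterra competition with r = K = 1 for all species."""
    return np.maximum(x + dt * x * (1.0 - alpha @ x), 0.0)

rng = np.random.default_rng(1)
n = 5
alpha = rng.uniform(0.0, 0.6, (n, n))   # interspecific competition coefficients
np.fill_diagonal(alpha, 1.0)            # intraspecific competition
x = np.full(n, 0.2)
for _ in range(50_000):                 # relax the residents to equilibrium
    x = lv_step(x, alpha)

# an invader drawn from the same distribution enters at low density; its
# per-capita growth rate against the resident equilibrium decides success
a_inv = rng.uniform(0.0, 0.6, n)
invasion_rate = 1.0 - a_inv @ x
succeeds = bool(invasion_rate > 0)
```

The "activation barrier" in the abstract corresponds to `invasion_rate` turning negative for rare invaders even when they would persist at high density.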
NASA Astrophysics Data System (ADS)
Twarakavi, N. C.; Kaluarachchi, J. J.
2004-12-01
Arsenic has long been known to be toxic to human health, and drinking water contaminated with unsafe levels of arsenic may cause cancer. The toxicity of arsenic is reflected in an MCLG of zero and a low MCL of 10 µg/L, which has been a subject of constant scrutiny. The US Environmental Protection Agency (US EPA), based on the recommendations of the National Academy of Sciences, revised the MCL from its 1974 value of 50 µg/L to 10 µg/L. The decision was based on a national-level analysis of arsenic concentration data collected by the National Water-Quality Assessment (NAWQA) program. Another factor that makes arsenic in drinking water a major issue is its widespread occurrence and variety of sources: arsenic occurs naturally in mineral deposits and is also contributed by anthropogenic sources. A methodology using the ordinal logistic regression (LR) method is proposed to predict the probability of occurrence of arsenic in shallow ground waters of the conterminous United States (CONUS) subject to a set of influencing variables. The analysis considered maximum contaminant level (MCL) options of 3, 5, 10, 20, and 50 µg/L as threshold values to estimate the probabilities of arsenic occurrence in ranges defined by a given MCL and a detection limit of 1 µg/L. The fit between the observed and predicted probability of occurrence was around 83% for all MCL options. The estimated probabilities were used to estimate the median background concentration of arsenic for different aquifer types in the CONUS. The shallow ground water of the western US is more vulnerable to arsenic contamination than that of the eastern US; Arizona, Utah, Nevada, and California in particular are hotspots for arsenic contamination. The model results were extended to estimate the health risks and costs posed by arsenic occurrence in the ground water of the United States. The risk assessment showed that counties in southern California, Arizona, Florida, Washington State and a few others scattered throughout the CONUS face a high risk from arsenic exposure through untreated ground water consumption. The risk analysis also showed the trade-offs in using different risk estimates as decision-making tools. A simple cost-effectiveness analysis was performed to understand the household costs of MCL compliance when using arsenic-contaminated ground water. The results showed that the current MCL of 10 µg/L is a good compromise based on existing treatment technologies.
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research, and proposes potential alternatives in the form of randomization tests. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the reported P-values assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in these circumstances a judgment has been made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
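A randomization (permutation) test of the kind proposed as an alternative can be written directly. This is a minimal sketch with made-up data; the test statistic here is the absolute difference in group means, and the P-value is the fraction of label shufflings giving a statistic at least as extreme as the observed one:

```python
import random

def randomization_test(a, b, n_perm=10_000, seed=0):
    """Two-sample randomization test on the absolute difference in means."""
    rng = random.Random(seed)
    pooled = a + b
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random reassignment of labels
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

p = randomization_test([12, 14, 15, 16], [8, 9, 10, 11])
```

Unlike a classical t-test, this P-value is justified by the random reassignment itself, not by an assumption that the sample was randomly drawn from a population, which is the point of the proposal.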
NASA Astrophysics Data System (ADS)
Nopparuchikun, Adison; Promros, Nathaporn; Sittimart, Phongsaphak; Onsee, Peeradon; Duangrawa, Asanlaya; Teakchaicum, Sakmongkon; Nogami, Tomohiro; Yoshitake, Tsuyoshi
2017-09-01
By utilizing pulsed laser deposition (PLD), heterojunctions composed of n-type nanocrystalline (NC) FeSi2 thin films and p-type Si substrates were fabricated at room temperature in this study. Both dark and illuminated current density-voltage (J-V) curves for the heterojunctions were measured and analyzed at room temperature. The heterojunctions demonstrated a large reverse leakage current as well as a weak near-infrared light response. Based on the analysis of the dark forward J-V curves, for V ⩽ 0.2 V a carrier recombination process governed transport at the heterojunction interface; for V > 0.2 V, the probable mechanism of carrier transport was a space-charge-limited current process. Measurement and analysis of the capacitance-voltage-frequency (C-V-f) and conductance-voltage-frequency (G-V-f) curves were performed in the applied frequency (f) range of 50 kHz-2 MHz at room temperature. From the C-V-f and G-V-f curves, the density of interface states (Nss) for the heterojunctions was computed using the Hill-Coleman method. The Nss values were 9.19 × 10¹² eV⁻¹ cm⁻² at 2 MHz and 3.15 × 10¹⁴ eV⁻¹ cm⁻² at 50 kHz, which proved the existence of interface states at the heterojunction interface. These interface states are the probable cause of the degraded electrical performance of the heterojunctions. Invited talk at the 5th Thailand International Nanotechnology Conference (Nano Thailand-2016), 27-29 November 2016, Nakhon Ratchasima, Thailand.
Application of Non-Equilibrium Thermo Field Dynamics to quantum teleportation under the environment
NASA Astrophysics Data System (ADS)
Kitajima, S.; Arimitsu, T.; Obinata, M.; Yoshida, K.
2014-06-01
Quantum teleportation for continuous variables is treated by Non-Equilibrium Thermo Field Dynamics (NETFD), a canonical operator formalism for dissipative quantum systems, in order to study the effect of imperfect quantum entanglement on quantum communication. We used an entangled state constructed from two squeezed states. The entangled state is imperfect for two reasons: one is the finiteness of the squeezing parameter r, and the other comes from the process by which the squeezed states are created under dissipative interaction with the environment. We derive expressions for the one-shot fidelity (OSF), the probability density function (PDF) associated with the OSF, and the (averaged) fidelity by making full use of the algebraic manipulation of operator algebra within NETFD. We found that the OSF and PDF are given by Gaussian forms peaked at the original information α to be teleported, and that for r≫1 the variances of these quantities blow up to infinity for κ/χ≤1, while they approach finite values for κ/χ>1. Here, χ represents the intensity of a degenerate parametric process, and κ the relaxation rate due to the interaction with the environment. The blow-up of the variances of the OSF and PDF guarantees higher security against eavesdropping. As the variances blow up, the height of the PDF becomes small because of the normalization of probability, while the height of the OSF approaches 1, indicating a higher performance of the quantum teleportation. We also found that in the limit κ/χ≫1 the variances of both the OSF and PDF for any value of r (>0) reduce to 1, which is the same value as in the case r=0, i.e., no entanglement.
A matter of tradeoffs: reintroduction as a multiple objective decision
Converse, Sarah J.; Moore, Clinton T.; Folk, Martin J.; Runge, Michael C.
2013-01-01
Decision making in guidance of reintroduction efforts is made challenging by the substantial scientific uncertainty typically involved. However, a less recognized challenge is that the management objectives are often numerous and complex. Decision makers managing reintroduction efforts are often concerned with more than just how to maximize the probability of reintroduction success from a population perspective. Decision makers are also weighing other concerns such as budget limitations, public support and/or opposition, impacts on the ecosystem, and the need to consider not just a single reintroduction effort, but conservation of the entire species. Multiple objective decision analysis is a powerful tool for formal analysis of such complex decisions. We demonstrate the use of multiple objective decision analysis in the case of the Florida non-migratory whooping crane reintroduction effort. In this case, the State of Florida was considering whether to resume releases of captive-reared crane chicks into the non-migratory whooping crane population in that state. Management objectives under consideration included maximizing the probability of successful population establishment, minimizing costs, maximizing public relations benefits, maximizing the number of birds available for alternative reintroduction efforts, and maximizing learning about the demographic patterns of reintroduced whooping cranes. The State of Florida engaged in a collaborative process with their management partners, first, to evaluate and characterize important uncertainties about system behavior, and next, to formally evaluate the tradeoffs between objectives using the Simple Multi-Attribute Rating Technique (SMART). The recommendation resulting from this process, to continue releases of cranes at a moderate intensity, was adopted by the State of Florida in late 2008. 
Although continued releases did not receive support from the International Whooping Crane Recovery Team, this approach does provide a template for the formal, transparent consideration of multiple, potentially competing, objectives in reintroduction decision making.
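The SMART tradeoff evaluation mentioned above reduces to a weighted sum of normalized objective scores. The weights, alternatives and scores below are invented solely to illustrate the arithmetic; they are not the values elicited by the State of Florida:

```python
# Illustrative SMART-style calculation (all numbers assumed, not from the study)
weights = {"establishment": 0.4, "cost": 0.2, "public": 0.15,
           "other_efforts": 0.15, "learning": 0.1}

# normalized scores (0 = worst, 1 = best) for each release-intensity alternative
scores = {
    "no_releases":        {"establishment": 0.1, "cost": 1.0, "public": 0.3,
                           "other_efforts": 1.0, "learning": 0.0},
    "moderate_releases":  {"establishment": 0.6, "cost": 0.6, "public": 0.8,
                           "other_efforts": 0.5, "learning": 0.8},
    "intensive_releases": {"establishment": 0.9, "cost": 0.1, "public": 0.9,
                           "other_efforts": 0.0, "learning": 1.0},
}

totals = {alt: sum(weights[k] * s[k] for k in weights)
          for alt, s in scores.items()}
best = max(totals, key=totals.get)   # the highest weighted score wins
```

The value of the formalism is less the arithmetic than the transparency: weights and scores are elicited explicitly, so the tradeoffs between competing objectives are open to inspection.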
Information flow in an atmospheric model and data assimilation
NASA Astrophysics Data System (ADS)
Yoon, Young-noh
2011-12-01
Weather forecasting consists of two processes, model integration and analysis (data assimilation). During the model integration, the state estimate produced by the analysis evolves to the next cycle time according to the atmospheric model to become the background estimate. The analysis then produces a new state estimate by combining the background state estimate with new observations, and the cycle repeats. In an ensemble Kalman filter, the probability distribution of the state estimate is represented by an ensemble of sample states, and the covariance matrix is calculated using the ensemble of sample states. We perform numerical experiments on toy atmospheric models introduced by Lorenz in 2005 to study the information flow in an atmospheric model in conjunction with ensemble Kalman filtering for data assimilation. This dissertation consists of two parts. The first part of this dissertation is about the propagation of information and the use of localization in ensemble Kalman filtering. If we can perform data assimilation locally by considering the observations and the state variables only near each grid point, then we can reduce the number of ensemble members necessary to cover the probability distribution of the state estimate, reducing the computational cost for the data assimilation and the model integration. Several localized versions of the ensemble Kalman filter have been proposed. Although tests applying such schemes have proven them to be extremely promising, a full basic understanding of the rationale and limitations of localization is currently lacking. We address these issues and elucidate the role played by chaotic wave dynamics in the propagation of information and the resulting impact on forecasts. The second part of this dissertation is about ensemble regional data assimilation using joint states. 
Assuming that we have a global model and a regional model of higher accuracy defined in a subregion inside the global region, we propose a data assimilation scheme that produces the analyses for the global and the regional model simultaneously, considering forecast information from both models. We show that our new data assimilation scheme produces better results both in the subregion and the global region than the data assimilation scheme that produces the analyses for the global and the regional model separately.
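The analysis step of a stochastic ensemble Kalman filter, in which the sample covariance of the ensemble supplies the Kalman gain, can be sketched as follows. This is a generic textbook version with assumed dimensions and a scalar observation of the first state variable; it is not the dissertation's exact scheme:

```python
import numpy as np

def enkf_update(X, y, obs_var, rng):
    """Stochastic EnKF analysis step.

    X: (n_state, n_ens) ensemble of background states
    y: scalar observation of state variable 0, with variance obs_var
    """
    H = np.zeros(X.shape[0])
    H[0] = 1.0                                    # linear observation operator
    Hx = H @ X                                    # ensemble in observation space
    P_hh = np.var(Hx, ddof=1)                     # sample obs-space variance
    P_xh = (X - X.mean(1, keepdims=True)) @ (Hx - Hx.mean()) / (X.shape[1] - 1)
    K = P_xh / (P_hh + obs_var)                   # Kalman gain, shape (n_state,)
    y_pert = y + rng.normal(0.0, np.sqrt(obs_var), X.shape[1])  # perturbed obs
    return X + np.outer(K, y_pert - Hx)           # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (3, 200))    # 3 state variables, 200 members
Xa = enkf_update(X, y=2.0, obs_var=0.5, rng=rng)
```

Localization, as discussed in the dissertation, would taper the entries of `P_xh` with distance from the observed variable so that a small ensemble does not produce spurious long-range correlations.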
NASA Astrophysics Data System (ADS)
Nandipati, K. R.; Kanakati, Arun Kumar; Singh, H.; Lan, Z.; Mahapatra, S.
2017-09-01
Optimal initiation of the quantum dynamics of N-H photodissociation of pyrrole on the S0-¹πσ*(¹A₂) coupled electronic states by UV-laser pulses, in an effort to guide the subsequent dynamics to the dissociation limits, is studied theoretically. Specifically, the task of designing optimal laser pulses that act on initial vibrational states of the system for effective UV photodissociation is considered by employing optimal control theory. The associated control mechanism(s) for the initial-state-dependent photodissociation dynamics of pyrrole in the presence of control pulses is examined and discussed in detail. The initial conditions implicitly determine the variation in the dissociation probabilities for the two channels upon interaction with the field. The optimal pulse corresponds to an objective fixed as the maximization of overall reactive flux, subject to constraints of reasonable fluence and the quantum dynamics. The simple optimal pulses obtained using genetic-algorithm-based optimization merit experimental implementation, given the experimental relevance of πσ*-photochemistry in recent times.
NASA Astrophysics Data System (ADS)
Jana, Dipankar; Porwal, S.; Sharma, T. K.
2017-12-01
The spatial and spectral origins of deep-level defects in molecular-beam-epitaxy-grown AlGaN/GaN heterostructures are investigated using surface photovoltage spectroscopy (SPS) and pump-probe SPS techniques. A deep trap center ∼1 eV above the valence band is observed in the SPS measurements, which is correlated with the yellow luminescence feature of GaN. Capture of electrons and holes is resolved by performing temperature-dependent SPS and pump-probe SPS measurements. It is found that the deep trap states are distributed throughout the sample, while their dominance in the SPS spectra depends on the density and occupation probability of the deep trap states and on the background electron density of the GaN channel layer. The dynamics of the deep trap states associated with the GaN channel layer is investigated by performing frequency-dependent photoluminescence (PL) and SPS measurements. A time constant of a few milliseconds is estimated for the deep defects, which might limit the dynamic performance of AlGaN/GaN-based devices.
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As aging power systems increasingly operate close to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple-model algorithms are developed to diagnose transmission-line three-phase short-to-ground faults in the presence of protection misoperations. The multiple-model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple-model algorithm is incorporated into a hybrid simulation framework, consisting of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line-switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted-mode probability and stability coverage. Local measurements are used to track the generator state, and faulted-mode probabilities are calculated in the multiple-model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices in power systems is investigated with the criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system, with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework is presented, including a high-level recloser allocation scheme and a low-level recloser placement scheme. The impacts of recloser placement on the reliability indices are analyzed.
Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
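The faulted-mode probability calculation at the heart of a multiple-model diagnosis can be illustrated with a generic Bayesian update. The Gaussian-residual likelihood and the numbers below are assumed for illustration; this is not the dissertation's algorithm:

```python
import math

def update_mode_probs(probs, residuals, sigma=1.0):
    """Reweight each candidate mode's probability by the Gaussian likelihood
    of its model's latest measurement residual, then renormalize."""
    likes = [math.exp(-r * r / (2 * sigma * sigma)) for r in residuals]
    post = [p * l for p, l in zip(probs, likes)]
    total = sum(post)
    return [p / total for p in post]

# three candidate fault modes; mode 1's model explains the measurement best
probs = update_mode_probs([1/3, 1/3, 1/3], residuals=[2.0, 0.1, 1.5])
```

Iterating this update over successive measurements concentrates probability on the mode whose model consistently produces the smallest residuals, which is what makes the diagnosis tolerant of individual protection misoperations.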
Using climate model simulations to assess the current climate risk to maize production
NASA Astrophysics Data System (ADS)
Kent, Chris; Pope, Edward; Thompson, Vikki; Lewis, Kirsty; Scaife, Adam A.; Dunstone, Nick
2017-05-01
The relationship between the climate and agricultural production is of considerable importance to global food security. However, there has been relatively little exploration of climate-variability related yield shocks. The short observational yield record does not adequately sample natural inter-annual variability thereby limiting the accuracy of probability assessments. Focusing on the United States and China, we present an innovative use of initialised ensemble climate simulations and a new agro-climatic indicator, to calculate the risk of severe water stress. Combined, these regions provide 60% of the world’s maize, and therefore, are crucial to global food security. To probe a greater range of inter-annual variability, the indicator is applied to 1400 simulations of the present day climate. The probability of severe water stress in the major maize producing regions is quantified, and in many regions an increased risk is found compared to calculations from observed historical data. Analysis suggests that the present day climate is also capable of producing unprecedented severe water stress conditions. Therefore, adaptation plans and policies based solely on observed events from the recent past may considerably under-estimate the true risk of climate-related maize shocks. The probability of a major impact event occurring simultaneously across both regions—a multi-breadbasket failure—is estimated to be up to 6% per decade and arises from a physically plausible climate state. This novel approach highlights the significance of climate impacts on crop production shocks and provides a platform for considerably improving food security assessments, in the present day or under a changing climate, as well as development of new risk based climate services.
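The risk calculation described above amounts to an empirical exceedance probability over a large ensemble. A minimal sketch, using synthetic indicator values in place of the paper's 1400 climate simulations (the distribution and threshold are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for the 1400-member present-day climate ensemble: one
# agro-climatic indicator value per simulated season (values invented).
indicator = rng.normal(loc=1.0, scale=0.3, size=1400)
severe_threshold = 0.4          # indicator below this counts as severe stress

p_severe = float(np.mean(indicator < severe_threshold))  # annual probability
p_decade = 1.0 - (1.0 - p_severe) ** 10                  # >= 1 event per decade
print(p_severe, p_decade)
```

The advantage over the observational record is simply sample size: 1400 simulated seasons resolve tail probabilities that a few decades of yield data cannot.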
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
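The article does not specify which possibility-probability transformation it uses; one standard choice is the Dubois-Prade transformation, which converts a discrete possibility distribution (sorted in decreasing order, maximum equal to 1) into a probability distribution. A sketch under that assumption:

```python
def possibility_to_probability(pi):
    """Dubois-Prade transformation of a discrete possibility distribution
    (sorted descending, pi[0] == 1.0) into a probability distribution:
    p_i = sum_{j >= i} (pi_j - pi_{j+1}) / (j + 1), with the value after
    the last entry taken as 0."""
    n = len(pi)
    ext = list(pi) + [0.0]
    return [sum((ext[j] - ext[j + 1]) / (j + 1) for j in range(i, n))
            for i in range(n)]

# Example: three states with possibilities 1.0, 0.6, 0.2 (invented values).
p = possibility_to_probability([1.0, 0.6, 0.2])
print(p)   # probability mass concentrates on the most possible state
```

By construction the result sums to one and preserves the ordering of the possibility values, which is what makes such transformations usable for moving between the purely possibilistic and purely probabilistic propagation settings compared in the article.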
Experimental search for the violation of Pauli exclusion principle: VIP-2 Collaboration.
Shi, H; Milotti, E; Bartalucci, S; Bazzi, M; Bertolucci, S; Bragadireanu, A M; Cargnelli, M; Clozza, A; De Paolis, L; Di Matteo, S; Egger, J-P; Elnaggar, H; Guaraldo, C; Iliescu, M; Laubenstein, M; Marton, J; Miliucci, M; Pichler, A; Pietreanu, D; Piscicchia, K; Scordo, A; Sirghi, D L; Sirghi, F; Sperandio, L; Vazquez Doce, O; Widmann, E; Zmeskal, J; Curceanu, C
2018-01-01
The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.
Patrick Reilly, J
2014-10-01
Differences between the IEEE C95 standards (C95.6-2002 and C95.1-2005) and the ICNIRP-2010 guidelines in the low-frequency range (1 Hz-100 kHz) appear across the frequency spectrum. Factors accounting for the lack of convergence include differences between the IEEE standards and the ICNIRP guidelines with respect to: biological induction models, stated objectives, the data trail from experimentally derived thresholds through physical and biological principles, selection and justification of safety/reduction factors, use of probability models, compliance standards for the limbs as distinct from the whole body, defined population categories, strategies for central nervous system protection below 20 Hz, and the correspondence of environmental electric field limits with contact currents. This paper discusses these factors and makes the case for adoption of the limits in the IEEE standards.
Experimental search for the violation of Pauli exclusion principle. VIP-2 Collaboration
NASA Astrophysics Data System (ADS)
Shi, H.; Milotti, E.; Bartalucci, S.; Bazzi, M.; Bertolucci, S.; Bragadireanu, A. M.; Cargnelli, M.; Clozza, A.; De Paolis, L.; Di Matteo, S.; Egger, J.-P.; Elnaggar, H.; Guaraldo, C.; Iliescu, M.; Laubenstein, M.; Marton, J.; Miliucci, M.; Pichler, A.; Pietreanu, D.; Piscicchia, K.; Scordo, A.; Sirghi, D. L.; Sirghi, F.; Sperandio, L.; Vazquez Doce, O.; Widmann, E.; Zmeskal, J.; Curceanu, C.
2018-04-01
The VIolation of Pauli exclusion principle-2 experiment, or VIP-2 experiment, at the Laboratori Nazionali del Gran Sasso searches for X-rays from copper atomic transitions that are prohibited by the Pauli exclusion principle. Candidate direct violation events come from the transition of a 2p electron to the ground state that is already occupied by two electrons. From the first data taking campaign in 2016 of the VIP-2 experiment, we determined a best upper limit of 3.4 × 10^{-29} for the probability that such a violation exists. Significant improvement in the control of the experimental systematics was also achieved, although not explicitly reflected in the improved upper limit. By introducing a simultaneous spectral fit of the signal and background data in the analysis, we succeeded in taking into account systematic errors that could not be evaluated previously in this type of measurement.
Connections between the dynamical symmetries in the microscopic shell model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgieva, A. I., E-mail: anageorg@issp.bas.bg; Drumev, K. P.
2016-03-25
The dynamical symmetries of the microscopic shell model appear as the limiting cases of a symmetry-adapted Pairing-Plus-Quadrupole Model (PQM), with a Hamiltonian containing isoscalar and isovector pairing and quadrupole interactions. We establish a correspondence between each of the three types of pairing bases and Elliott's SU(3) basis, that describes collective rotation of nuclear systems with quadrupole deformation. It is derived from their complementarity to the same LS coupling chain of the shell model number conserving algebra. The probability distribution of the SU(3) basis states within the pairing eigenstates is also obtained through a numerical diagonalization of the PQM Hamiltonian in each limit. We introduce control parameters, which define the phase diagram of the model and determine the role of each term of the Hamiltonian in the correct reproduction of the experimental data for the considered nuclei.
Development of a Nonlinear Probability of Collision Tool for the Earth Observing System
NASA Technical Reports Server (NTRS)
McKinley, David P.
2006-01-01
The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. 
The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure I shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
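The linear-relative-trajectory probability described above reduces to integrating a Gaussian position-error density over the hard-body circle in the encounter plane. A Monte Carlo sketch of that integral (the miss distance, covariance, and hard body radius below are illustrative, not values from the paper):

```python
import numpy as np

def collision_probability(miss, cov, hbr, n=200_000, seed=1):
    """Monte Carlo estimate of Pc: the probability that the Gaussian
    relative-position error, projected into the encounter plane, falls
    inside the circle of hard body radius hbr centred on the origin."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(miss, cov, size=n)
    return float(np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr))

# Illustrative encounter: 200 m miss distance, 100 m isotropic sigma,
# 20 m hard body radius (numbers are not from the paper).
pc = collision_probability(miss=[200.0, 0.0],
                           cov=[[100.0 ** 2, 0.0], [0.0, 100.0 ** 2]],
                           hbr=20.0)
print(pc)
```

The nonlinear algorithm in the paper replaces this single circle-in-a-plane integral with a series of such simple integrals, one per cylinder segment along the curved relative trajectory.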
NASA Technical Reports Server (NTRS)
Guberman, S.; Dalgarno, A.; Posen, A.; Kwok, T. L.
1986-01-01
Multiconfiguration variational calculations of the electronic wave functions of the a 3Sigma(+)g and b 3Sigma(+)u states of molecular hydrogen are presented, and the electric dipole transition moment between them (of interest in connection with stellar atmospheres and the UV spectrum of the Jovian planets) is obtained. The dipole moment is used to calculate the probabilities of radiative transitions from the discrete vibrational levels of the a 3Sigma(+)g state to the vibrational continuum of the repulsive b 3Sigma(+)u state as functions of the wavelength of the emitted photons. The total transition probabilities and radiative lifetimes of the levels v prime = 0-20 are presented.
The effect of kerosene injection on ignition probability of local ignition in a scramjet combustor
NASA Astrophysics Data System (ADS)
Bao, Heng; Zhou, Jin; Pan, Yu
2017-03-01
The spark ignition of kerosene is investigated in a scramjet combustor at a flight condition of Ma 4, 17 km. Based on a large body of experimental data, the ignition probabilities of local ignition have been acquired for different injection setups. The ignition probability distributions show that the injection pressure and injection location have a distinct effect on spark ignition. The injection pressure has both upper and lower limits for local ignition. Generally, a larger mass flow rate will reduce the ignition probability. The injection position also affects ignition near the lower pressure limit. The reason is thought to be the cavity swallowing effect on the upstream jet spray near the leading edge, which makes the cavity fuel-rich. The corner recirculation zone near the front wall of the cavity plays a significant role in the stabilization of the local flame.
Zhu, Yali; Song, Liping; Stroud, Jason; Parris, Deborah S
2008-01-01
Results suggest a high probability that abasic (AP) sites occur at least once per herpes simplex virus type 1 (HSV-1) genome. The parameters that control the ability of HSV-1 DNA polymerase (pol) to engage in AP translesion synthesis (TLS) were examined because AP lesions could influence the completion and fidelity of viral DNA synthesis. Pre-steady-state kinetic experiments demonstrated that wildtype (WT) and exonuclease-deficient (exo-) pol could incorporate opposite an AP lesion, but full TLS required absence of exo function. Virtually all of the WT pol was bound at the exo site to AP-containing primer-templates (P/Ts) at equilibrium, and the pre-steady-state rate of excision by WT pol was higher on AP-containing than on matched DNA. However, several factors influencing polymerization work synergistically with exo activity to prevent HSV-1 pol from engaging in TLS. Although the pre-steady-state catalytic rate constant for insertion of dATP opposite a T or AP site was similar, ground-state-binding affinity of dATP for insertion opposite an AP site was reduced 3-9-fold. Single-turnover running-start experiments demonstrated a reduced proportion of P/Ts extended to the AP site compared to the preceding site during processive synthesis by WT or exo- pol. Only the exo- pol engaged in TLS, though inefficiently and without burst kinetics, suggesting a much slower rate-limiting step for extension beyond the AP site.
Probability distributions of continuous measurement results for conditioned quantum evolution
NASA Astrophysics Data System (ADS)
Franquet, A.; Nazarov, Yuli V.
2017-02-01
We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by the postselection in the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in the average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states demonstrating anomalously big average outputs and sudden jump in time-integrated output. We present and discuss the numerical evaluation of the probability distribution aiming at extending the analytical results and describing a realistic experimental situation of a qubit in the regime of resonant fluorescence.
Kendall, W.L.; Nichols, J.D.
2002-01-01
Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...
2015-12-02
The success of modular high-temperature gas-cooled reactors is highly dependent on the performance of the tristructural isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated-particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low-enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. As a result, these critical manufacturing limits identify ranges beyond which an increase in fuel particle failure probability is expected to occur.
Miller, G Y; Ming, J; Williams, I; Gorvett, R
2012-12-01
Foot and mouth disease (FMD) continues to be a disease of major concern for the United States Department of Agriculture (USDA) and livestock industries. Foot and mouth disease virus is a high-consequence pathogen for the United States (USA). Live animal trade is a major risk factor for introduction of FMD into a country. This research estimates the probability of FMD being introduced into the USA via the legal importation of livestock. This probability is calculated by considering the potential introduction of FMD from each country from which the USA imports live animals. The total probability of introduction into the USA of FMD from imported livestock is estimated to be 0.415% per year, which is equivalent to one introduction every 241 years. In addition, to provide a basis for evaluating the significance of risk management techniques and expenditures, the sensitivity of the above result to changes in various risk parameter assumptions is determined.
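Combining per-country introduction probabilities into a total, and converting that total into a mean recurrence interval, follows from elementary probability (the per-country values below are invented for illustration; the paper's total of 0.415% per year gives 1/0.00415 ≈ 241 years):

```python
# Hypothetical per-country annual probabilities of an FMD introduction via
# imported livestock (invented; the abstract reports only the 0.415% total).
p_country = [0.002, 0.0015, 0.0006, 0.0001]

p_none = 1.0
for p in p_country:
    p_none *= (1.0 - p)           # no introduction from any source country
p_total = 1.0 - p_none            # at least one introduction per year

recurrence_years = 1.0 / p_total  # mean interval between introductions
print(p_total, recurrence_years)
```

For small probabilities the total is close to the simple sum of the per-country values, which is why the sensitivity analysis in the paper can attribute most of the risk to a few import sources.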
Generalized quantum theory of recollapsing homogeneous cosmologies
NASA Astrophysics Data System (ADS)
Craig, David; Hartle, James B.
2004-06-01
A sum-over-histories generalized quantum theory is developed for homogeneous minisuperspace type A Bianchi cosmological models, focusing on the particular example of the classically recollapsing Bianchi type-IX universe. The decoherence functional for such universes is exhibited. We show how the probabilities of decoherent sets of alternative, coarse-grained histories of these model universes can be calculated. We consider in particular the probabilities for classical evolution defined by a suitable coarse graining. For a restricted class of initial conditions and coarse grainings we exhibit the approximate decoherence of alternative histories in which the universe behaves classically and those in which it does not. For these situations we show that the probability is near unity for the universe to recontract classically if it expands classically. We also determine the relative probabilities of quasiclassical trajectories for initial states of WKB form, recovering for such states a precise form of the familiar heuristic "J·dΣ" rule of quantum cosmology, as well as a generalization of this rule to generic initial states.
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
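The Weibull closed form mentioned above and the Monte Carlo approach the article introduces can be put side by side in a short sketch (the applied stress and Weibull parameters are illustrative assumptions, not values from the article):

```python
import math
import random

def weibull_pf(sigma, sigma0, m):
    """Two-parameter Weibull probability of failure at applied stress sigma
    (sigma0: characteristic strength, m: Weibull modulus)."""
    return 1.0 - math.exp(-((sigma / sigma0) ** m))

# Closed-form failure probability (all parameter values are illustrative).
pf_exact = weibull_pf(sigma=300.0, sigma0=400.0, m=10.0)

# Monte Carlo cross-check: sample strengths via the inverse Weibull CDF,
# x = sigma0 * (-ln(1 - U))**(1/m), and count strengths below the stress.
random.seed(0)
n = 100_000
fails = sum(
    400.0 * (-math.log(1.0 - random.random())) ** 0.1 < 300.0
    for _ in range(n)
)
pf_mc = fails / n
print(pf_exact, pf_mc)
```

This is the rare case the authors highlight: the closed form is exact, so the Monte Carlo estimate serves only as a check, whereas for general limit state functions only the sampling (or FPI) route is available.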
Quantum Trajectories and Their Statistics for Remotely Entangled Quantum Bits
NASA Astrophysics Data System (ADS)
Chantasri, Areeya; Kimchi-Schwartz, Mollie E.; Roch, Nicolas; Siddiqi, Irfan; Jordan, Andrew N.
2016-10-01
We experimentally and theoretically investigate the quantum trajectories of jointly monitored transmon qubits embedded in spatially separated microwave cavities. Using nearly quantum-noise-limited superconducting amplifiers and an optimized setup to reduce signal loss between cavities, we can efficiently track measurement-induced entanglement generation as a continuous process for single realizations of the experiment. The quantum trajectories of transmon qubits naturally split into low and high entanglement classes. The distribution of concurrence is found at any given time, and we explore the dynamics of entanglement creation in the state space. The distribution exhibits a sharp cutoff in the high concurrence limit, defining a maximal concurrence boundary. The most-likely paths of the qubits' trajectories are also investigated, resulting in three probable paths, gradually projecting the system to two even subspaces and an odd subspace, conforming to a "half-parity" measurement. We also investigate the most-likely time for the individual trajectories to reach their most entangled state, and we find that there are two solutions for the local maximum, corresponding to the low and high entanglement routes. The theoretical predictions show excellent agreement with the experimental entangled-qubit trajectory data.
NASA Astrophysics Data System (ADS)
Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran
2018-03-01
In this study a 3D multi-particle diffusion-limited aggregation method is employed to simulate the growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which radial motion of the particles with probability P competes with random motion with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, the kinetics of roughening, and the fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in a smoother topography with a more densely packed structure. We have suggested a dynamic scaling ansatz for the interface width as a function of deposition time, scan length, and probability. Two different methods are employed to evaluate the fractal dimension of the simulated surfaces: the "cube counting" and "roughness" methods. The results of both methods show that by increasing the probability P or decreasing the deposition time, the fractal dimension of the simulated surfaces is increased. All obtained values of the fractal dimension are close to 2.5, as in the diffusion-limited aggregation model.
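A minimal 2-D analogue of the growth model described above - radial motion toward the seed with probability P, a random step otherwise - can be sketched as follows (the paper's model is 3-D and multi-particle; the particle count, launch geometry, and p_radial here are illustrative):

```python
import math
import random

def grow_dla(n_particles=150, p_radial=0.3, seed=1):
    """Minimal 2-D on-lattice DLA: with probability p_radial the walker
    steps one site toward the seed at the origin, otherwise it takes a
    random nearest-neighbour step; it sticks on touching the cluster."""
    random.seed(seed)
    occupied = {(0, 0)}
    max_r = 0.0
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            # launch on a circle just outside the current cluster radius
            ang = random.uniform(0.0, 2.0 * math.pi)
            r0 = max_r + 3.0
            x, y = round(r0 * math.cos(ang)), round(r0 * math.sin(ang))
            while True:
                if math.hypot(x, y) > r0 + 10.0:
                    break                     # wandered too far: relaunch
                if random.random() < p_radial:
                    # radial drift: step inward along the longer axis
                    if abs(x) >= abs(y) and x != 0:
                        x -= 1 if x > 0 else -1
                    elif y != 0:
                        y -= 1 if y > 0 else -1
                else:
                    dx, dy = random.choice(nbrs)
                    x, y = x + dx, y + dy
                if any((x + dx, y + dy) in occupied for dx, dy in nbrs):
                    occupied.add((x, y))      # stick next to the cluster
                    max_r = max(max_r, math.hypot(x, y))
                    stuck = True
                    break
    return occupied

cluster = grow_dla()
print(len(cluster))   # seed + one site per particle
```

Increasing p_radial makes walkers approach the cluster more directly, which yields the denser, smoother deposits the paper reports for stronger electric fields.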
NASA Astrophysics Data System (ADS)
Koglin, Johnathon
Accurate nuclear reaction data from a few keV to tens of MeV and across the table of nuclides are essential to a number of applications of nuclear physics, including national security, nuclear forensics, nuclear astrophysics, and nuclear energy. Precise determination of (n, f) and neutron capture cross sections for reactions in high-flux environments is particularly important for a proper understanding of nuclear reactor performance and stellar nucleosynthesis. In these extreme environments, reactions on short-lived and otherwise difficult-to-produce isotopes play a significant role in system evolution and provide insights into the types of nuclear processes taking place; a detailed understanding of these processes is necessary to properly determine cross sections far from stability. Indirect methods are often attempted to measure cross sections on isotopes that are difficult to separate in a laboratory setting. Using the surrogate approach, the same compound nucleus as in the reaction of interest is created through a "surrogate" reaction on a different isotope and the resulting decay is measured. This result is combined with appropriate reaction theory for compound nucleus population, from which the desired cross sections can be inferred. This method has shown promise, but the theoretical framework often lacks the experimental data necessary to constrain models. In this work, dual arrays of silicon telescope particle identification detectors and photovoltaic (solar) cell fission fragment detectors have been used to measure the fission probability of the 240Pu(alpha, alpha'f) reaction - a surrogate for 239Pu(n, f) - at a beam energy of 35.9(2) MeV, at eleven scattering angles from 40° to 140° in 10° intervals and at nuclear excitation energies up to 16 MeV. Within experimental uncertainty, the maximum fission probability was observed at the neutron separation energy for each alpha scattering angle.
Fission probabilities were separated into five 500 keV bins from 5.5 MeV to 8.0 MeV and one bin from 4.5 MeV to 5.5 MeV. Across energy bins the fission probability increases approximately linearly with increasing excitation energy: at 90° the fission probability increases from 0.069(6) in the lowest energy bin to 0.59(2) in the highest. Likewise, within a single energy bin the fission probability increases with alpha' scattering angle: within the 6.5 MeV to 7.0 MeV energy bin, the fission probability increased from 0.41(1) at 60° to 0.81(10) at 140°. Fission fragment angular distributions were also measured, integrated over each energy bin. These distributions were fit to theoretical distributions based on combinations of transitional nuclear vibrational and rotational excitations at the saddle point. Contributions from specific K vibrational states were extracted and combined with the fission probability measurements to determine the relative fission probability of each state as a function of nuclear excitation energy. Within a given excitation energy bin, it is found that contributions from K states greater than the minimum K = 0 state tend to increase with increasing alpha' scattering angle. This is attributed to an increase in the transferred angular momentum associated with larger scattering angles. The 90° alpha' scattering angle produced the highest quality results. The relative contributions of K states do not show a discernible trend across the energy spectrum. The energy-binned results confirm existing measurements that place a K = 2 state in the first energy bin, with the opening of K = 1 and K = 4 states at energies above 5.5 MeV. This experiment represents the first of its kind in which fission probabilities and angular distributions are simultaneously measured at a large number of scattering angles.
The acquired fission probability, angular distribution, and K state contribution data provide a diverse dataset against which microscopic fission models can be constrained, furthering the understanding of the properties of 240Pu fission.
Role of noise and agents’ convictions on opinion spreading in a three-state voter-like model
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno
2013-07-01
In this work we study opinion formation in a voter-like model defined on a square lattice of linear size L. The agents may be in three different states, representing any public debate with three choices (yes, no, undecided). We consider heterogeneous agents that have different convictions about their opinions. These convictions limit the capacity of persuasion of the individuals during the interactions. Moreover, there is a noise p that represents the probability of an individual spontaneously changing his opinion to the undecided state. Our simulations suggest that the system reaches stationary states for all values of p, with consensus states occurring only in the noiseless case p = 0. In this case, the relaxation times are distributed according to a log-normal function, with the average value τ growing with the lattice size as τ ∼ L^α, where α ≈ 0.9. We found a threshold value p* ≈ 0.9 above which the stationary fraction of undecided agents is greater than the fraction of decided ones. We also study the consequences of the presence of external effects in the system, which models the influence of mass media on opinion formation.
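The single-spin-flip dynamics described above can be sketched as a minimal Monte Carlo simulation. The specific update rule below (random neighbor choice, use of the neighbor's conviction as a persuasion probability, and all names) is an illustrative assumption, not the paper's exact specification:

```python
import random

def simulate(L=20, p=0.1, steps=2000, seed=1):
    """Toy three-state (yes=+1, no=-1, undecided=0) voter-like model on an
    LxL periodic lattice with noise p. Assumed rule: a random agent becomes
    undecided with probability p; otherwise it adopts a random neighbor's
    opinion with probability equal to that neighbor's conviction."""
    rng = random.Random(seed)
    state = [[rng.choice([-1, 0, 1]) for _ in range(L)] for _ in range(L)]
    conviction = [[rng.random() for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        if rng.random() < p:
            state[i][j] = 0  # spontaneous switch to the undecided state
            continue
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        ni, nj = (i + di) % L, (j + dj) % L  # periodic boundaries
        if rng.random() < conviction[ni][nj]:
            state[i][j] = state[ni][nj]  # persuaded by the neighbor
    flat = [s for row in state for s in row]
    return {s: flat.count(s) / (L * L) for s in (-1, 0, 1)}
```

For large p, the returned fraction of undecided agents (key 0) dominates, qualitatively matching the threshold behavior reported above.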
Robust EM Continual Reassessment Method in Oncology Dose Finding
Yuan, Ying; Yin, Guosheng
2012-01-01
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
A Gaussian measure of quantum phase noise
NASA Technical Reports Server (NTRS)
Schleich, Wolfgang P.; Dowling, Jonathan P.
1992-01-01
We study the width of the semiclassical phase distribution of a quantum state in its dependence on the average number of photons ⟨m⟩ in this state. As a measure of phase noise, we choose the width, Δφ, of the best Gaussian approximation to the dominant peak of this probability curve. For a coherent state, this width decreases with the square root of ⟨m⟩, whereas for a truncated phase state it decreases linearly with increasing ⟨m⟩. For an optimal phase state, Δφ decreases exponentially, but so does the area caught underneath the peak: all the probability is stored in the broad wings of the distribution.
Chen, Y I; Burall, Laurel S; Macarisin, Dumitru; Pouillot, Régis; Strain, Errol; DE Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Klontz, Karl; Parish, Mickey; Evans, Peter S; Brown, Eric W; Hammack, Thomas S; Zink, Donald L; Datta, Atin R
2016-11-01
A most-probable-number (MPN) method was used to enumerate Listeria monocytogenes in 2,320 commercial ice cream scoops manufactured on a production line that was implicated in a 2015 listeriosis outbreak in the United States. The analyzed samples were collected from seven lots produced in November 2014, December 2014, January 2015, and March 2015. L. monocytogenes was detected in 99% (2,307 of 2,320) of the tested samples (lower limit of detection, 0.03 MPN/g), 92% of which were contaminated at <20 MPN/g. The levels of L. monocytogenes in these samples had a geometric mean per lot of 0.15 to 7.1 MPN/g. The prevalence and enumeration data from an unprecedented large number of naturally contaminated ice cream products linked to a listeriosis outbreak provided a unique data set for further understanding the risk associated with L. monocytogenes contamination for highly susceptible populations.
Shock compression of a recrystallized anorthositic rock from Apollo 15
NASA Technical Reports Server (NTRS)
Ahrens, T. J.; Gibbons, R. V.; O'Keefe, J. D.
1973-01-01
Hugoniot measurements on 15418, a recrystallized and brecciated gabbroic anorthosite, yield a value of the Hugoniot elastic limit (HEL) varying from 45 to 70 kbar as the final shock pressure is varied from 70 to 280 kbar. Above the HEL and up to 150 kbar, the pressure-density Hugoniot is closely described by a hydrostatic equation of state constructed from ultrasonic data for single-crystal plagioclase and pyroxene. Above 150 kbar, the Hugoniot states indicate that a series of one or more shock-induced phase changes is occurring in the plagioclase and pyroxene. From Hugoniot data for both the single-crystal minerals and the Frederick diabase, we infer that the shock-induced high-pressure phases in 15418 probably consist of a 3.71 g/cu cm density, high-pressure structure for plagioclase and a 4.70 g/cu cm perovskite-type structure for pyroxene.
Robust distant-entanglement generation using coherent multiphoton scattering
NASA Astrophysics Data System (ADS)
Chan, Ching-Kit; Sham, L. J.
2013-03-01
The generation and controllability of entanglement between distant quantum states have been at the heart of quantum computation and quantum information processing. Existing schemes for solid-state qubit entanglement are based on single-photon spectroscopy, which has the merit of high-fidelity entanglement creation but very limited efficiency. This severely restricts the scalability of a qubit network system. Here, we describe a new distant-entanglement protocol using coherent multiphoton scattering. The scheme makes use of the postselection of large and distinguishable photon signals, and has both a high success probability and a high entanglement fidelity. Our results show that the entanglement generation is robust against photon fluctuations and has an average entanglement duration within the decoherence time of various qubit systems, based on existing experimental parameters. This research was supported by the U.S. Army Research Office MURI award W911NF0910406 and by NSF grant PHY-1104446.
Stochastic Dynamics through Hierarchically Embedded Markov Chains
NASA Astrophysics Data System (ADS)
Vasconcelos, Vítor V.; Santos, Fernando P.; Santos, Francisco C.; Pacheco, Jorge M.
2017-02-01
Studying dynamical phenomena in finite populations often involves Markov processes of significant mathematical and/or computational complexity, which rapidly becomes prohibitive with increasing population size or an increasing number of individual configuration states. Here, we develop a framework that allows us to define a hierarchy of approximations to the stationary distribution of general systems that can be described as discrete Markov processes with time invariant transition probabilities and (possibly) a large number of states. This results in an efficient method for studying social and biological communities in the presence of stochastic effects—such as mutations in evolutionary dynamics and a random exploration of choices in social systems—including situations where the dynamics encompasses the existence of stable polymorphic configurations, thus overcoming the limitations of existing methods. The present formalism is shown to be general in scope, widely applicable, and of relevance to a variety of interdisciplinary problems.
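As a baseline for the hierarchy of approximations described above, the exact stationary distribution of a small discrete Markov chain with time-invariant transition probabilities can be computed directly; it is this direct solve that becomes prohibitive as the number of states grows. A minimal sketch (function name and example chain are illustrative):

```python
import numpy as np

def stationary_distribution(P):
    """Exact stationary distribution pi of a row-stochastic matrix P,
    i.e. the solution of pi P = pi with sum(pi) = 1, obtained by solving
    the overdetermined system [(P^T - I); 1^T] pi = [0; 1]."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0  # normalization constraint
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Two-state chain: leave state 0 w.p. 0.1, leave state 1 w.p. 0.3
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
```

By detailed balance for this two-state chain, pi is proportional to (0.3, 0.1), i.e. (0.75, 0.25).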
Measurement-based control of a mechanical oscillator at its thermal decoherence rate.
Wilson, D J; Sudhir, V; Piro, N; Schilling, R; Ghadimi, A; Kippenberg, T J
2015-08-20
In real-time quantum feedback protocols, the record of a continuous measurement is used to stabilize a desired quantum state. Recent years have seen successful applications of these protocols in a variety of well-isolated micro-systems, including microwave photons and superconducting qubits. However, stabilizing the quantum state of a tangibly massive object, such as a mechanical oscillator, remains very challenging: the main obstacle is environmental decoherence, which places stringent requirements on the timescale in which the state must be measured. Here we describe a position sensor that is capable of resolving the zero-point motion of a solid-state, 4.3-megahertz nanomechanical oscillator in the timescale of its thermal decoherence, a basic requirement for real-time (Markovian) quantum feedback control tasks, such as ground-state preparation. The sensor is based on evanescent optomechanical coupling to a high-Q microcavity, and achieves an imprecision four orders of magnitude below that at the standard quantum limit for a weak continuous position measurement--a 100-fold improvement over previous reports--while maintaining an imprecision-back-action product that is within a factor of five of the Heisenberg uncertainty limit. As a demonstration of its utility, we use the measurement as an error signal with which to feedback cool the oscillator. Using radiation pressure as an actuator, the oscillator is cold damped with high efficiency: from a cryogenic-bath temperature of 4.4 kelvin to an effective value of 1.1 ± 0.1 millikelvin, corresponding to a mean phonon number of 5.3 ± 0.6 (that is, a ground-state probability of 16 per cent). Our results set a new benchmark for the performance of a linear position sensor, and signal the emergence of mechanical oscillators as practical subjects for measurement-based quantum control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welsch, Ralph, E-mail: rwelsch@uni-bielefeld.de; Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de
2015-02-14
Initial state-selected reaction probabilities of the H + CH{sub 4} → H{sub 2} + CH{sub 3} reaction are calculated in full and reduced dimensionality on a recent neural network potential [X. Xu, J. Chen, and D. H. Zhang, Chin. J. Chem. Phys. 27, 373 (2014)]. The quantum dynamics calculation employs the quantum transition state concept and the multi-layer multi-configurational time-dependent Hartree approach and rigorously studies the reaction for vanishing total angular momentum (J = 0). The calculations investigate the accuracy of the neural network potential and study the effect resulting from a reduced-dimensional treatment. Very good agreement is found between the present results obtained on the neural network potential and previous results obtained on a Shepard-interpolated potential energy surface. The reduced-dimensional calculations only consider motion in eight degrees of freedom and retain the C{sub 3v} symmetry of the methyl fragment. Considering reaction starting from the vibrational ground state of methane, the reaction probabilities calculated in reduced dimensionality are moderately shifted in energy compared to the full-dimensional ones but otherwise agree rather well. Similar agreement is also found if reaction probabilities averaged over similar types of vibrational excitation of the methane reactant are considered. In contrast, significant differences between reduced- and full-dimensional results are found for reaction probabilities starting specifically from symmetric stretching, asymmetric (f{sub 2}-symmetric) stretching, or e-symmetric bending excited states of methane.
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate a probability, effectively by simulation. For planetary protection, it may be used to estimate the probability of impact P_I of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood [F. Garwood (1936), "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442] published an appropriate method that uses the inverse of the integral chi-squared function (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n), despite the notation used: χ²(α/2; 2n)/2 ≤ μ ≤ χ²(1 − α/2; 2(n+1))/2. This formula for the upper and lower limits of the mean μ with two-tailed probability 1 − α depends on the LOC α and an estimated value n of the number of "successes".
In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution. (A smaller actual P_I is no problem.) One advantage of this method is that this function is available in EXCEL. Note that care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1 − α, 2(n+1)]/2. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. Then the estimate of the number of histories required is this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1 − α" [G. Casella and C. Robert (1988), Purdue University Technical Report #88-7 / Cornell University Technical Report BU-903-M]. For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
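The recipe above can be sketched in a few lines. This follows the standard single-tailed Garwood upper limit, including the conventional factor of 1/2 from the chi-squared/Poisson relation; function names are illustrative:

```python
from math import ceil
from scipy.stats import chi2

def poisson_upper_limit(n_hits, loc):
    """Single-tailed Garwood upper limit on a Poisson mean, given n_hits
    observed 'successes' and a level of confidence `loc`. In EXCEL terms
    this is CHIINV(1 - loc, 2*(n_hits + 1)) / 2."""
    return 0.5 * chi2.ppf(loc, 2 * (n_hits + 1))

def histories_required(n_hits_guess, loc, p_spec):
    """Number of MC histories needed so that, if no more than n_hits_guess
    hits occur, the impact-probability spec p_spec is met at the LOC:
    the upper limit divided by the spec, rounded up."""
    return ceil(poisson_upper_limit(n_hits_guess, loc) / p_spec)
```

For zero expected hits at 95% LOC the upper limit is about 3 (the familiar "rule of three"), so a spec of 1e-4 requires roughly 30,000 histories.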
Electoral Susceptibility and Entropically Driven Interactions
NASA Astrophysics Data System (ADS)
Caravan, Bassir; Levine, Gregory
2013-03-01
In the United States electoral system the election is usually decided by the electoral votes cast by a small number of "swing states" where the two candidates historically have roughly equal probabilities of winning. The effective value of a swing state is determined not only by the number of its electoral votes but by the frequency of its appearance in the set of winning partitions of the electoral college. Since the electoral vote values of swing states are not identical, the presence or absence of a state in a winning partition is generally correlated with the frequency of appearance of other states and, hence, their effective values. We quantify the effective value of states by an electoral susceptibility, χ_j, the variation of the winning probability with the "cost" of changing the probability of winning state j. Associating entropy with the logarithm of the number of appearances of a state within the set of winning partitions, the entropy per state (in effect, the chemical potential) is not additive and the states may be said to "interact." We study χ_j for a simple model with a Zipf's-law-type distribution of electoral votes. We show that the susceptibility for small states is largest in "one-sided" electoral contests and smallest in close contests. This research was supported by Department of Energy DE-FG02-08ER64623, Research Corporation CC6535 (GL) and the HHMI Scholar Program (BC).
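A state's frequency of appearance in the set of winning partitions can be illustrated by brute force for a tiny electoral college (vote values and function name are illustrative, not real state data):

```python
from itertools import combinations

def appearance_frequencies(votes):
    """Enumerate all partitions (subsets of states won by one candidate),
    keep those whose electoral votes reach a strict majority, and tabulate
    how often each state appears among the winning partitions."""
    n = len(votes)
    threshold = sum(votes) // 2 + 1  # strict majority of electoral votes
    wins = 0
    appear = [0] * n
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(votes[i] for i in subset) >= threshold:
                wins += 1
                for i in subset:
                    appear[i] += 1
    return wins, [a / wins for a in appear]
```

With votes (5, 3, 1) the winning partitions are {0}, {0,1}, {0,2}, {0,1,2}: the large state appears in all of them, while the two smaller states each appear in half, despite having different vote counts.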
NASA Astrophysics Data System (ADS)
Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping
2015-05-01
It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.
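The base model referenced above, the classic Nagel-Schreckenberg (NaSch) update, can be sketched as follows. The pedestrian-vehicle conflict rules that the paper adds on top are omitted, and parameter names are illustrative:

```python
import random

def nasch_step(pos, vel, road_len, vmax=5, p_brake=0.3, rng=random):
    """One parallel update of the classic NaSch rules on a circular road:
    1. accelerate, 2. brake to the gap ahead, 3. random braking with
    probability p_brake, 4. move. Crosswalk conflict rules are not modeled."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])  # cars by position
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        ahead = pos[order[(k + 1) % n]]          # next car (old positions)
        gap = (ahead - pos[i] - 1) % road_len    # empty cells in between
        v = min(vel[i] + 1, vmax)                # 1. accelerate
        v = min(v, gap)                          # 2. brake to avoid collision
        if v > 0 and rng.random() < p_brake:
            v -= 1                               # 3. random braking
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len     # 4. move
    return new_pos, new_vel
```

Iterating this step over a range of densities reproduces the "increasing-saturating-decreasing" flow-density shape referred to above; the crosswalk behaviors would enter as extra rules around the crosswalk cell.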
Optimal structure of metaplasticity for adaptive learning
2017-01-01
Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using the mean-field and Monte Carlo simulations we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models can achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that ubiquitous unreliability of synaptic changes evinces metaplasticity that can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus adaptive learning. PMID:28658247
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models becomes more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
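The infinitesimal-distance construction described above is commonly written (in related work by the same authors; the precise definition used here is stated as an assumption) as:

```latex
% Squared rate of information change and its time-integral, the
% information length (assumed form of the definition sketched above):
\mathcal{E}(t) = \int dx\, \frac{1}{p(x,t)}
    \left[\frac{\partial p(x,t)}{\partial t}\right]^2,
\qquad
\mathcal{L}(\tau) = \int_0^{\tau} \sqrt{\mathcal{E}(t)}\, dt .
```

Here $\sqrt{\mathcal{E}(t)}\,dt$ is the infinitesimal statistical distance between the PDFs at $t$ and $t+dt$, and $\mathcal{L}(\tau)$ is dimensionless, counting the number of statistically distinguishable states traversed up to time $\tau$.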
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
Mortality, or Probability of Death, from a Suicidal Act in the United States
ERIC Educational Resources Information Center
Friedmann, Harry; Kohn, Robert
2008-01-01
The probability of death resulting from a suicidal act as a function of age is explored. Until recently, data on suicide attempts in the United States were not available, and therefore the relationship between attempts and completed suicide could not be systematically investigated. Now, with new surveillance of self-harm data from the Centers for…
Contraceptive failure in the United States
Trussell, James
2013-01-01
This review provides an update of previous estimates of first-year probabilities of contraceptive failure for all methods of contraception available in the United States. Estimates are provided of probabilities of failure during typical use (which includes both incorrect and inconsistent use) and during perfect use (correct and consistent use). The difference between these two probabilities reveals the consequences of imperfect use; it depends both on how unforgiving of imperfect use a method is and on how hard it is to use that method perfectly. These revisions reflect new research on contraceptive failure both during perfect use and during typical use. PMID:21477680
On the number of infinite geodesics and ground states in disordered systems
NASA Astrophysics Data System (ADS)
Wehr, Jan
1997-04-01
We study first-passage percolation models and their higher dimensional analogs—models of surfaces with random weights. We prove that under very general conditions the number of lines or, in the second case, hypersurfaces which locally minimize the sum of the random weights is with probability one equal to 0 or with probability one equal to +∞. As corollaries we show that in any dimension d≥2 the number of ground states of an Ising ferromagnet with random coupling constants equals (with probability one) 2 or +∞. Proofs employ simple large-deviation estimates and ergodic arguments.
Small violations of Bell inequalities for multipartite pure random states
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.
2018-05-01
For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manthe, Uwe, E-mail: uwe.manthe@uni-bielefeld.de; Ellerbrock, Roman, E-mail: roman.ellerbrock@uni-bielefeld.de
2016-05-28
A new approach for the quantum-state resolved analysis of polyatomic reactions is introduced. Based on the singular value decomposition of the S-matrix, energy-dependent natural reaction channels and natural reaction probabilities are defined. It is shown that the natural reaction probabilities are equal to the eigenvalues of the reaction probability operator [U. Manthe and W. H. Miller, J. Chem. Phys. 99, 3411 (1993)]. Consequently, the natural reaction channels can be interpreted as uniquely defined pathways through the transition state of the reaction. The analysis can efficiently be combined with reactive scattering calculations based on the propagation of thermal flux eigenstates. In contrast to a decomposition based straightforwardly on thermal flux eigenstates, it does not depend on the choice of the dividing surface separating reactants from products. The new approach is illustrated by studying a prototypical example, the H + CH{sub 4} → H{sub 2} + CH{sub 3} reaction. The natural reaction probabilities and the contributions of the different vibrational states of the methyl product to the natural reaction channels are calculated and discussed. The relation between the thermal flux eigenstates and the natural reaction channels is studied in detail.
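The decomposition described above can be sketched numerically: assuming the reaction probability operator takes the form S†S for the reactant-to-product block of the S-matrix, its eigenvalues are the squared singular values of that block, i.e. the natural reaction probabilities, and the singular vectors define the natural reaction channels. A toy example (matrix entries are illustrative, not a physical S-matrix):

```python
import numpy as np

def natural_reaction_probabilities(S_rp):
    """Natural reaction probabilities from the reactant->product block
    S_rp of the S-matrix: the squared singular values of S_rp, which are
    the eigenvalues of S_rp^dagger S_rp (the assumed form of the reaction
    probability operator). SVD returns them in descending order."""
    s = np.linalg.svd(S_rp, compute_uv=False)
    return s ** 2

# Toy 2x2 sub-unitary block with two channels of different strength
S = np.array([[0.6, 0.0],
              [0.0, 0.3]])
```

Each probability lies in [0, 1] for a sub-unitary block, consistent with its interpretation as a per-channel reaction probability.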
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
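For contrast with the paper's treatment of noisy FACS measurements, the standard noise-free maximum-likelihood estimate of a Markov transition matrix is simply normalized transition counts; this is the deterministic baseline the abstract argues can mislead when the observed counts are themselves random. A minimal sketch (function name is illustrative):

```python
import numpy as np

def ml_transition_matrix(sequences, n_states):
    """ML estimate of a Markov transition matrix from fully observed
    state sequences: count transitions a->b and normalize each row.
    Rows never visited are set to uniform (an arbitrary convention)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row,
                     out=np.full_like(counts, 1.0 / n_states),
                     where=row > 0)
```

When the per-state counts are noisy measurements rather than exact observations, plugging them into this estimator propagates the noise into the estimated probabilities and steady states, which is the failure mode the abstract describes.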
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
Greenhouse-gas emission targets for limiting global warming to 2 degrees C.
Meinshausen, Malte; Meinshausen, Nicolai; Hare, William; Raper, Sarah C B; Frieler, Katja; Knutti, Reto; Frame, David J; Allen, Myles R
2009-04-30
More than 100 countries have adopted a global warming limit of 2 degrees C or below (relative to pre-industrial levels) as a guiding principle for mitigation efforts to reduce climate change risks, impacts and damages. However, the greenhouse gas (GHG) emissions corresponding to a specified maximum warming are poorly known owing to uncertainties in the carbon cycle and the climate response. Here we provide a comprehensive probabilistic analysis aimed at quantifying GHG emission budgets for the 2000-50 period that would limit warming throughout the twenty-first century to below 2 degrees C, based on a combination of published distributions of climate system properties and observational constraints. We show that, for the chosen class of emission scenarios, both cumulative emissions up to 2050 and emission levels in 2050 are robust indicators of the probability that twenty-first century warming will not exceed 2 degrees C relative to pre-industrial temperatures. Limiting cumulative CO(2) emissions over 2000-50 to 1,000 Gt CO(2) yields a 25% probability of warming exceeding 2 degrees C, and a limit of 1,440 Gt CO(2) yields a 50% probability, given a representative estimate of the distribution of climate system properties. As known 2000-06 CO(2) emissions were approximately 234 Gt CO(2), less than half the proven economically recoverable oil, gas and coal reserves can still be emitted up to 2050 to achieve such a goal. Recent G8 Communiqués envisage halved global GHG emissions by 2050, for which we estimate a 12-45% probability of exceeding 2 degrees C, assuming 1990 as the emission base year and a range of published climate sensitivity distributions. Emissions levels in 2020 are a less robust indicator, but for the scenarios considered, the probability of exceeding 2 degrees C rises to 53-87% if global GHG emissions are still more than 25% above 2000 levels in 2020.
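A toy Monte Carlo in the spirit of this analysis: sample an uncertain transient response to cumulative emissions and estimate the probability that a given budget exceeds 2 degrees C. The lognormal parameters and the per-Gt warming coefficient are invented for illustration; they are not the paper's calibrated climate-system distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed: warming per Gt CO2 of cumulative emissions, lognormally distributed
tcre = rng.lognormal(mean=np.log(1.6e-3), sigma=0.35, size=100_000)

def p_exceed_2c(budget_gt_co2):
    """Fraction of sampled climate responses that push warming past 2 C."""
    return float((tcre * budget_gt_co2 > 2.0).mean())

for budget in (1000, 1440, 2000):
    print(budget, p_exceed_2c(budget))
```

The exceedance probability rises monotonically with the budget, which is the qualitative relationship the paper quantifies with real distributions.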
Minimum error discrimination between similarity-transformed quantum states
NASA Astrophysics Data System (ADS)
Jafarizadeh, M. A.; Sufiani, R.; Mazhari Khiavi, Y.
2011-07-01
Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form for the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate an irreducible representation, the optimal set of measurements and corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states, the optimal measurement strategy is the square-root measurement (SRM), whereas for the mixed states, SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in the general case, but the procedure can be applied on a case-by-case basis. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.
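A small numerical sketch of the square-root measurement for equiprobable pure states generated by a unitary, using the three symmetric "trine" qubit states as the example ensemble (my choice of example, not taken from the paper): build rho, form the SRM POVM elements M_k = rho^{-1/2} p_k |psi_k><psi_k| rho^{-1/2}, and evaluate the success probability.

```python
import numpy as np

N = 3
theta = 2 * np.pi / N
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # generating rotation
psi = np.array([1.0, 0.0])
states = [np.linalg.matrix_power(U, k) @ psi for k in range(N)]

# Average state and its inverse square root via eigendecomposition
rho = sum(np.outer(s, s) for s in states) / N
w, V = np.linalg.eigh(rho)
rho_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# SRM POVM elements and the resulting success probability
povm = [rho_inv_sqrt @ (np.outer(s, s) / N) @ rho_inv_sqrt for s in states]
p_success = sum(states[k] @ povm[k] @ states[k] / N for k in range(N))
print(round(p_success, 4))  # → 0.6667, the known optimum 2/3 for the trine set
```

The POVM elements sum to the identity by construction, and for this symmetric pure-state ensemble the SRM attains the optimal success probability, consistent with the abstract's claim.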
Weather-centric rangeland revegetation planning
Hardegree, Stuart P.; Abatzoglou, John T.; Brunson, Mark W.; Germino, Matthew; Hegewisch, Katherine C.; Moffet, Corey A.; Pilliod, David S.; Roundy, Bruce A.; Boehm, Alex R.; Meredith, Gwendwr R.
2018-01-01
Invasive annual weeds negatively impact ecosystem services and pose a major conservation threat on semiarid rangelands throughout the western United States. Rehabilitation of these rangelands is challenging due to interannual climate and subseasonal weather variability that impacts seed germination, seedling survival and establishment, annual weed dynamics, wildfire frequency, and soil stability. Rehabilitation and restoration outcomes could be improved by adopting a weather-centric approach that uses the full spectrum of available site-specific weather information from historical observations, seasonal climate forecasts, and climate-change projections. Climate data can be used retrospectively to interpret success or failure of past seedings by describing seasonal and longer-term patterns of environmental variability subsequent to planting. A more detailed evaluation of weather impacts on site conditions may yield more flexible adaptive-management strategies for rangeland restoration and rehabilitation, as well as provide estimates of transition probabilities between desirable and undesirable vegetation states. Skillful seasonal climate forecasts could greatly improve the cost efficiency of management treatments by limiting revegetation activities to time periods where forecasts suggest higher probabilities of successful seedling establishment. Climate-change projections are key to the application of current environmental models for development of mitigation and adaptation strategies and for management practices that require a multidecadal planning horizon. Adoption of new weather technology will require collaboration between land managers and revegetation specialists and modifications to the way we currently plan and conduct rangeland rehabilitation and restoration in the Intermountain West.
Brief state-of-the-art review on optical communications for the NASA ISES workshop
NASA Technical Reports Server (NTRS)
Hendricks, Herbert D.
1990-01-01
The current state of the art of optical communications is briefly reviewed. This review covers NASA programs, DOD and other government agency programs, commercial aerospace programs, and foreign programs. Included is a brief summary of a recent NASA workshop on optical communications. The basic conclusion from all the program reviews is that optical communications is a technology ready to be accepted, but one that still needs to be demonstrated. Probably the most advanced and sophisticated optical communications system is the Laser Intersatellite Transmission Experiment (LITE) system developed for flight on the Advanced Communications Technology Satellite (ACTS). Optical communications technology is available for data communications at rates under 300 Mbit/s for nearly all applications at under 2 times GEO distances. Applications for low-earth orbit (LEO) to ground will allow data rates in the multi-Gbit/s range. Higher data rates are limited by currently available laser power. Phased-array lasers offer technology which should eliminate this problem. The major problem of cloud coverage can probably be eliminated by look-ahead pointing, multiple ground stations, and knowledge of weather conditions to control the pointing. Most certainly, optical communications offers a new spectral region to relieve the RF bands and the very high data communications rates that will be required in less than 10 years to solve communications problems on Earth.
Ion-photon entanglement and quantum frequency conversion with trapped Ba+ ions.
Siverns, J D; Li, X; Quraishi, Q
2017-01-20
Trapped ions are excellent candidates for quantum nodes, as they possess many desirable features of a network node including long lifetimes, on-site processing capability, and production of photonic flying qubits. However, unlike classical networks in which data may be transmitted in optical fibers and where the range of communication is readily extended with amplifiers, quantum systems often emit photons that have a limited propagation range in optical fibers and, by virtue of the nature of a quantum state, cannot be noiselessly amplified. Here, we first describe a method to extract flying qubits from a Ba+ trapped ion via shelving to a long-lived, low-lying D-state with higher entanglement probabilities compared with current strong and weak excitation methods. We show a projected fidelity of ≈89% of the ion-photon entanglement. We compare several methods of ion-photon entanglement generation, and we show how the fidelity and entanglement probability vary as a function of the photon collection optic's numerical aperture. We then outline an approach for quantum frequency conversion of the photons emitted by the Ba+ ion to the telecommunication range for long-distance networking and to 780 nm for potential entanglement with rubidium-based quantum memories. Our approach is significant for extending the range of quantum networks and for the development of hybrid quantum networks composed of different types of quantum memories.
Cognitive functioning in centenarians: a coordinated analysis of results from three countries.
Hagberg, B; Bauer Alfredson, B; Poon, L W; Homma, A
2001-05-01
Cognitive functions among centenarians in Japan, Sweden, and the United States are described. Three areas are explored. First, definitions and prevalence of dementia are compared between Japan and Sweden. Second, levels of cognitive performance between centenarians and younger age groups are presented. Third, interindividual variations in cognitive performance in centenarians and younger persons are compared in Sweden and the United States. The Swedish and Japanese studies show a variation in prevalence of dementia between 40% and 63%, with a relatively higher prevalence among women. Part of the variance is probably due to differences in sampling and criteria of dementia. Along with the lower cognitive performance in centenarians, compared with younger age groups, the Swedish and U.S. results show a wider range of performance among centenarians for those semantic or experientially related abilities that tend to be maintained over the adult life span. In contrast, a smaller range of performance is found for centenarians on those fluid or process-related abilities that have shown a downward age-related trajectory of performance. Lower variability is probably due to centenarians reaching the lower performance limit. The conclusions agree with the assumption of a general increase in cognitive differentiation with increasing age, primarily in measures of crystallized intelligence. The conclusions point to the general robustness of results across countries, as well as to the relative importance of cognition for longevity.
Drivers and rates of stock assessments in the United States
Thorson, James T.; Melnychuk, Michael C.; Methot, Richard; Blackhart, Kristan
2018-01-01
Fisheries management is most effective when based on scientific estimates of sustainable fishing rates. While some simple approaches allow estimation of harvest limits, more data-intensive stock assessments are generally required to evaluate the stock’s biomass and fishing rates relative to sustainable levels. Here we evaluate how stock characteristics relate to the rate of new assessments in the United States. Using a statistical model based on time-to-event analysis and 569 coastal marine fish and invertebrate stocks landed in commercial fisheries, we quantify the impact of region, habitat, life-history, and economic factors on the annual probability of being assessed. Although the majority of landings come from assessed stocks in all regions, less than half of the regionally-landed species currently have been assessed. As expected, our time-to-event model identified landed tonnage and ex-vessel price as the dominant factors determining increased rates of new assessments. However, we also found that after controlling for landings and price, there has been a consistent bias towards assessing larger-bodied species. A number of vulnerable groups such as rockfishes (Scorpaeniformes) and groundsharks (Carcharhiniformes) have a relatively high annual probability of being assessed after controlling for their relatively small tonnage and low price. Due to relatively low landed tonnage and price of species that are currently unassessed, our model suggests that the number of assessed stocks will increase more slowly in future decades. PMID:29750789
Fabric and connectivity as field descriptors for deformations in granular media
NASA Astrophysics Data System (ADS)
Wan, Richard; Pouragha, Mehdi
2015-01-01
Granular materials involve microphysics across various scales, giving rise to distinct behaviours of geomaterials, such as steady states, plastic limit states, non-associativity of plastic flow and yield, as well as instability of homogeneous deformations through strain localization. Incorporating such micro-scale characteristics is one of the biggest challenges in the constitutive modelling of granular materials, especially when micro-variables may be interdependent. With this motivation, we use two micro-variables, coordination number and fabric anisotropy, computed from a tessellation of the granular material to describe its state at the macroscopic level. In order to capture functional dependencies between micro-variables, the correlation between coordination number and fabric anisotropy limits is herein formulated at the particle level rather than in an average sense. This is the essence of the proposed work, which investigates the evolution of the coordination number (connectivity) distribution and the contact-normal anisotropy distribution with deformation history, and their inter-dependencies, through discrete element modelling in two dimensions. These results enter as probability distribution functions into homogenization expressions during upscaling to a continuum constitutive model using tessellation as an abstract representation of the granular system. The end product is a micro-mechanically inspired continuum model with both coordination number and fabric anisotropy as underlying micro-variables incorporated into a plasticity flow rule. The derived plastic potential bears a striking resemblance to cam-clay or stress-dilatancy-type yield surfaces used in soil mechanics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, G.A.; Alderfer, R.J.
In response to a request from the Ohio Department of Health, an investigation was undertaken of possible hazardous working conditions at One Government Center, a modern 22-story municipal office building located in downtown Toledo, Ohio. Employees reported fatigue, nausea, headache, and other effects perhaps linked to poor indoor air quality. The building housed offices for the city of Toledo, the county, and the state of Ohio. Questionnaires were administered to workers, and air quality measurements were made on floors 15 through 22. For the most part the concentration of carbon dioxide (124389) was below the acceptable limit (1000 parts per million), with two exceptions which probably reflected a higher occupancy level and more extensive use of office partitions. Temperature and humidity levels measured were all within the acceptable limits. Respirable particulate levels in a smoking lounge located on the seventeenth floor were 454 micrograms/cubic meter and exceeded the recommended limit of 150 micrograms/cubic meter. The authors conclude that the indoor air quality parameters were within acceptable limits in most of the areas. The authors recommend that the existing smoking policy be modified, and that the number of employees in specific areas be reduced or the ventilation in these areas be increased.
The limits of weak selection and large population size in evolutionary game theory.
Sample, Christine; Allen, Benjamin
2017-11-01
Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the [Formula: see text] limit, in which weak selection is applied before the large population limit, and the [Formula: see text] limit, in which the order is reversed. Formal mathematical definitions of the [Formula: see text] and [Formula: see text] limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the [Formula: see text] and [Formula: see text] limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
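The fixation probability discussed above can be computed from the ratios of backward to forward transition probabilities in the Moran process. A minimal sketch for the constant-fitness case (simpler than the paper's game-theoretic setting, where the ratios depend on the game), checked against the standard closed form:

```python
import numpy as np

def fixation_prob(N, r):
    """Fixation probability of one mutant of relative fitness r in a
    population of size N, via rho = 1 / (1 + sum_k prod_{i<=k} gamma_i),
    where gamma_i = T^-_i / T^+_i (= 1/r for constant fitness)."""
    gammas = np.full(N - 1, 1.0 / r)
    return 1.0 / (1.0 + np.cumprod(gammas).sum())

N, r = 50, 1.1
rho = fixation_prob(N, r)
closed_form = (1 - 1 / r) / (1 - r ** -N)
print(rho, closed_form)        # the two agree
print(rho > 1 / N)             # selection favors the mutant vs. neutral drift
```

Under weak selection (r close to 1) the same formula approaches the neutral value 1/N, which is the regime whose interplay with large N the paper formalizes.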
On the delay analysis of a TDMA channel with finite buffer capacity
NASA Technical Reports Server (NTRS)
Yan, T.-Y.
1982-01-01
The throughput performance of a TDMA channel with finite buffer capacity for transmitting data messages is considered. Each station has limited message buffer capacity and has Poisson message arrivals. Message arrivals will be blocked if the buffers are congested. Using the embedded Markov chain model, the solution procedure for the limiting system-size probabilities is presented in a recursive fashion. Numerical examples are given to demonstrate the tradeoffs between the blocking probabilities and the buffer sizing strategy.
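An illustrative embedded-chain sketch in the same spirit (not the paper's exact TDMA model or its recursive procedure): a slotted queue with buffer size K, at most one departure per slot, Poisson arrivals with mean lam per slot, and arrivals beyond the buffer blocked. The limiting system-size probabilities come from the stationary distribution of the transition matrix.

```python
import numpy as np
from math import exp, factorial

def stationary(K, lam):
    # Poisson arrival probabilities for 0..K arrivals in one slot
    a = [exp(-lam) * lam ** j / factorial(j) for j in range(K + 1)]
    P = np.zeros((K + 1, K + 1))
    for i in range(K + 1):
        base = max(i - 1, 0)            # one message served per slot
        for j in range(base, K + 1):
            if j < K:
                P[i, j] = a[j - base]
            else:                       # buffer full: excess arrivals blocked
                P[i, j] = 1.0 - sum(a[:K - base])
    # Stationary distribution = left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = stationary(K=8, lam=0.8)
print(pi[-1])   # limiting probability that the buffer is full
```

Growing K drives the full-buffer probability down at the cost of delay, which is the blocking-versus-buffer-sizing tradeoff the paper's numerical examples explore.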
Probability Forecasting Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Duncan, M.; Frisbee, J.; Wysack, J.
2014-09-01
Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial.
This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
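The core Monte Carlo step can be sketched in a few lines: sample relative positions at closest approach from the combined state uncertainty and count the fraction falling inside the combined hard-body radius. The geometry and covariance below are invented for illustration, not from any real conjunction event.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_miss = np.array([120.0, 40.0])    # encounter-plane miss vector, metres
cov = np.array([[90.0 ** 2, 0.0],      # combined position covariance
                [0.0, 35.0 ** 2]])
hard_body_radius = 20.0                # combined object radius, metres

# Each sample is one possible relative geometry at close approach
samples = rng.multivariate_normal(mean_miss, cov, size=200_000)
p_collision = (np.linalg.norm(samples, axis=1) < hard_body_radius).mean()
print(p_collision)
```

Repeating this as the state estimate and covariance shrink toward the event produces the evolving probability whose forecast the paper constructs.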
NASA Astrophysics Data System (ADS)
Smith, L. A.
2007-12-01
We question the relevance of climate-model based Bayesian (or other) probability statements for decision support and impact assessment on spatial scales less than continental and temporal averages less than seasonal. Scientific assessment of higher resolution space and time scale information is urgently needed, given the commercial availability of "products" at high spatiotemporal resolution, their provision by nationally funded agencies for use both in industry decision making and governmental policy support, and their presentation to the public as matters of fact. Specifically we seek to establish necessary conditions for probability forecasts (projections conditioned on a model structure and a forcing scenario) to be taken seriously as reflecting the probability of future real-world events. We illustrate how risk management can profitably employ imperfect models of complicated chaotic systems, following NASA's study of near-Earth PHOs (Potentially Hazardous Objects). Our climate models will never be perfect, nevertheless the space and time scales on which they provide decision-support-relevant information is expected to improve with the models themselves. Our aim is to establish a set of baselines of internal consistency; these are merely necessary conditions (not sufficient conditions) that physics-based state-of-the-art models are expected to pass if their output is to be judged decision-support relevant. Probabilistic Similarity is proposed as one goal which can be obtained even when our models are not empirically adequate. In short, probabilistic similarity requires that, given inputs similar to today's empirical observations and observational uncertainties, we expect future models to produce similar forecast distributions.
Expert opinion on the space and time scales on which we might reasonably expect probabilistic similarity may prove of much greater utility than expert elicitation of uncertainty in parameter values in a model that is not empirically adequate; this may help to explain the reluctance of experts to provide information on "parameter uncertainty." Probability statements about the real world are always conditioned on some information set; they may well be conditioned on "False" making them of little value to a rational decision maker. In other instances, they may be conditioned on physical assumptions not held by any of the modellers whose model output is being cast as a probability distribution. Our models will improve a great deal in the next decades, and our insight into the likely climate fifty years hence will improve: maintaining the credibility of the science and the coherence of science based decision support, as our models improve, require a clear statement of our current limitations. What evidence do we have that today's state-of-the-art models provide decision-relevant probability forecasts? What space and time scales do we currently have quantitative, decision-relevant information on for 2050? 2080?
Weiss, Nicole H; Tull, Matthew T; Dixon-Gordon, Katherine L; Gratz, Kim L
2014-01-01
Although previous literature highlights the robust relationship between posttraumatic stress disorder (PTSD) and emotion dysregulation across diverse racial/ethnic populations, few studies have examined factors that may influence levels of emotion dysregulation among African American individuals with PTSD. The goal of the current study was to extend previous findings by examining the moderating role of gender in the relationship between PTSD and emotion dysregulation in an African American sample. Participants were 107 African American undergraduates enrolled in a historically black college in the southern United States who reported exposure to a Criterion A traumatic event. Participants with probable PTSD (vs. no PTSD) reported significantly greater emotion dysregulation, both overall and across many of the specific dimensions. Although the main effect of gender on emotion dysregulation was not statistically significant, results revealed a significant interaction between gender and probable PTSD status for overall emotion dysregulation and the specific dimensions of difficulties controlling impulsive behaviors when distressed, limited access to emotion regulation strategies perceived as effective, and lack of emotional clarity. Specifically, post-hoc analyses revealed a significant association between probable PTSD and heightened emotion dysregulation among African American women but not African American men, with African American women with probable PTSD reporting significantly higher levels of these dimensions of emotion dysregulation than all other groups. Findings highlight the relevance of emotion dysregulation to PTSD among African American women in particular, suggesting the importance of assessing and treating emotion dysregulation within this population. PMID:25392846
A comprehensive parameterization was developed for the heterogeneous reaction probability (γ) of N2O5 as a function of temperature, relative humidity, particle composition, and phase state, for use in advanced air quality models. The reaction probabilities o...
Anticipating abrupt shifts in temporal evolution of probability of eruption
NASA Astrophysics Data System (ADS)
Rohmer, J.; Loschetter, A.
2016-04-01
Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component of volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability value to a state of high probability value. Using data from the MESIMEX exercise at the Vesuvius volcano, we investigated the potential for time-varying indicators related to the correlation structure or to the variability of the probability time series to detect this critical transition in advance. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both present an abrupt increase, which marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to > 70% could be identified up to 1-3 h in advance. This additional lead time could be useful to place different key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means, etc.) on a higher level of alert before the actual call for evacuation.
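One of the indicators named above, the standard deviation over a rolling window, is straightforward to sketch. The time series below is synthetic with steadily growing variability, standing in for (not taken from) the MESIMEX probability series.

```python
import numpy as np

rng = np.random.default_rng(3)
n, window = 500, 50
# Synthetic probability-like series whose noise amplitude grows over time,
# mimicking rising variability ahead of a transition
series = 0.12 + rng.normal(0.0, 1.0, n) * np.linspace(0.01, 0.06, n)

rolling_sd = np.array([series[i - window:i].std()
                       for i in range(window, n + 1)])
print(rolling_sd[0], rolling_sd[-1])  # late-window variability is larger
```

A sustained rise in this indicator (or in low-frequency spectral power) is the kind of signal the authors use to anticipate the shift to high eruption probability.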
The estimated lifetime probability of acquiring human papillomavirus in the United States.
Chesson, Harrell W; Dunne, Eileen F; Hariri, Susan; Markowitz, Lauri E
2014-11-01
Estimates of the lifetime probability of acquiring human papillomavirus (HPV) can help to quantify HPV incidence, illustrate how common HPV infection is, and highlight the importance of HPV vaccination. We developed a simple model, based primarily on the distribution of lifetime numbers of sex partners across the population and the per-partnership probability of acquiring HPV, to estimate the lifetime probability of acquiring HPV in the United States in the time frame before HPV vaccine availability. We estimated the average lifetime probability of acquiring HPV among those with at least 1 opposite sex partner to be 84.6% (range, 53.6%-95.0%) for women and 91.3% (range, 69.5%-97.7%) for men. Under base case assumptions, more than 80% of women and men acquire HPV by age 45 years. Our results are consistent with estimates in the existing literature suggesting a high lifetime probability of HPV acquisition and are supported by cohort studies showing high cumulative HPV incidence over a relatively short period, such as 3 to 5 years.
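The model's core idea reduces to averaging 1 - (1 - p)^n over a distribution of lifetime partner numbers n, with per-partnership acquisition probability p. The distribution and p below are made up for illustration; the paper uses survey-based partner distributions and calibrated per-partnership probabilities.

```python
p = 0.4                                    # assumed per-partnership probability
partner_dist = {1: 0.25, 2: 0.15, 4: 0.30, 8: 0.20, 15: 0.10}  # hypothetical

# Probability of at least one acquisition, averaged over the distribution
lifetime_prob = sum(weight * (1 - (1 - p) ** n)
                    for n, weight in partner_dist.items())
print(round(lifetime_prob, 3))  # → 0.754
```

Even modest per-partnership probabilities drive the lifetime figure toward 1 once multi-partner histories dominate, which is why the paper's estimates exceed 80%.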
Magruder, J Trent; Blasco-Colmenares, Elena; Crawford, Todd; Alejo, Diane; Conte, John V; Salenger, Rawn; Fonner, Clifford E; Kwon, Christopher C; Bobbitt, Jennifer; Brown, James M; Nelson, Mark G; Horvath, Keith A; Whitman, Glenn R
2017-01-01
Variation in red blood cell (RBC) transfusion practices exists at cardiac surgery centers across the nation. We tested the hypothesis that significant variation in RBC transfusion practices between centers in our state's cardiac surgery quality collaborative remains even after risk adjustment. Using a multiinstitutional statewide database created by the Maryland Cardiac Surgery Quality Initiative (MCSQI), we included patient-level data from 8,141 patients undergoing isolated coronary artery bypass (CAB) or aortic valve replacement at 1 of 10 centers. Risk-adjusted multivariable logistic regression models were constructed to predict the need for any intraoperative RBC transfusion, as well as for any postoperative RBC transfusion, with anonymized center number included as a factor variable. Unadjusted intraoperative RBC transfusion probabilities at the 10 centers ranged from 13% to 60%; postoperative RBC transfusion probabilities ranged from 16% to 41%. After risk adjustment with demographic, comorbidity, and operative data, significant intercenter variability was documented (intraoperative probability range, 4%-59%; postoperative probability range, 13%-39%). When stratifying patients by preoperative hematocrit quartiles, significant variability in intraoperative transfusion probability was seen among all quartiles (lowest quartile: mean hematocrit value, 30.5% ± 4.1%, probability range, 17%-89%; highest quartile: mean hematocrit value, 44.8% ± 2.5%; probability range, 1%-35%). Significant variation in intercenter RBC transfusion practices exists for both intraoperative and postoperative transfusions, even after risk adjustment, among our state's centers. Variability in intraoperative RBC transfusion persisted across quartiles of preoperative hematocrit values. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Mental Health Diagnoses 3 Years After Receiving or Being Denied an Abortion in the United States.
Biggs, M Antonia; Neuhaus, John M; Foster, Diana G
2015-12-01
We set out to assess the occurrence of new depression and anxiety diagnoses in women 3 years after they sought an abortion. We conducted semiannual telephone interviews of 956 women who sought abortions from 30 US facilities. Adjusted multivariable discrete-time logistic survival models examined whether the study group (women who obtained abortions just under a facility's gestational age limit, who were denied abortions and carried to term, who were denied abortions and did not carry to term, and who received first-trimester abortions) predicted depression or anxiety onset during seven 6-month time intervals. The 3-year cumulative probability of professionally diagnosed depression was 9% to 14%; for anxiety it was 10% to 15%, with no study group differences. Women in the first-trimester group and women denied abortions who did not give birth had greater odds of new self-diagnosed anxiety than did women who obtained abortions just under facility gestational limits. Among women seeking abortions near facility gestational limits, those who obtained abortions were at no greater mental health risk than were women who carried an unwanted pregnancy to term.
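The discrete-time survival framework used above treats each 6-month interval as a Bernoulli trial: the hazard h_t is the probability of a first diagnosis in interval t given none before, and the cumulative probability by interval t is 1 minus the product of the interval survival terms. A minimal sketch, with hazards that are purely hypothetical (chosen only so the 3-year total lands near the 9%-14% range the study reports):

```python
# Hypothetical per-interval hazards for seven 6-month intervals.
hazards = [0.03, 0.02, 0.02, 0.015, 0.015, 0.01, 0.01]

# Cumulative probability of a first diagnosis by the end of interval t:
#   1 - prod_{s <= t} (1 - h_s)
surv = 1.0
for t, h in enumerate(hazards, 1):
    surv *= 1 - h
    print(f"interval {t}: cumulative probability {1 - surv:.3f}")
```

In the study itself, each hazard is modeled by a logistic regression on person-period data, with study group as a predictor.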
The (virtual) conceptual necessity of quantum probabilities in cognitive psychology.
Blutner, Reinhard; beim Graben, Peter
2013-06-01
We propose a way in which Pothos & Busemeyer (P&B) could strengthen their position. Taking a dynamic stance, we consider cognitive tests as functions that transfer a given input state into the state after testing. Under very general conditions, it can be shown that testable properties in cognition form an orthomodular lattice. Gleason's theorem then yields the conceptual necessity of quantum probabilities (QP).
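The quantum probabilities that Gleason's theorem singles out are exactly the Born-rule values p(P) = Tr(ρP) for a density operator ρ and projector P, on Hilbert spaces of dimension at least 3. A minimal numerical illustration (the state is an arbitrary example):

```python
import numpy as np

# A pure-state density operator on a 3-dimensional Hilbert space
# (dimension >= 3 is the regime where Gleason's theorem applies).
psi = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Probabilities of the projectors onto an orthonormal basis, via the
# trace rule p(P) = Tr(rho P); they must sum to 1.
basis = np.eye(3)
probs = [np.real(np.trace(rho @ np.outer(e, e))) for e in basis]
print(np.round(probs, 3))
```

Gleason's theorem runs this construction in reverse: any probability assignment on the lattice of projectors satisfying additivity on orthogonal families must arise from some ρ in this way.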
Spatio-Temporal Pattern Recognition Using Hidden Markov Models
1994-06-01
[Abstract garbled in extraction. Recoverable fragments: a bibliography entry (H. B. Barlow and W. R. Levick, "The mechanism of directionally selective units in rabbit's retina," Journal of Physiology (London), 178:477), appendix headings on Baum-Welch re-estimation, and a symbol list defining the transition probability from state i to state j and B, the observation probability matrix.]
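The symbols surviving in the fragment above are the standard hidden Markov model parameters: a transition matrix A (entries a_ij), an observation probability matrix B, and (implicitly) an initial distribution π, which Baum-Welch re-estimates. A minimal sketch of the forward algorithm over these parameters, with a hypothetical 2-state, 2-symbol model:

```python
import numpy as np

# Hypothetical HMM parameters: A is the state transition matrix (a_ij),
# B the observation probability matrix, pi the initial distribution.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # state 0 mostly emits symbol 0
              [0.2, 0.8]])  # state 1 mostly emits symbol 1
pi = np.array([0.5, 0.5])

def forward(obs):
    """Forward algorithm: likelihood of an observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 0, 1]))
```

Baum-Welch wraps this forward pass (and a matching backward pass) in an EM loop that re-estimates A and B from expected state-transition counts.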
Allocating Fire Mitigation Funds on the Basis of the Predicted Probabilities of Forest Wildfire
Ronald E. McRoberts; Greg C. Liknes; Mark D. Nelson; Krista M. Gebert; R. James Barbour; Susan L. Odell; Steven C. Yaddof
2005-01-01
A logistic regression model was used with map-based information to predict the probability of forest fire for forested areas of the United States. Model parameters were estimated using a digital layer depicting the locations of wildfires and satellite imagery depicting thermal hotspots. The area of the United States in the upper 50th percentile with respect to...
Moving Out: Transition to Non-Residence among Resident Fathers in the United States, 1968-1997
ERIC Educational Resources Information Center
Gupta, Sanjiv; Smock, Pamela J.; Manning, Wendy D.
2004-01-01
This article provides the first individual-level estimates of the change over time in the probability of non-residence for initially resident fathers in the United States. Drawing on the 1968-1997 waves of the Panel Study of Income Dynamics, we used discrete-time event history models to compute the probabilities of non-residence for six 5-year…