Sample records for probability bounds analysis

  1. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
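
    A minimal sketch of the basic p-box reading of a failure-probability interval (not the Bernstein-expansion optimization of the paper); the normal-family envelopes and the critical level are illustrative assumptions:

    ```python
    # Minimal sketch: bounds on P[Y > y_crit] read directly off a p-box, i.e. off
    # lower/upper CDF envelopes for a scalar response Y. Illustrative assumption:
    # a normal family whose mean is only known to lie in [4.0, 5.0], sd = 1.
    from scipy.stats import norm

    def failure_probability_interval(y_crit, cdf_lower, cdf_upper):
        """P[Y > y_crit] lies in [1 - F_upper(y_crit), 1 - F_lower(y_crit)]."""
        return 1.0 - cdf_upper(y_crit), 1.0 - cdf_lower(y_crit)

    cdf_upper = lambda y: norm.cdf(y, loc=4.0, scale=1.0)   # upper CDF envelope
    cdf_lower = lambda y: norm.cdf(y, loc=5.0, scale=1.0)   # lower CDF envelope
    print(failure_probability_interval(6.0, cdf_lower, cdf_upper))  # ~(0.023, 0.159)
    ```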

  2. Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos

    2015-05-01

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
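
    A minimal illustration of one canonical ingredient of such bounding approaches: the Fréchet-Hoeffding bounds on a joint probability when only the marginals are known and the dependence is not (the marginal values are illustrative):

    ```python
    # Fréchet-Hoeffding bounds: with F_X(x) = a and F_Y(y) = b known but the
    # dependence between X and Y unknown, P[X <= x, Y <= y] can only be bracketed.
    def frechet_bounds(a, b):
        lower = max(a + b - 1.0, 0.0)   # attained by countermonotonic dependence
        upper = min(a, b)               # attained by comonotonic dependence
        return lower, upper

    print(frechet_bounds(0.7, 0.6))   # -> (0.3, 0.6)
    ```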

  3. Probability bounds analysis for nonlinear population ecology models.

    PubMed

    Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A

    2015-09-01

Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds through nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
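
    A minimal sketch of generic probability-bounds propagation through a monotone model by slicing a p-box into Dempster-Shafer focal intervals; the exponential growth model and the normal-family p-box on the rate are illustrative assumptions, not the method or the food-web model of the paper:

    ```python
    # Slice a scalar p-box into m focal intervals of mass 1/m, push each interval
    # through a monotone model, and accumulate belief/plausibility of an outcome.
    import numpy as np
    from scipy.stats import norm

    def pbox_slices(q_lower_env, q_upper_env, m=200):
        """Focal intervals [lo_i, hi_i], each with mass 1/m, for a scalar p-box."""
        ps = np.linspace(0.0, 1.0, m + 1)
        lo = q_upper_env(ps[:-1])   # quantiles of the upper CDF envelope (smaller)
        hi = q_lower_env(ps[1:])    # quantiles of the lower CDF envelope (larger)
        return lo, hi

    def outcome_probability_bounds(lo, hi, model, y_crit):
        """Bounds on P[model(r) > y_crit] for a model monotone increasing in r."""
        y_lo, y_hi = model(lo), model(hi)
        belief = np.mean(y_lo > y_crit)         # intervals surely above y_crit
        plausibility = np.mean(y_hi > y_crit)   # intervals possibly above y_crit
        return belief, plausibility

    # Illustrative p-box on a growth rate r: normal, mean in [0.08, 0.12], sd 0.02.
    clip = lambda p: np.clip(p, 1e-9, 1.0 - 1e-9)
    q_upper_env = lambda p: norm.ppf(clip(p), loc=0.08, scale=0.02)
    q_lower_env = lambda p: norm.ppf(clip(p), loc=0.12, scale=0.02)
    model = lambda r: 50.0 * np.exp(r * 10.0)   # N(t) = N0 exp(r t) with t = 10
    lo, hi = pbox_slices(q_lower_env, q_upper_env)
    print(outcome_probability_bounds(lo, hi, model, y_crit=120.0))
    ```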

  4. Clopper-Pearson bounds from HEP data cuts

    NASA Astrophysics Data System (ADS)

    Berg, B. A.

    2001-08-01

For the measurement of N_s signals in N events, rigorous confidence bounds on the true signal probability p_exact were established in a classical paper by Clopper and Pearson [Biometrika 26, 404 (1934)]. Here, their bounds are generalized to the HEP situation where cuts on the data tag signals with probability P_s and background data with likelihood P_b
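
    A minimal sketch of the classical Clopper-Pearson interval itself, computed from beta-distribution quantiles (the counts and confidence level are illustrative; this is the textbook binomial case, not the cut-efficiency generalization of the paper):

    ```python
    # Exact (Clopper-Pearson) confidence bounds on a signal probability p from
    # N_s observed signals in N events.
    from scipy.stats import beta

    def clopper_pearson(Ns, N, conf=0.95):
        alpha = 1.0 - conf
        lower = 0.0 if Ns == 0 else beta.ppf(alpha / 2.0, Ns, N - Ns + 1)
        upper = 1.0 if Ns == N else beta.ppf(1.0 - alpha / 2.0, Ns + 1, N - Ns)
        return lower, upper

    print(clopper_pearson(Ns=7, N=100))   # roughly (0.029, 0.139)
    ```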

  5. Sensitivity analysis of limit state functions for probability-based plastic design

    NASA Technical Reports Server (NTRS)

    Frangopol, D. M.

    1984-01-01

The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure with random, interdependent plastic moments acted on by random, interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds on this probability requires the use of second-moment algebra involving many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
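
    A minimal numeric sketch of the elementary bracket often used as a starting point in structural system reliability: for a series system of collapse modes, max_i P_i <= P_f <= min(1, sum_i P_i) regardless of mode correlation (the mode probabilities are illustrative; the program described above seeks a much tighter interval):

    ```python
    # Elementary ("simple") bounds on the collapse probability P_f of a series
    # system of failure modes, valid for any dependence among the modes.
    def simple_series_bounds(mode_probs):
        return max(mode_probs), min(1.0, sum(mode_probs))

    print(simple_series_bounds([2e-4, 5e-4, 1e-4]))   # -> (0.0005, 0.0008)
    ```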

  6. Photoexcited escape probability, optical gain, and noise in quantum well infrared photodetectors

    NASA Technical Reports Server (NTRS)

    Levine, B. F.; Zussman, A.; Gunapala, S. D.; Asom, M. T.; Kuo, J. M.; Hobson, W. S.

    1992-01-01

We present a detailed and thorough study of a wide variety of quantum well infrared photodetectors (QWIPs), which were chosen to have large differences in their optical and transport properties. Both n- and p-doped QWIPs, as well as intersubband transitions based on photoexcitation from bound-to-bound, bound-to-quasi-continuum, and bound-to-continuum quantum well states, were investigated. The measurements and theoretical analysis included optical absorption, responsivity, dark current, current noise, optical gain, hot carrier mean free path, net quantum efficiency, quantum well escape probability, quantum well escape time, as well as detectivity. These results allow a better understanding of the optical and transport physics and thus a better optimization of the QWIP performance.

  7. SURE reliability analysis: Program and mathematics

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; White, Allan L.

    1988-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  8. Qualitative fusion technique based on information poor system and its application to factor analysis for vibration of rolling bearings

    NASA Astrophysics Data System (ADS)

    Xia, Xintao; Wang, Zhongyu

    2008-10-01

    For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of unknown probability distribution and small sample. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of probability distribution, and is useful for small sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound and the supper bound of the system using fuzzy-set theory. Then empirical distribution function is investigated to ensure confidence level above 95%, and the degree of similarity is presented to evaluate stability of the system. Cases of computer simulation investigate stable systems with various probability distribution, unstable systems with linear systematic errors and periodic systematic errors and some mixed systems. The method of analysis for systematic stability is approved.

  9. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  10. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.

  11. Comonotonic bounds on the survival probabilities in the Lee-Carter model for mortality projection

    NASA Astrophysics Data System (ADS)

    Denuit, Michel; Dhaene, Jan

    2007-06-01

    In the Lee-Carter framework, future survival probabilities are random variables with an intricate distribution function. In large homogeneous portfolios of life annuities, value-at-risk or conditional tail expectation of the total yearly payout of the company are approximately equal to the corresponding quantities involving random survival probabilities. This paper aims to derive some bounds in the increasing convex (or stop-loss) sense on these random survival probabilities. These bounds are obtained with the help of comonotonic upper and lower bounds on sums of correlated random variables.
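
    A minimal sketch of the comonotonic device behind such bounds: under comonotonicity the quantile of a sum is the sum of the marginal quantiles, which yields a conservative (convex-order) upper bound for a sum of dependent variables. The lognormal marginals below are illustrative, not the Lee-Carter survival probabilities:

    ```python
    # Quantile of the comonotonic upper bound S^c = sum_i F_i^{-1}(U) for a sum
    # of dependent random variables with known marginals.
    import numpy as np
    from scipy.stats import lognorm

    marginals = [lognorm(s=0.4, scale=np.exp(mu)) for mu in (0.0, 0.2, 0.4)]

    def comonotonic_quantile(p, marginals):
        return sum(m.ppf(p) for m in marginals)

    print(comonotonic_quantile(0.99, marginals))   # 99% quantile of the bound
    ```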

  12. Rapidly assessing the probability of exceptionally high natural hazard losses

    NASA Astrophysics Data System (ADS)

    Gollini, Isabella; Rougier, Jonathan

    2014-05-01

One of the objectives in catastrophe modeling is to assess the probability distribution of losses for a specified period, such as a year. From the point of view of an insurance company, the whole of the loss distribution is interesting, and valuable in determining insurance premiums. But the shape of the right-hand tail is critical, because it impinges on the solvency of the company. A simple measure of the risk of insolvency is the probability that the annual loss will exceed the company's current operating capital. Imposing an upper limit on this probability is one of the objectives of the EU Solvency II directive. If a probabilistic model is supplied for the loss process, then this tail probability can be computed, either directly, or by simulation. This can be a lengthy calculation for complex losses. Given the inevitably subjective nature of quantifying loss distributions, computational resources might be better used in a sensitivity analysis. This requires either a quick approximation to the tail probability or an upper bound on the probability, ideally a tight one. We present several different bounds, all of which can be computed nearly instantly from a very general event loss table. We provide a numerical illustration, and discuss the conditions under which the bound is tight. Although we consider the perspective of insurance and reinsurance companies, exactly the same issues concern the risk manager, who is typically very sensitive to large losses.
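
    Two quick, distribution-free tail bounds of the kind such rapid assessments can fall back on, computed from only the mean and variance of the annual loss (the numbers are illustrative; the paper's bounds are computed from the full event loss table):

    ```python
    # Markov and Cantelli (one-sided Chebyshev) upper bounds on the insolvency
    # probability P[L > c] for a non-negative annual loss L with given mean/var.
    def markov_bound(mean, c):
        return min(1.0, mean / c)

    def cantelli_bound(mean, var, c):
        # valid for c > mean
        return var / (var + (c - mean) ** 2)

    mean, var, capital = 10.0, 25.0, 40.0
    print(markov_bound(mean, capital), cantelli_bound(mean, var, capital))
    ```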

  13. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
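
    A minimal sketch of the random coding bound being discussed, specialized to a binary symmetric channel with uniform inputs via Gallager's E_0 function; the crossover probability, rate and block length are illustrative assumptions:

    ```python
    # Random coding bound P_e <= 2^(-N * E_r(R)) with
    # E_r(R) = max_{0<=rho<=1} [E_0(rho) - rho R] for a BSC with uniform inputs.
    import numpy as np

    def E0_bsc(rho, p):
        s = 1.0 / (1.0 + rho)
        return rho - (1.0 + rho) * np.log2(p ** s + (1.0 - p) ** s)

    def random_coding_exponent(R, p):
        rhos = np.linspace(0.0, 1.0, 1001)
        return max(E0_bsc(rho, p) - rho * R for rho in rhos)

    p, R, N = 0.02, 0.5, 1000
    Er = random_coding_exponent(R, p)
    print(Er, 2.0 ** (-N * Er))   # exponent and the resulting bound on P_e
    ```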

  14. The SURE reliability analysis program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  15. Soft-decision decoding techniques for linear block codes and their error performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1996-01-01

The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC). The bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns itself with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.

  16. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use is considered of massively parallel architectures to execute discrete-event simulations of what is termed self-initiating models. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance is considered of various synchronization protocols by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  17. Frequency analysis of uncertain structures using imprecise probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modares, Mehdi; Bergerson, Joshua

    2015-01-01

Two new methods for finite element based frequency analysis of a structure with uncertainty are developed. An imprecise probability formulation based on enveloping p-boxes is used to quantify the uncertainty present in the mechanical characteristics of the structure. For each element, independent variations are considered. Using the two developed methods, P-box Frequency Analysis (PFA) and Interval Monte-Carlo Frequency Analysis (IMFA), sharp bounds on natural circular frequencies at different probability levels are obtained. These methods establish a framework for handling incomplete information in structural dynamics. Numerical example problems are presented that illustrate the capabilities of the new methods along with discussions on their computational efficiency.

  18. Dynamic Studies of Struve Double Stars: STF4 and STF 236AB Appear Gravitationally Bound

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.; Rica, F. M.

    2015-01-01

    Dynamics of two Struve double stars, WDS 00099+0827 (STF 4) and WDS 02556+2652 (STF 326 AB) are analyzed using astrometric criteria to determine their natures as gravitationally bound or unbound systems. If gravitationally bound, then observed relative velocity will be within limits according to the orbital energy conservation equation. Full implementation of this criterion was possible because the relative radial velocities as well as proper motions have been estimated. Other physical parameters were taken from literature or estimated using published protocols. Monte Carlo analysis indicates that both pairs have a high probability of being gravitationally bound and thus are long-period binaries.

  19. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    NASA Astrophysics Data System (ADS)

    Ngampitipan, Tritos; Boonserm, Petarpa; Chatrabhuti, Auttakit; Visser, Matt

    2016-06-01

Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes can successfully be generated. The generated black holes are, most commonly, Myers-Perry black holes. In this paper, we derive the rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy, the rigorous bounds increase with the increase in the energy of the emitted particles. However, for high energy, the rigorous bounds decrease with the increase in the energy of the emitted particles. When the black holes spin faster, the rigorous bounds decrease. For dimension dependence, the rigorous bounds also decrease with the increase in the number of extra dimensions. Furthermore, in comparison to the approximate transmission probability, the rigorous bound proves to be useful.

  20. The rigorous bound on the transmission probability for massless scalar field of non-negative-angular-momentum mode emitted from a Myers-Perry black hole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngampitipan, Tritos, E-mail: tritos.ngampitipan@gmail.com; Particle Physics Research Laboratory, Department of Physics, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok 10330; Boonserm, Petarpa, E-mail: petarpa.boonserm@gmail.com

Hawking radiation is evidence for the existence of black holes. What an observer can measure through Hawking radiation is the transmission probability. In the laboratory, miniature black holes can successfully be generated. The generated black holes are, most commonly, Myers-Perry black holes. In this paper, we derive the rigorous bounds on the transmission probabilities for massless scalar fields of non-negative-angular-momentum modes emitted from a generated Myers-Perry black hole in six, seven, and eight dimensions. The results show that for low energy, the rigorous bounds increase with the increase in the energy of the emitted particles. However, for high energy, the rigorous bounds decrease with the increase in the energy of the emitted particles. When the black holes spin faster, the rigorous bounds decrease. For dimension dependence, the rigorous bounds also decrease with the increase in the number of extra dimensions. Furthermore, in comparison to the approximate transmission probability, the rigorous bound proves to be useful.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oberkampf, William Louis; Tucker, W. Troy; Zhang, Jianzhong

    This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.

  2. A Computational Framework to Control Verification and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2010-01-01

    This paper presents a methodology for evaluating the robustness of a controller based on its ability to satisfy the design requirements. The framework proposed is generic since it allows for high-fidelity models, arbitrary control structures and arbitrary functional dependencies between the requirements and the uncertain parameters. The cornerstone of this contribution is the ability to bound the region of the uncertain parameter space where the degradation in closed-loop performance remains acceptable. The size of this bounding set, whose geometry can be prescribed according to deterministic or probabilistic uncertainty models, is a measure of robustness. The robustness metrics proposed herein are the parametric safety margin, the reliability index, the failure probability and upper bounds to this probability. The performance observed at the control verification setting, where the assumptions and approximations used for control design may no longer hold, will fully determine the proposed control assessment.

  3. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
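
    A minimal sketch of the FMED shape the paper discusses: subject to fractile constraints alone, the maximum entropy density is piecewise uniform, so it is flat on each fractile interval and discontinuous at the fractiles (the fractile locations below are illustrative):

    ```python
    # Piecewise-uniform maximum-entropy density under fractile constraints:
    # density on each interval = (probability mass of interval) / (interval width).
    import numpy as np

    xs = np.array([0.0, 2.0, 3.0, 6.0, 10.0])    # fractile locations (incl. support ends)
    ps = np.array([0.0, 0.25, 0.50, 0.75, 1.0])  # cumulative probabilities there

    def maxent_fractile_pdf(x):
        i = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(xs) - 2)
        return (ps[i + 1] - ps[i]) / (xs[i + 1] - xs[i])

    print([maxent_fractile_pdf(x) for x in (1.0, 2.5, 8.0)])  # [0.125, 0.25, 0.0625]
    ```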

  4. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com

    2014-10-01

We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ_1, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ_1, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences.

  5. High-Density Signal Interface Electromagnetic Radiation Prediction for Electromagnetic Compatibility Evaluation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halligan, Matthew

Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
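
    A minimal sketch of the two-state Markov chain ingredient mentioned above: the stationary bit-state probabilities follow directly from the two transition probabilities (the values are illustrative, not the interface characterization in the report):

    ```python
    # Stationary distribution of a two-state (0/1) Markov chain bit model.
    def stationary_bit_probs(p01, p10):
        """p01 = P(next bit is 1 | current 0); p10 = P(next bit is 0 | current 1)."""
        pi1 = p01 / (p01 + p10)
        return 1.0 - pi1, pi1   # (P(bit = 0), P(bit = 1))

    print(stationary_bit_probs(0.3, 0.1))   # -> (0.25, 0.75)
    ```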

  6. Atmospheric, Long Baseline, and Reactor Neutrino Data Constraints on θ13

    NASA Astrophysics Data System (ADS)

    Roa, J. E.; Latimer, D. C.; Ernst, D. J.

    2009-08-01

An atmospheric neutrino oscillation tool that uses full three-neutrino oscillation probabilities and a full three-neutrino treatment of the Mikheyev-Smirnov-Wolfenstein effect, together with an analysis of the K2K, MINOS, and CHOOZ data, is used to examine the bounds on θ13. The recent, more finely binned, Super-K atmospheric data are employed. For L/E_ν ≳ 10^4 km/GeV, we previously found significant linear-in-θ13 terms. This analysis finds θ13 bounded from above by the atmospheric data while bounded from below by CHOOZ. The origin of this result arises from data in the previously mentioned very long baseline region; here, matter effects conspire with terms linear in θ13 to produce asymmetric bounds on θ13. Assuming CP conservation, we find θ13 = -0.07 (+0.18, -0.11) (90% C.L.).

  7. Atmospheric, long baseline, and reactor neutrino data constraints on θ13.

    PubMed

    Roa, J E; Latimer, D C; Ernst, D J

    2009-08-07

An atmospheric neutrino oscillation tool that uses full three-neutrino oscillation probabilities and a full three-neutrino treatment of the Mikheyev-Smirnov-Wolfenstein effect, together with an analysis of the K2K, MINOS, and CHOOZ data, is used to examine the bounds on θ13. The recent, more finely binned, Super-K atmospheric data are employed. For L/E_ν ≳ 10^4 km/GeV, we previously found significant linear-in-θ13 terms. This analysis finds θ13 bounded from above by the atmospheric data while bounded from below by CHOOZ. The origin of this result arises from data in the previously mentioned very long baseline region; here, matter effects conspire with terms linear in θ13 to produce asymmetric bounds on θ13. Assuming CP conservation, we find θ13 = -0.07 (+0.18, -0.11) (90% C.L.).

  8. Performance analysis of a cascaded coding scheme with interleaved outer code

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1986-01-01

A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C_1 is an (n_1, m_1 l) binary linear block code which is designed for simultaneous error correction and detection. The outer code C_2 is a linear block code with symbols from the Galois field GF(2^l) which is designed for correcting both symbol errors and erasures, and is interleaved with a degree m_1. A procedure for computing the probability of a correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even for very high bit-error rates, say 10^-1 to 10^-2.
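
    A minimal sketch of one ingredient of such an analysis: a lower bound on the probability that a t-error-correcting inner block code decodes a block correctly on a memoryless channel (the code parameters and bit-error rate are illustrative, not those of the report's scheme):

    ```python
    # P(correct decoding) >= P(at most t channel errors in an n-bit block).
    from scipy.stats import binom

    def prob_correct_decoding_lb(n, t, eps):
        return binom.cdf(t, n, eps)

    print(prob_correct_decoding_lb(n=63, t=3, eps=1e-2))
    ```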

  9. WHAMII - An enumeration and insertion procedure with binomial bounds for the stochastic time-constrained traveling salesman problem

    NASA Technical Reports Server (NTRS)

    Dahl, Roy W.; Keating, Karen; Salamone, Daryl J.; Levy, Laurence; Nag, Barindra; Sanborn, Joan A.

    1987-01-01

    This paper presents an algorithm (WHAMII) designed to solve the Artificial Intelligence Design Challenge at the 1987 AIAA Guidance, Navigation and Control Conference. The problem under consideration is a stochastic generalization of the traveling salesman problem in which travel costs can incur a penalty with a given probability. The variability in travel costs leads to a probability constraint with respect to violating the budget allocation. Given the small size of the problem (eleven cities), an approach is considered that combines partial tour enumeration with a heuristic city insertion procedure. For computational efficiency during both the enumeration and insertion procedures, precalculated binomial probabilities are used to determine an upper bound on the actual probability of violating the budget constraint for each tour. The actual probability is calculated for the final best tour, and additional insertions are attempted until the actual probability exceeds the bound.
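
    A minimal sketch of the binomial bounding idea described above: if each of the m legs of a tour incurs its penalty independently with probability at most p_max, and the remaining budget slack can absorb at most k penalties, the violation probability is bounded by a binomial tail (the parameters are illustrative):

    ```python
    # Upper bound on P(budget violated) via a binomial tail under the worst-case
    # per-leg penalty probability p_max.
    from scipy.stats import binom

    def budget_violation_bound(m, k, p_max):
        return binom.sf(k, m, p_max)   # P[#penalties > k]

    print(budget_violation_bound(m=11, k=2, p_max=0.1))
    ```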

  10. Water pollution risk associated with natural gas extraction from the Marcellus Shale.

    PubMed

    Rozell, Daniel J; Reaven, Sheldon J

    2012-08-01

    In recent years, shale gas formations have become economically viable through the use of horizontal drilling and hydraulic fracturing. These techniques carry potential environmental risk due to their high water use and substantial risk for water pollution. Using probability bounds analysis, we assessed the likelihood of water contamination from natural gas extraction in the Marcellus Shale. Probability bounds analysis is well suited when data are sparse and parameters highly uncertain. The study model identified five pathways of water contamination: transportation spills, well casing leaks, leaks through fractured rock, drilling site discharge, and wastewater disposal. Probability boxes were generated for each pathway. The potential contamination risk and epistemic uncertainty associated with hydraulic fracturing wastewater disposal was several orders of magnitude larger than the other pathways. Even in a best-case scenario, it was very likely that an individual well would release at least 200 m³ of contaminated fluids. Because the total number of wells in the Marcellus Shale region could range into the tens of thousands, this substantial potential risk suggested that additional steps be taken to reduce the potential for contaminated fluid leaks. To reduce the considerable epistemic uncertainty, more data should be collected on the ability of industrial and municipal wastewater treatment facilities to remove contaminants from used hydraulic fracturing fluid. © 2012 Society for Risk Analysis.

  11. Local approximation of a metapopulation's equilibrium.

    PubMed

    Barbour, A D; McVinish, R; Pollett, P K

    2018-04-18

    We consider the approximation of the equilibrium of a metapopulation model, in which a finite number of patches are randomly distributed over a bounded subset [Formula: see text] of Euclidean space. The approximation is good when a large number of patches contribute to the colonization pressure on any given unoccupied patch, and when the quality of the patches varies little over the length scale determined by the colonization radius. If this is the case, the equilibrium probability of a patch at z being occupied is shown to be close to [Formula: see text], the equilibrium occupation probability in Levins's model, at any point [Formula: see text] not too close to the boundary, if the local colonization pressure and extinction rates appropriate to z are assumed. The approximation is justified by giving explicit upper and lower bounds for the occupation probabilities, expressed in terms of the model parameters. Since the patches are distributed randomly, the occupation probabilities are also random, and we complement our bounds with explicit bounds on the probability that they are satisfied at all patches simultaneously.
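
    For reference, a minimal sketch of the Levins-model equilibrium occupancy that the paper's occupation probabilities are shown to approximate locally (the colonization and extinction rates are illustrative):

    ```python
    # Levins metapopulation model dp/dt = c p (1 - p) - e p has the nontrivial
    # equilibrium p* = 1 - e/c whenever c > e.
    def levins_equilibrium(c, e):
        return max(0.0, 1.0 - e / c)

    print(levins_equilibrium(c=0.5, e=0.2))   # -> 0.6
    ```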

  12. Classical Physics and the Bounds of Quantum Correlations.

    PubMed

    Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán

    2016-06-24

    A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.

  13. Impact of Bounded Noise and Rewiring on the Formation and Instability of Spiral Waves in a Small-World Network of Hodgkin-Huxley Neurons.

    PubMed

    Yao, Yuangen; Deng, Haiyou; Ma, Chengzhang; Yi, Ming; Ma, Jun

    2017-01-01

Spiral waves are observed in chemical, physical and biological systems, and the emergence of spiral waves in cardiac tissue is linked to diseases such as heart ventricular fibrillation and epilepsy; thus they have importance in theoretical studies and potential medical applications. Noise is inevitable in neuronal systems and can change the electrical activities of neurons in different ways. Many previous theoretical studies of the impacts of noise on spiral waves focus on unbounded Gaussian noise and even colored noise. In this paper, the impacts of bounded noise and rewiring of the network on the formation and instability of spiral waves are discussed in a small-world (SW) network of Hodgkin-Huxley (HH) neurons through numerical simulations, and a statistical analysis is carried out. Firstly, we present an SW network of HH neurons subjected to bounded noise. Then, it is numerically demonstrated that bounded noise with proper intensity σ, amplitude A, or frequency f can facilitate the formation of spiral waves when the rewiring probability p is below certain thresholds. In other words, bounded noise-induced resonant behavior can occur in the SW network of neurons. In addition, the rewiring probability p always impairs spiral waves, while spiral waves are confirmed to be robust for small p; thus a shortcut-induced phase transition of the spiral wave is induced as p increases. Furthermore, statistical factors of synchronization are calculated to discern the phase transition of the spatial pattern, and it is confirmed that a larger factor of synchronization is approached with increasing rewiring probability p, and the stability of the spiral wave is destroyed.

  14. Noise, gain, and capture probability of p-type InAs-GaAs quantum-dot and quantum dot-in-well infrared photodetectors

    NASA Astrophysics Data System (ADS)

    Wolde, Seyoum; Lao, Yan-Feng; Unil Perera, A. G.; Zhang, Y. H.; Wang, T. M.; Kim, J. O.; Schuler-Sandy, Ted; Tian, Zhao-Bing; Krishna, S.

    2017-06-01

We report experimental results showing how the noise in a quantum-dot infrared photodetector (QDIP) and a quantum dot-in-a-well (DWELL) detector varies with the electric field and temperature. At lower temperatures (below ~100 K), the noise current of both types of detectors is dominated by generation-recombination (G-R) noise, which is consistent with a mechanism of fluctuations driven by the electric field and thermal noise. The noise gain, capture probability, and carrier lifetime for bound-to-continuum or quasi-bound transitions in DWELL and QDIP structures are discussed. The capture probability of the DWELL is found to be more than two times higher than that of the corresponding QDIP. Based on the analysis, structural parameters such as the number of active layers, the surface density of QDs, the carrier capture or relaxation rate, the type of material, and the electric field are identified as optimization parameters for improving the gain of these devices.

  15. Improved key-rate bounds for practical decoy-state quantum-key-distribution systems

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Zhao, Qi; Razavi, Mohsen; Ma, Xiongfeng

    2017-01-01

    The decoy-state scheme is the most widely implemented quantum-key-distribution protocol in practice. In order to account for the finite-size key effects on the achievable secret key generation rate, a rigorous statistical fluctuation analysis is required. Originally, a heuristic Gaussian-approximation technique was used for this purpose, which, despite its analytical convenience, was not sufficiently rigorous. The fluctuation analysis has recently been made rigorous by using the Chernoff bound. There is a considerable gap, however, between the key-rate bounds obtained from these techniques and that obtained from the Gaussian assumption. Here we develop a tighter bound for the decoy-state method, which yields a smaller failure probability. This improvement results in a higher key rate and increases the maximum distance over which secure key exchange is possible. By optimizing the system parameters, our simulation results show that our method almost closes the gap between the two previously proposed techniques and achieves a performance similar to that of conventional Gaussian approximations.

  16. The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates

    NASA Technical Reports Server (NTRS)

    Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush

    2008-01-01

    We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
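
    A minimal sketch of a weight-enumerator bound of the simplest kind, the standard union bound for ML decoding of a binary code with BPSK on an AWGN channel; this is not the Poltyrev-type or BA-ML bound of the paper, and the (7,4) Hamming weight distribution is just an illustrative input:

    ```python
    # Union bound P_e <= sum_d A_d * Q(sqrt(2 d R Eb/N0)) from a weight enumerator.
    import math

    def q_func(x):
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def union_bound(weight_enum, n, k, ebno_db):
        R, ebno = k / n, 10.0 ** (ebno_db / 10.0)
        return sum(A * q_func(math.sqrt(2.0 * d * R * ebno))
                   for d, A in weight_enum.items())

    hamming74 = {3: 7, 4: 7, 7: 1}   # nonzero weights of the (7,4) Hamming code
    print(union_bound(hamming74, n=7, k=4, ebno_db=6.0))
    ```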

  17. Parabolic transformation cloaks for unbounded and bounded cloaking of matter waves

    NASA Astrophysics Data System (ADS)

    Chang, Yu-Hsuan; Lin, De-Hone

    2014-01-01

    Parabolic quantum cloaks with unbounded and bounded invisible regions are presented with the method of transformation design. The mass parameters of particles for perfect cloaking are shown to be constant along the parabolic coordinate axes of the cloaking shells. The invisibility performance of the cloaks is inspected from the viewpoints of waves and probability currents. The latter shows the controllable characteristic of a probability current by a quantum cloak. It also provides us with a simpler and more efficient way of exhibiting the performance of a quantum cloak without the solutions of the transformed wave equation. Through quantitative analysis of streamline structures in the cloaking shell, one defines the efficiency of the presented quantum cloak in the situation of oblique incidence. The cloaking models presented here give us more choices for testing and applying quantum cloaking.

  18. MaxEnt, second variation, and generalized statistics

    NASA Astrophysics Data System (ADS)

    Plastino, A.; Rocca, M. C.

    2015-10-01

There are two kinds of Tsallis probability distributions: heavy-tailed ones and compact-support ones. We show here, by appeal to tools of functional analysis, that for Hamiltonians bounded from below, analysis of the second variation of the entropic functional guarantees that the heavy-tailed q-distribution constitutes a maximum of Tsallis' entropy. In the compact-support instance, on the other hand, a case-by-case analysis is necessary in order to tackle the issue.

  19. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  20. Tsirelson's bound and supersymmetric entangled states

    PubMed Central

    Borsten, L.; Brádler, K.; Duff, M. J.

    2014-01-01

A superqubit, belonging to a (2|1)-dimensional super-Hilbert space, constitutes the minimal supersymmetric extension of the conventional qubit. In order to see whether superqubits are more non-local than ordinary qubits, we construct a class of two-superqubit entangled states as a non-local resource in the CHSH game. Since super-Hilbert-space amplitudes are Grassmann numbers, the result depends on how we extract real probabilities, and we examine three choices of map: (1) DeWitt, (2) Trigonometric, and (3) Modified Rogers. In cases (1) and (2), the winning probability reaches the Tsirelson bound p_win = cos²(π/8) ≃ 0.8536 of standard quantum mechanics. Case (3) crosses Tsirelson's bound with p_win ≃ 0.9265. Although all states used in the game involve probabilities lying between 0 and 1, case (3) permits other changes of basis inducing negative transition probabilities. PMID:25294964
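
    A quick arithmetic check of the two winning probabilities quoted above, the Tsirelson bound of standard quantum mechanics versus the classical CHSH-game limit of 3/4:

    ```python
    # Tsirelson bound cos^2(pi/8) = (2 + sqrt(2))/4 vs. the classical limit 0.75.
    import math

    p_quantum = math.cos(math.pi / 8) ** 2
    print(p_quantum, (2 + math.sqrt(2)) / 4, 0.75)   # ~0.8536, ~0.8536, 0.75
    ```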

  1. Validation of the SURE Program, phase 1

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1987-01-01

    Presented are the results of the first phase in the validation of the SURE (Semi-Markov Unreliability Range Evaluator) program. The SURE program gives lower and upper bounds on the death-state probabilities of a semi-Markov model. With these bounds, the reliability of a semi-Markov model of a fault-tolerant computer system can be analyzed. For the first phase in the validation, fifteen semi-Markov models were solved analytically for the exact death-state probabilities and these solutions compared to the corresponding bounds given by SURE. In every case, the SURE bounds covered the exact solution. The bounds, however, had a tendency to separate in cases where the recovery rate was slow or the fault arrival rate was fast.

  2. The condition of a finite Markov chain and perturbation bounds for the limiting probabilities

    NASA Technical Reports Server (NTRS)

    Meyer, C. D., Jr.

    1979-01-01

Let T denote the transition matrix of an ergodic chain C, and let A = I - T. Let E be a perturbation matrix such that T̃ = T - E is also the transition matrix of an ergodic chain C̃, and let w and w̃ denote the limiting probability (row) vectors for C and C̃. Inequalities bounding the relative error ‖w - w̃‖/‖w‖ by a very simple function of E and A are exhibited, and the bound is the best one possible. This bound can be significant in the numerical determination of the limiting probabilities for an ergodic chain. In addition to presenting a sharp bound for ‖w - w̃‖/‖w‖, an explicit expression for w̃ is derived in which w̃ is given as a function of E, A, w and some other related terms.

  3. Uncertainty analysis in fault tree models with dependent basic events.

    PubMed

    Pedroni, Nicola; Zio, Enrico

    2013-06-01

In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
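
    A minimal sketch of the Fréchet bounds referred to above, applied to two-input AND and OR gates when the dependence between the basic events is unknown (the basic-event probabilities are illustrative):

    ```python
    # Fréchet bounds on gate probabilities under unknown dependence.
    def and_gate_bounds(pa, pb):
        return max(0.0, pa + pb - 1.0), min(pa, pb)

    def or_gate_bounds(pa, pb):
        return max(pa, pb), min(1.0, pa + pb)

    print(and_gate_bounds(0.02, 0.05), or_gate_bounds(0.02, 0.05))
    ```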

  4. Extended Importance Sampling for Reliability Analysis under Evidence Theory

    NASA Astrophysics Data System (ADS)

    Yuan, X. K.; Chen, B.; Zhang, B. Q.

    2018-05-01

In early engineering practice, the lack of data and information makes uncertainty difficult to deal with. However, evidence theory has been proposed to handle uncertainty with limited information as an alternative to traditional probability theory. In this contribution, a simulation-based approach, called ‘Extended importance sampling’, is proposed based on evidence theory to handle problems with epistemic uncertainty. The proposed approach stems from traditional importance sampling for reliability analysis under probability theory, and is developed to handle problems with epistemic uncertainty. It first introduces a nominal instrumental probability density function (PDF) for every epistemic uncertainty variable, so that an ‘equivalent’ reliability problem under probability theory is obtained. Then samples of these variables are generated by importance sampling. Based on these samples, the plausibility and belief (upper and lower bounds of probability) can be estimated. It is more efficient than direct Monte Carlo simulation. Numerical and engineering examples are given to illustrate the efficiency and feasibility of the proposed approach.
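
    A minimal sketch of the purely probabilistic importance-sampling step the approach builds on (the limit state, nominal density and instrumental density are illustrative assumptions; the evidence-theory extension of the paper is not reproduced here):

    ```python
    # Importance sampling estimate of a small failure probability P[g(X) < 0]:
    # sample from an instrumental density near the failure region and reweight.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    g = lambda x: 4.0 - x                  # failure when x > 4
    f = norm(loc=0.0, scale=1.0)           # nominal density of X
    h = norm(loc=4.0, scale=1.0)           # instrumental density centred at failure

    x = h.rvs(size=100_000, random_state=rng)
    weights = f.pdf(x) / h.pdf(x)
    p_fail = np.mean((g(x) < 0) * weights)
    print(p_fail, 1.0 - f.cdf(4.0))        # IS estimate vs. exact ~3.17e-5
    ```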

  5. Individual heterogeneity and identifiability in capture-recapture models

    USGS Publications Warehouse

    Link, W.A.

    2004-01-01

Individual heterogeneity in detection probabilities is a far more serious problem for capture-recapture modeling than has previously been recognized. In this note, I illustrate that population size is not an identifiable parameter under the general closed-population mark-recapture model M_h. The problem of identifiability is obvious if the population includes individuals with p_i = 0, but persists even when it is assumed that individual detection probabilities are bounded away from zero. Identifiability may be attained within parametric families of distributions for p_i, but not among parametric families of distributions. Consequently, in the presence of individual heterogeneity in detection probability, capture-recapture analysis is strongly model dependent.

  6. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  7. Quantum Dynamical Applications of Salem's Theorem

    NASA Astrophysics Data System (ADS)

    Damanik, David; Del Rio, Rafael

    2009-07-01

    We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.

  8. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds to the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.

  9. Central limit theorem for recurrent random walks on a strip with bounded potential

    NASA Astrophysics Data System (ADS)

    Dolgopyat, D.; Goldsheid, I.

    2018-07-01

We prove that the recurrent random walk (RW) in random environment (RE) on a strip with bounded potential satisfies the central limit theorem (CLT). The key ingredients of the proof are the analysis of the invariant measure equation and the construction of a linearly growing martingale for walks in bounded potential. Our main result implies a complete classification of recurrent i.i.d. RWRE on the strip: namely, the walk either exhibits the Sinai behaviour, in the sense that the suitably normalized walk converges to a (random) limit (the Sinai law), or it satisfies the CLT. Another application of our main result is the CLT for quasiperiodic environments with Diophantine frequencies in the recurrent case. We complement this result by proving that in the transient case the CLT holds for all uniquely ergodic environments. We also investigate the algebraic structure of the environments satisfying the CLT. In particular, we show that there exists a collection of proper algebraic subvarieties in the space of transition probabilities such that: (i) if the RE is stationary and ergodic and the transition probabilities are concentrated on one of the subvarieties from our collection, then the CLT holds; (ii) if the environment is i.i.d., then the above condition is also necessary for the CLT. All these results are valid for one-dimensional RWRE with bounded jumps as a particular case of the strip model.

  10. Bounding the first exit from the basin: Independence times and finite-time basin stability

    NASA Astrophysics Data System (ADS)

    Schultz, Paul; Hellmann, Frank; Webster, Kevin N.; Kurths, Jürgen

    2018-04-01

    We study the stability of deterministic systems, given sequences of large, jump-like perturbations. Our main result is the derivation of a lower bound for the probability of the system to remain in the basin, given that perturbations are rare enough. This bound is efficient to evaluate numerically. To quantify rare enough, we define the notion of the independence time of such a system. This is the time after which a perturbed state has probably returned close to the attractor, meaning that subsequent perturbations can be considered separately. The effect of jump-like perturbations that occur at least the independence time apart is thus well described by a fixed probability to exit the basin at each jump, allowing us to obtain the bound. To determine the independence time, we introduce the concept of finite-time basin stability, which corresponds to the probability that a perturbed trajectory returns to an attractor within a given time. The independence time can then be determined as the time scale at which the finite-time basin stability reaches its asymptotic value. Besides that, finite-time basin stability is a novel probabilistic stability measure on its own, with potential broad applications in complex systems.
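
    A minimal numeric illustration of the structure of the bound described above: if each sufficiently separated jump leaves the basin with probability at most p_exit, the probability of remaining in the basin after n such jumps is at least (1 - p_exit)^n (the values are illustrative):

    ```python
    # Lower bound on the probability of surviving n rare, independent-by-assumption
    # jump perturbations, given a per-jump exit probability bound p_exit.
    def survival_lower_bound(p_exit, n_jumps):
        return (1.0 - p_exit) ** n_jumps

    print(survival_lower_bound(p_exit=0.01, n_jumps=50))   # ~0.605
    ```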

  11. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: II. Probabilistic Guarantees on Constraint Satisfaction

    PubMed Central

    Li, Zukui; Floudas, Christodoulos A.

    2012-01-01

    Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
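
    For orientation, a small sketch of the kind of a priori probability bounds involved (generic textbook-style bounds for independent, bounded, symmetric perturbations; the function names and numbers are illustrative and not from the paper), relating the size of the uncertainty set to a guaranteed probability of constraint satisfaction.

        import numpy as np

        def ellipsoidal_bound(omega):
            """Ben-Tal/Nemirovski-type bound: P(violation) <= exp(-omega^2 / 2)."""
            return np.exp(-omega**2 / 2.0)

        def budget_bound(gamma, n):
            """Bertsimas/Sim-type bound with n uncertain coefficients:
               P(violation) <= exp(-gamma^2 / (2 n))."""
            return np.exp(-gamma**2 / (2.0 * n))

        def set_size_for_target(target, n):
            """Smallest set sizes guaranteeing P(violation) <= target under each bound."""
            omega = np.sqrt(-2.0 * np.log(target))
            gamma = np.sqrt(-2.0 * n * np.log(target))
            return omega, gamma

        omega, gamma = set_size_for_target(1e-3, n=10)
        print(f"ellipsoidal radius Omega ~ {omega:.2f}, budget Gamma ~ {gamma:.2f} (n = 10)")

    Tighter bounds of the kind derived in the paper shrink these required set sizes further, which is exactly what yields less conservative robust solutions.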

  12. The Tightness of the Kesten-Stigum Reconstruction Bound of Symmetric Model with Multiple Mutations

    NASA Astrophysics Data System (ADS)

    Liu, Wenjian; Jammalamadaka, Sreenivasa Rao; Ning, Ning

    2018-02-01

    It is well known that reconstruction problems, as an interdisciplinary subject, have been studied in numerous contexts including statistical physics, information theory and computational biology, to name a few. We consider a 2q-state symmetric model, with two categories of q states each, and three transition probabilities: the probability of remaining in the same state, the probability of changing states within the same category, and the probability of changing categories. We construct a nonlinear second-order dynamical system based on this model and show that the Kesten-Stigum reconstruction bound is not tight when q ≥ 4.

  13. A comparison of error bounds for a nonlinear tracking system with detection probability Pd < 1.

    PubMed

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-12-14

    Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, and tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter than both the IRF PCRLB and the ENUM PCRLB over time, by introducing the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds.
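
    A schematic sketch of the IRF idea only (a linear-Gaussian single-target example with invented numbers, using the detection probability itself as a crude stand-in for the information reduction factor; the article treats nonlinear filtering and derives the factor rigorously): the measurement contribution to the Fisher information recursion is scaled by a factor q <= 1 when Pd < 1.

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics (illustrative)
        Q = 0.01 * np.eye(2)                     # process noise covariance
        H = np.array([[1.0, 0.0]])               # position-only measurement
        R = np.array([[0.25]])                   # measurement noise covariance
        J0 = np.eye(2)                           # initial Fisher information

        def pcrlb_trace(pd, steps=30, q_of_pd=lambda pd: pd):
            """Trace of the error bound after `steps` updates; q_of_pd is a
            crude stand-in for the information reduction factor."""
            Jk = J0.copy()
            for _ in range(steps):
                Jk = np.linalg.inv(Q + F @ np.linalg.inv(Jk) @ F.T) \
                     + q_of_pd(pd) * H.T @ np.linalg.inv(R) @ H
            return np.trace(np.linalg.inv(Jk))

        for pd in (1.0, 0.9, 0.7, 0.5):
            print(f"Pd = {pd}: bound on MSE trace ~ {pcrlb_trace(pd):.3f}")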

  14. A Comparison of Error Bounds for a Nonlinear Tracking System with Detection Probability Pd < 1

    PubMed Central

    Tong, Huisi; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    2012-01-01

    Error bounds for nonlinear filtering are very important for performance evaluation and sensor management. This paper presents a comparative study of three error bounds for tracking filtering when the detection probability is less than unity. One of these bounds is the random finite set (RFS) bound, which is deduced within the framework of finite set statistics. The others, the information reduction factor (IRF) posterior Cramer-Rao lower bound (PCRLB) and the enumeration method (ENUM) PCRLB, are introduced within the framework of finite vector statistics. In this paper, we deduce two propositions and prove that the RFS bound is equal to the ENUM PCRLB, and tighter than the IRF PCRLB, when the target exists from the beginning to the end. When the disappearance of existing targets and the appearance of new targets are considered, the RFS bound becomes tighter than both the IRF PCRLB and the ENUM PCRLB over time, by introducing the uncertainty of target existence. The theory is illustrated by two nonlinear tracking applications: ballistic object tracking and bearings-only tracking. The simulation studies confirm the theory and reveal the relationship among the three bounds. PMID:23242274

  15. Technical notes and correspondence: Stochastic robustness of linear time-invariant control systems

    NASA Technical Reports Server (NTRS)

    Stengel, Robert F.; Ray, Laura R.

    1991-01-01

    A simple numerical procedure for estimating the stochastic robustness of a linear time-invariant system is described. Monte Carlo evaluations of the system's eigenvalues allow the probability of instability and the related stochastic root locus to be estimated. This analysis approach treats not only Gaussian parameter uncertainties but also non-Gaussian cases, including uncertain-but-bounded variation. Confidence intervals for the scalar probability of instability address computational issues inherent in Monte Carlo simulation. Trivial extensions of the procedure admit consideration of alternate discriminants; thus, the probabilities that stipulated degrees of instability will be exceeded or that closed-loop roots will leave desirable regions can also be estimated. Results are particularly amenable to graphical presentation.
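
    A minimal sketch of the basic procedure (a hypothetical second-order plant with invented parameter distributions, not the paper's example): sample the uncertain parameters, check the closed-loop eigenvalues, and report the estimated probability of instability with a confidence interval.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 20_000
        unstable = 0
        for _ in range(N):
            # uncertain-but-bounded stiffness and damping, Gaussian feedback-gain error
            k = rng.uniform(0.5, 2.0)
            c = rng.uniform(0.05, 0.4)
            dg = rng.normal(0.0, 0.6)
            # closed-loop state matrix of a unit mass with uncertain restoring term
            A = np.array([[0.0, 1.0],
                          [-(k + dg), -c]])
            if np.max(np.linalg.eigvals(A).real) >= 0.0:
                unstable += 1

        p_hat = unstable / N
        # normal-approximation confidence interval for the scalar probability of instability
        half = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / N)
        print(f"P(instability) ~ {p_hat:.4f} +/- {half:.4f} (95% CI)")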

  16. Probable systemic lupus erythematosus with cell-bound complement activation products (CB-CAPS).

    PubMed

    Lamichhane, D; Weinstein, A

    2016-08-01

    Complement activation is a key feature of systemic lupus erythematosus (SLE). Detection of cell-bound complement activation products (CB-CAPS) occurs more frequently than serum hypocomplementemia in definite lupus. We describe a patient with normocomplementemic probable SLE who did not fulfill ACR classification criteria for lupus, but the diagnosis was supported by the presence of CB-CAPS. © The Author(s) 2016.

  17. Bound state and localization of excitation in many-body open systems

    NASA Astrophysics Data System (ADS)

    Cui, H. T.; Shen, H. Z.; Hou, S. C.; Yi, X. X.

    2018-04-01

    We study the exact bound state and time evolution for single excitations in one-dimensional XXZ spin chains within a non-Markovian reservoir. For the bound state, a common feature is the localization of the single excitation, which means that the spontaneous emission of the excitation into the reservoir is prohibited. Exceptionally, a pseudo-bound state can be found, for which the single excitation has a finite probability of emission into the reservoir. In addition, a critical energy scale for bound states is identified, below which only one bound state exists, and it is the pseudo-bound state. The effect of quasirandom disorder in the spin chain is also discussed; such disorder induces the single excitation to localize at certain spin sites. Furthermore, to display the effect of the bound state and disorder on the preservation of quantum information, the time evolution of single excitations in spin chains is studied exactly. An interesting observation is that the excitation can stay at its initial location with high probability only when the bound state and disorder coexist. In contrast, when either one of them is absent, the information of the initial state can be erased completely or become mixed. This finding shows that the combination of bound state and disorder can provide an ideal mechanism for quantum memory.

  18. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as by uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint, which means that finding the optimal guidance trajectory is, in general, intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
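
    A toy illustration of decomposing a joint chance constraint into individual ones (a generic risk-allocation sketch under linear constraints and Gaussian uncertainty; the constraint data and names are invented, and this is not the article's own bounding method): each individual constraint is tightened by a margin that consumes part of the total risk bound, and Boole's inequality then guarantees the joint constraint.

        import numpy as np
        from scipy.stats import norm

        def tightened_rhs(a, b, Sigma, delta):
            """Individual chance constraint P(a^T x <= b) >= 1 - delta holds iff
               a^T mu <= b - z_{1-delta} * sqrt(a^T Sigma a)."""
            margin = norm.ppf(1.0 - delta) * np.sqrt(a @ Sigma @ a)
            return b - margin

        A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])   # illustrative constraints A x <= b
        b = np.array([5.0, 5.0, -1.0])
        Sigma = 0.04 * np.eye(2)                               # state covariance
        Delta = 1e-3                                           # total risk bound
        deltas = np.full(len(b), Delta / len(b))               # uniform risk allocation

        b_tight = np.array([tightened_rhs(A[i], b[i], Sigma, deltas[i]) for i in range(len(b))])
        print("tightened right-hand sides:", b_tight)
        # Any mean trajectory satisfying A mu <= b_tight satisfies the joint chance
        # constraint with probability at least 1 - Delta (sufficient, not necessary).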

  19. Delay Analysis and Optimization of Bandwidth Request under Unicast Polling in IEEE 802.16e over Gilbert-Elliot Error Channel

    NASA Astrophysics Data System (ADS)

    Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae

    In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast polling. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliot error channel. We derive the probability distribution of the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to the finite number of retransmission attempts. Using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying a QoS requirement on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. The analytical results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.

  20. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is a 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservatism of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
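
    A minimal sketch of the underlying binomial check (illustrative only, not the DOEPOD procedure itself): the exact one-sided binomial lower confidence bound on POD from hit/miss data at a single flaw size, and a test of the 90/95 criterion.

        from scipy.stats import beta

        def pod_lower_bound(hits, trials, confidence=0.95):
            """Exact (Clopper-Pearson-style) one-sided lower confidence bound on POD."""
            if hits == 0:
                return 0.0
            return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        def demonstrates_90_95(hits, trials):
            return pod_lower_bound(hits, trials, 0.95) >= 0.90

        # 29 hits out of 29 trials is the classical minimum for 90/95 with zero misses
        for hits, trials in [(28, 28), (29, 29), (45, 46)]:
            print(hits, trials, round(pod_lower_bound(hits, trials), 3),
                  demonstrates_90_95(hits, trials))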

  1. Abort Trigger False Positive and False Negative Analysis Methodology for Threshold-Based Abort Detection

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.; Cruz, Jose A.; Johnson, Stephen B.; Lo, Yunnhon

    2015-01-01

    This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition, and SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data do not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS), which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as an extension of control theory, and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that results from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open-literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
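
    An illustrative sketch of the FP/FN idea for a single threshold-based trigger (invented numbers and a plain Gaussian noise assumption, not SLS data or the paper's methodology): FP is the probability of exceeding the threshold under nominal operation, FN the probability of staying below it when the abort condition is present.

        from scipy.stats import norm

        threshold = 103.0                   # abort declared when the sensed value exceeds this
        mu_nom, sigma_nom = 100.0, 1.0      # nominal operation (hypothetical)
        mu_fail, sigma_fail = 108.0, 2.0    # true abort condition (hypothetical)

        p_fp = norm.sf(threshold, mu_nom, sigma_nom)     # abort declared, but no failure
        p_fn = norm.cdf(threshold, mu_fail, sigma_fail)  # failure present, but no abort
        print(f"P(FP) = {p_fp:.2e}, P(FN) = {p_fn:.2e}")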

  2. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between-study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random-effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95% confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5%. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95% confidence intervals for the between-study variance. We also show some further results for a real example that illustrate how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95% confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4% split', where the greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.

  3. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets whose probability is readily computable, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum-of-squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.
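
    A small illustration of the Bernstein-expansion idea used for hyper-rectangular subsets (a generic univariate sketch with the parameter normalized to [0, 1] and invented coefficients, not the paper's multivariate implementation): the Bernstein coefficients of a polynomial on a box enclose its range, so checking their signs gives a sufficient condition that a polynomial requirement g(p) < 0 holds everywhere on the box.

        import numpy as np
        from math import comb

        def bernstein_coefficients(c):
            """Bernstein coefficients on [0, 1] of the polynomial sum_j c[j] * p**j."""
            n = len(c) - 1
            return np.array([sum(comb(i, j) / comb(n, j) * c[j] for j in range(i + 1))
                             for i in range(n + 1)])

        # illustrative requirement function g(p) = -1 + 2p - 3p^2 + p^3
        c = np.array([-1.0, 2.0, -3.0, 1.0])
        b = bernstein_coefficients(c)
        print("Bernstein coefficients:", b)
        print("range enclosure:", b.min(), "<= g(p) <=", b.max())
        if b.max() < 0:
            print("requirement g(p) < 0 is guaranteed on the whole box")
        elif b.min() >= 0:
            print("requirement is violated on the whole box")
        else:
            print("inconclusive: subdivide the box and repeat")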

  4. Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets whose probability is readily computable, they enable the calculation of arbitrarily tight upper and lower bounds on the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation of changes in such a model with a practically insignificant amount of computational effort.

  5. Finding Bounded Rational Equilibria. Part 1; Iterative Focusing

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  6. Temporal analysis of nonresonant two-photon coherent control involving bound and dissociative molecular states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Jing; Chen, Shaohao; Jaron-Becker, Agnieszka

    We theoretically study the control of two-photon excitation to bound and dissociative states in a molecule induced by trains of laser pulses, which are equivalent to certain sets of spectral phase modulated pulses. To this end, we solve the time-dependent Schroedinger equation for the interaction of molecular model systems with an external intense laser field. Our numerical results for the temporal evolution of the population in the excited states show that, in the case of an excited dissociative state, control schemes, previously validated for the atomic case, fail due to the coupling of electronic and nuclear motion. In contrast, for excitation to bound states the two-photon excitation probability is controlled via the time delay and the carrier-envelope phase difference between two consecutive pulses in the train.

  7. Upper bounds on the error probabilities and asymptotic error exponents in quantum multiple state discrimination

    NASA Astrophysics Data System (ADS)

    Audenaert, Koenraad M. R.; Mosonyi, Milán

    2014-10-01

    We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).
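
    A minimal numerical sketch of the quantities involved (illustrative qubit states and parameters, not from the paper): the binary quantum Chernoff divergence C(ρ, σ) = -log min_{0≤s≤1} Tr(ρ^s σ^(1-s)), and the multi-hypothesis quantity as the minimum over all pairs.

        import numpy as np
        from scipy.linalg import fractional_matrix_power
        from scipy.optimize import minimize_scalar

        def chernoff_divergence(rho, sigma):
            def q(s):
                return np.real(np.trace(fractional_matrix_power(rho, s) @
                                        fractional_matrix_power(sigma, 1.0 - s)))
            res = minimize_scalar(q, bounds=(0.0, 1.0), method="bounded")
            return -np.log(res.fun)

        def multi_chernoff(states):
            return min(chernoff_divergence(states[j], states[k])
                       for j in range(len(states)) for k in range(j + 1, len(states)))

        # three illustrative, full-rank qubit density matrices
        def qubit(theta):
            v = np.array([np.cos(theta), np.sin(theta)])
            return 0.9 * np.outer(v, v) + 0.1 * np.eye(2) / 2

        states = [qubit(t) for t in (0.0, 0.4, 1.0)]
        print("C(sigma_1, ..., sigma_r) =", multi_chernoff(states))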

  8. Impact of jammer side information on the performance of anti-jam systems

    NASA Astrophysics Data System (ADS)

    Lim, Samuel

    1992-03-01

    The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.

  9. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments.

    PubMed

    Van Nguyen, Binh; Kim, Kiseon

    2016-09-11

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on system performance. We then focus on our main contribution, which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on the system performance; in other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte-Carlo simulations.
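
    A Monte Carlo sketch of the bounding idea (generic AF formulas with independent Rayleigh fading and illustrative parameters, whereas the paper treats correlated channels; the +1 correction in the true AF SNR is omitted for simplicity): the harmonic-mean end-to-end SNR of each hop is sandwiched between min(g1, g2)/2 and min(g1, g2), and the same ordering carries over to the outage probability under best relay selection.

        import numpy as np

        rng = np.random.default_rng(1)
        N, n_relays, mean_snr, gamma_th = 200_000, 3, 10.0, 2.0

        g1 = rng.exponential(mean_snr, size=(N, n_relays))   # source-relay SNRs
        g2 = rng.exponential(mean_snr, size=(N, n_relays))   # relay-destination SNRs
        exact = g1 * g2 / (g1 + g2)                          # harmonic-mean AF end-to-end SNR
        lower = np.minimum(g1, g2) / 2.0
        upper = np.minimum(g1, g2)

        def outage(snr):                   # best relay selection keeps the largest SNR
            return np.mean(snr.max(axis=1) < gamma_th)

        print("outage with upper-bound SNR:", outage(upper))   # lower bound on outage
        print("outage with exact AF SNR:   ", outage(exact))
        print("outage with lower-bound SNR:", outage(lower))   # upper bound on outage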

  10. Analysis of synchronous digital-modulation schemes for satellite communication

    NASA Technical Reports Server (NTRS)

    Takhar, G. S.; Gupta, S. C.

    1975-01-01

    The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.

  11. Solving the chemical master equation using sliding windows

    PubMed Central

    2010-01-01

    Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. PMID:20377904

  12. Bounds on Block Error Probability for Multilevel Concatenated Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana

    1996-01-01

    Maximum likelihood decoding of long block codes is not feasible due to its large complexity. Some classes of codes are shown to be decomposable into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate the difference in performance of different decompositions of some codes. The examples given show that a significant reduction in complexity can be achieved by increasing the number of decoding stages, with a performance degradation that varies across decompositions. A guideline is given for finding good m-level decompositions.
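
    For context, a generic union bound on block error probability for soft-decision ML decoding over an AWGN channel with BPSK (a textbook-style bound with an illustrative weight spectrum, not one of the paper's MLCC decompositions): P_B <= sum_d A_d * Q(sqrt(2 d R Eb/N0)).

        import numpy as np
        from scipy.stats import norm

        def union_bound(weight_spectrum, n, k, ebno_db):
            rate = k / n
            ebno = 10.0 ** (ebno_db / 10.0)
            return sum(A_d * norm.sf(np.sqrt(2.0 * d * rate * ebno))
                       for d, A_d in weight_spectrum.items())

        # extended Hamming (8, 4) code: A_4 = 14, A_8 = 1
        spectrum = {4: 14, 8: 1}
        for ebno_db in (2.0, 4.0, 6.0):
            print(ebno_db, "dB ->", union_bound(spectrum, 8, 4, ebno_db))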

  13. Improved statistical fluctuation analysis for measurement-device-independent quantum key distribution with four-intensity decoy-state method.

    PubMed

    Mao, Chen-Chen; Zhou, Xing-Yu; Zhu, Jian-Rong; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin

    2018-05-14

    Recently, Zhang et al. [Phys. Rev. A 95, 012333 (2017)] developed a new approach to estimating the failure probability of the decoy-state BB84 QKD system when the finite-size key effect is taken into account, which offers security comparable to the Chernoff bound while resulting in an improved key rate and transmission distance. Based on Zhang et al.'s work, we now extend this approach to measurement-device-independent quantum key distribution (MDI-QKD) and, for the first time, implement it on the four-intensity decoy-state MDI-QKD system. Moreover, by utilizing joint constraints and collective error-estimation techniques, we can markedly increase the performance of practical MDI-QKD systems compared with either three- or four-intensity decoy-state MDI-QKD using the Chernoff bound analysis, and achieve a much higher security level compared with those applying Gaussian approximation analysis.

  14. Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution

    ERIC Educational Resources Information Center

    Verkuilen, Jay; Smithson, Michael

    2012-01-01

    Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…

  15. More on the decoder error probability for Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.

    1987-01-01

    The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P_E(u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P_E(u) is derived. The P_E(u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P_E(u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P_E(u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
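
    A small sketch of the quantity Q referenced above, using the standard formula for an (n, k) t-error-correcting MDS code over GF(q) (the fraction of words that fall inside some decoding sphere); the numbers printed are illustrative computations, not values quoted from the paper.

        from math import comb

        def q_random_decode(n, k, q):
            """Q = q^{-(n-k)} * sum_{s=0}^{t} C(n, s) * (q - 1)^s, with t = (n-k)//2."""
            t = (n - k) // 2
            volume = sum(comb(n, s) * (q - 1) ** s for s in range(t + 1))
            return volume / q ** (n - k)

        print("(255, 223) RS over GF(256):", q_random_decode(255, 223, 256))
        print("(31, 15)  RS over GF(32): ", q_random_decode(31, 15, 32))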

  16. Finding Bounded Rational Equilibria. Part 2; Alternative Lagrangians and Uncountable Move Spaces

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality characterizing all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. It has recently been shown that the same information theoretic mathematical structure, known as Probability Collectives (PC) underlies both issues. This relationship between statistical physics and game theory allows techniques and insights from the one field to be applied to the other. In particular, PC provides a formal model-independent definition of the degree of rationality of a player and of bounded rationality equilibria. This pair of papers extends previous work on PC by introducing new computational approaches to effectively find bounded rationality equilibria of common-interest (team) games.

  17. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1988-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.

  18. Comparing hard and soft prior bounds in geophysical inverse problems

    NASA Technical Reports Server (NTRS)

    Backus, George E.

    1987-01-01

    In linear inversion of a finite-dimensional data vector y to estimate a finite-dimensional prediction vector z, prior information about X_E is essential if y is to supply useful limits for z. The one exception occurs when all the prediction functionals are linear combinations of the data functionals. Two forms of prior information are compared: a soft bound on X_E is a probability distribution p_x on X which describes the observer's opinion about where X_E is likely to be in X; a hard bound on X_E is an inequality Q_x(X_E, X_E) ≤ 1, where Q_x is a positive definite quadratic form on X. A hard bound Q_x can be softened to many different probability distributions p_x, but all these p_x's carry much new information about X_E which is absent from Q_x, and some information which contradicts Q_x. Both stochastic inversion (SI) and Bayesian inference (BI) estimate z from y and a soft prior bound p_x. If that probability distribution was obtained by softening a hard prior bound Q_x, rather than by objective statistical inference independent of y, then p_x contains so much unsupported new information absent from Q_x that conclusions about z obtained with SI or BI would seem to be suspect.

  19. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as mu-analysis can lead to overly conservative controller designs, because worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.

  20. A Multi-Armed Bandit Approach to Following a Markov Chain

    DTIC Science & Technology

    2017-06-01

    focus on the House to Café transition (p1,4). We develop a Multi-Armed Bandit approach for efficiently following this target, where each state takes the...and longitude (each state corresponding to a physical location and a small set of activities). The searcher would then apply our approach on this...the target’s transition probability and the true probability over time. Further, we seek to provide upper bounds (i.e., worst case bounds) on the

  1. Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph

    NASA Astrophysics Data System (ADS)

    Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon

    2013-10-01

    We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive temperature second-moment analysis we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the temperature critical value are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.

  2. Diffuse reflection from a stochastically bounded, semi-infinite medium

    NASA Technical Reports Server (NTRS)

    Lumme, K.; Peltoniemi, J. I.; Irvine, W. M.

    1990-01-01

    In order to determine the diffuse reflection from a medium bounded by a rough surface, the problem of radiative transfer in a boundary layer characterized by a statistical distribution of heights is considered. For the case that the surface is defined by a multivariate normal probability density, the propagation probability for rays traversing the boundary layer is derived and, from that probability, a corresponding radiative transfer equation. A solution of the Eddington (two stream) type is found explicitly, and examples are given. The results should be applicable to reflection from the regoliths of solar system bodies, as well as from a rough ocean surface.

  3. Fluorescent characteristics of estrogenic compounds in landfill leachate.

    PubMed

    Zhang, Hua; Chang, Cheng-Hsuan; Lü, Fan; Su, Ay; Lee, Duu-Jong; He, Pin-Jing; Shao, Li-Ming

    2009-08-01

    Estrogens in landfill leachate could contaminate receiving water sources if not properly polished before discharge. This work measured, using an estrogen receptor-alpha competitor screening assay, the estrogenic potentials of leachate samples collected at a local sanitary landfill in Shanghai, China, and of their compounds fractionated by molecular weight. The chemical structures of the constituent compounds were characterized using fluorescence excitation and emission matrices (EEM). The organic matter of molecular weight <600 Da and of 3000-14,000 Da contributed most of the estrogenic potential of the raw leachates. The former was considered to comprise typical endocrine-disrupting compounds in the dissolved state, while the latter comprised fulvic acids with high aromaticity to which estrogens were readily adsorbed (bound state). Statistical analysis of the EEM peaks revealed that the chemical structures of the noted estrogens in the dissolved state and in the bound state were not identical. Aerobic treatment effectively removed dissolved estrogens, but rarely removed the bound estrogens.

  4. An Upper Bound on High Speed Satellite Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on the high-speed satellite collision probability, Pc, have been investigated. Previous methods assume that an individual position error covariance matrix is available for each object, the two matrices being combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects is available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
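
    For background, a standard short-encounter sketch of the baseline Pc computation that such bounds build on (illustrative miss distance, covariance, and hard-body radius; this is the generic formulation, not the paper's method): integrate the 2-D Gaussian relative-position density over the hard-body circle in the encounter plane.

        import numpy as np
        from scipy.integrate import dblquad

        def collision_probability(miss, cov, hbr):
            """miss: 2-vector miss distance in the encounter plane,
               cov: 2x2 combined position covariance projected onto that plane,
               hbr: combined hard-body radius."""
            inv = np.linalg.inv(cov)
            norm_const = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

            def density(y, x):
                d = np.array([x, y]) - miss
                return norm_const * np.exp(-0.5 * d @ inv @ d)

            val, _ = dblquad(density, -hbr, hbr,
                             lambda x: -np.sqrt(hbr**2 - x**2),
                             lambda x:  np.sqrt(hbr**2 - x**2))
            return val

        miss = np.array([120.0, 80.0])                        # metres (illustrative)
        cov = np.array([[2500.0, 800.0], [800.0, 1600.0]])    # m^2 (illustrative)
        print("Pc ~", collision_probability(miss, cov, hbr=20.0))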

  5. Estimation of the lower and upper bounds on the probability of failure using subset simulation and random set theory

    NASA Astrophysics Data System (ADS)

    Alvarez, Diego A.; Uribe, Felipe; Hurtado, Jorge E.

    2018-02-01

    Random set theory is a general framework which comprises uncertainty in the form of probability boxes, possibility distributions, cumulative distribution functions, Dempster-Shafer structures or intervals; in addition, the dependence between the input variables can be expressed using copulas. In this paper, the lower and upper bounds on the probability of failure are calculated by means of random set theory. In order to accelerate the calculation, a well-known and efficient probability-based reliability method known as subset simulation is employed. This method is especially useful for finding small failure probabilities in both low- and high-dimensional spaces, disjoint failure domains and nonlinear limit state functions. The proposed methodology represents a drastic reduction of the computational labor implied by plain Monte Carlo simulation for problems defined with a mixture of representations for the input variables, while delivering similar results. Numerical examples illustrate the efficiency of the proposed approach.

  6. Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model

    NASA Technical Reports Server (NTRS)

    Vallejo, Jonathon; Hejduk, Matt; Stamey, James

    2015-01-01

    We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log10 transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed-model framework to borrow information from past and current events. This provides a natural way to model the data and a basis for answering questions of interest, such as the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.

  7. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures, since strengths and stresses may be random variables. This report examines and compares methods used to evaluate the reliability of composite laminae. The two types of methods that are evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which are evaluated are: first-order, second-moment FPI methods; second-order, second-moment FPI methods; simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
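
    A minimal sketch of the two method families being compared (a generic linear strength-minus-stress limit state with invented Gaussian parameters, not a composite failure criterion): a first-order, second-moment reliability index versus simple Monte Carlo. For this linear Gaussian case the two estimates agree closely, since the first-order result is exact.

        import numpy as np
        from scipy.stats import norm

        mu_R, sd_R = 500.0, 50.0     # strength distribution (illustrative units)
        mu_S, sd_S = 350.0, 40.0     # stress distribution

        # First-order, second-moment: reliability index beta and Pf = Phi(-beta)
        beta = (mu_R - mu_S) / np.sqrt(sd_R**2 + sd_S**2)
        print("FOSM:        Pf =", norm.cdf(-beta))

        # Simple Monte Carlo for comparison
        rng = np.random.default_rng(0)
        N = 1_000_000
        R = rng.normal(mu_R, sd_R, N)
        S = rng.normal(mu_S, sd_S, N)
        print("Monte Carlo: Pf =", np.mean(R < S))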

  8. Sparse Learning with Stochastic Composite Optimization.

    PubMed

    Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei

    2017-06-01

    In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most recent SCO algorithms have already reached the optimal expected convergence rate O(1/(λT)), but they often fail to deliver sparse solutions at the end, either due to the limited sparsity regularization during stochastic optimization (SO) or due to the limitation of online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(sqrt(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme by adding a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to demonstrate its effectiveness. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in sparse learning while improving the high-probability bound to approximately O(log(log(T)/δ)/(λT)).

  9. Universality of the Volume Bound in Slow-Roll Eternal Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubovsky, Sergei; Senatore, Leonardo; Villadoro, Giovanni

    2012-03-28

    It has recently been shown that in single-field slow-roll inflation the total volume cannot grow by a factor larger than e^(S_dS/2) without becoming infinite. The bound is saturated exactly at the phase transition to eternal inflation, where the probability to produce infinite volume becomes non-zero. We show that the bound holds sharply also in any space-time dimensions, when arbitrary higher-dimensional operators are included and in the multi-field inflationary case. The relation with the entropy of de Sitter and the universality of the bound strengthen the case for a deeper holographic interpretation. As a spin-off we provide the formalism to compute the probability distribution of the volume after inflation for generic multi-field models, which might help to address questions about the population of vacua of the landscape during slow-roll inflation.

  10. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
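
    A toy sketch of the idea (a generic illustration with standard normal parameters and an invented failure set, not the paper's geometric constructions): if the failure set F lies inside a bounding set B whose probability is analytic, then P(F) = P(B) * P(F|B), and the conditional factor can be estimated by sampling only inside B, which needs far fewer points than plain Monte Carlo when P(B) is small.

        import numpy as np
        from scipy.stats import norm

        # Failure set: F = {x1 > a and x1 + x2 > c}, with x1, x2 iid standard normal.
        # Bounding set: B = {x1 > a}; F is contained in B and P(B) = Phi(-a) is analytic.
        a, c = 3.5, 5.5
        rng = np.random.default_rng(0)
        N = 100_000

        p_B = norm.sf(a)                               # analytic probability of the bounding set
        u = rng.uniform(norm.cdf(a), 1.0, N)           # sample x1 conditionally on x1 > a
        x1 = norm.ppf(u)
        x2 = rng.standard_normal(N)
        p_F_given_B = np.mean(x1 + x2 > c)
        print("conditional estimate: P(F) ~", p_B * p_F_given_B)

        # plain Monte Carlo with the same budget rarely sees any failures at all
        y1, y2 = rng.standard_normal((2, N))
        print("plain Monte Carlo:    P(F) ~", np.mean((y1 > a) & (y1 + y2 > c)))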

  11. Potential accuracy of translation estimation between radar and optical images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2015-10-01

    This paper investigates the potential accuracy achievable for optical-to-radar image registration by an area-based approach. The analysis is carried out mainly on the basis of the Cramér-Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors, called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for the co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that the difficulty of optical-to-radar image registration stems more from speckle noise influence than from the dissimilarity of the considered kinds of images. At finer scales (and higher speckle noise levels), the probability of finding control fragments (CFs) suitable for registration is low (1% or less), but the overall number of such fragments is high thanks to image size. Conversely, at the coarse scale, where the speckle noise level is reduced, the probability of finding CFs suitable for registration can be as high as 40%, but the overall number of such CFs is lower. Thus, the study confirms and supports an area-based multiresolution approach for optical-to-radar registration in which coarse scales are used for a fast registration "lock" and finer scales for reaching higher registration accuracy. The CRLBfBm is found to be inaccurate for the main scale due to intensive speckle noise influence. For the other scales, the validity of the CRLBfBm bound is confirmed by calculating the statistical efficiency of an area-based registration method based on the normalized correlation coefficient (NCC) measure, which takes values of about 25%.

  12. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2016-06-01

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, under the assumption that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if that probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.

  13. Accelerating rejection-based simulation of biochemical reactions with bounded acceptance probability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento

    Stochastic simulation of large biochemical reaction networks is often computationally expensive due to disparate reaction rates and high variability in the populations of chemical species. One approach to accelerating the simulation is to allow multiple reaction firings before performing an update, under the assumption that reaction propensities change by a negligible amount during a time interval. Species with small populations involved in the firings of fast reactions significantly affect both the performance and the accuracy of this simulation approach, and the problem is even worse when these small-population species are involved in a large number of reactions. We present in this paper a new approximate algorithm to cope with this problem. It is based on bounding the acceptance probability of a reaction selected by the exact rejection-based simulation algorithm, which employs propensity bounds of reactions and a rejection-based mechanism to select the next reaction firings. The reaction is guaranteed to be selected to fire with an acceptance rate greater than a predefined probability, and the selection becomes exact if that probability is set to one. Our new algorithm reduces the computational cost of selecting the next reaction firing and of updating the reaction propensities.

  14. An engineering rock classification to evaluate seismic rock-fall susceptibility and its application to the Wasatch Front

    USGS Publications Warehouse

    Harp, E.L.; Noble, M.A.

    1993-01-01

    Investigations of earthquakes world wide show that rock falls are the most abundant type of landslide that is triggered by earthquakes. An engineering classification originally used in tunnel design, known as the rock mass quality designation (Q), was modified for use in rating the susceptibility of rock slopes to seismically-induced failure. Analysis of rock-fall concentrations and Q-values for the 1980 earthquake sequence near Mammoth Lakes, California, defines a well-constrained upper bound that shows the number of rock falls per site decreases rapidly with increasing Q. Because of the similarities of lithology and slope between the Eastern Sierra Nevada Range near Mammoth Lakes and the Wasatch Front near Salt Lake City, Utah, the probabilities derived from analysis of the Mammoth Lakes region were used to predict rock-fall probabilities for rock slopes near Salt Lake City in response to a magnitude 6.0 earthquake. These predicted probabilities were then used to generalize zones of rock-fall susceptibility. -from Authors

  15. Performance Analysis of Amplify-and-Forward Systems with Single Relay Selection in Correlated Environments

    PubMed Central

    Nguyen, Binh Van; Kim, Kiseon

    2016-01-01

    In this paper, we consider amplify-and-forward (AnF) cooperative systems under correlated fading environments. We first present a brief overview of existing works on the effect of channel correlations on system performance. We then focus on our main contribution, which is analyzing the outage probability of a multi-AnF-relay system with the best relay selection (BRS) scheme under the condition that the two channels of each relay, the source-relay and relay-destination channels, are correlated. Using lower and upper bounds on the end-to-end received signal-to-noise ratio (SNR) at the destination, we derive corresponding upper and lower bounds on the system outage probability. We prove that the system can achieve a diversity order (DO) equal to the number of relays. In addition, and importantly, we show that the considered correlation form has a constructive effect on system performance; in other words, the larger the correlation coefficient, the better the system performance. Our analytic results are corroborated by extensive Monte-Carlo simulations. PMID:27626426

  16. Statistical plant set estimation using Schroeder-phased multisinusoidal input design

    NASA Technical Reports Server (NTRS)

    Bayard, D. S.

    1992-01-01

    A frequency domain method is developed for plant set estimation. The estimation of a plant 'set' rather than a point estimate is required to support many methods of modern robust control design. The approach here is based on using a Schroeder-phased multisinusoid input design which has the special property of placing input energy only at the discrete frequency points used in the computation. A detailed analysis of the statistical properties of the frequency domain estimator is given, leading to exact expressions for the probability distribution of the estimation error, and many important properties. It is shown that, for any nominal parametric plant estimate, one can use these results to construct an overbound on the additive uncertainty to any prescribed statistical confidence. The 'soft' bound thus obtained can be used to replace 'hard' bounds presently used in many robust control analysis and synthesis methods.
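
    A Schroeder-phased multisine of the kind used for such input designs can be generated as in the short sketch below; the quadratic phase schedule phi_k = -pi k(k-1)/N for equal-amplitude tones is a standard choice for keeping the crest factor low, and the tone count, record length, and bin spacing are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def schroeder_multisine(n_tones, n_samples, f0_bin=1):
    """Equal-amplitude multisine with Schroeder phases phi_k = -pi*k*(k-1)/N.
    Energy is placed only at the discrete bins k*f0_bin of an n_samples-point DFT,
    which is the property exploited by the frequency-domain estimator."""
    t = np.arange(n_samples) / n_samples
    x = np.zeros(n_samples)
    for k in range(1, n_tones + 1):
        phase = -np.pi * k * (k - 1) / n_tones
        x += np.cos(2.0 * np.pi * f0_bin * k * t + phase)
    return x / n_tones

u = schroeder_multisine(n_tones=31, n_samples=4096)
crest = np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2))   # much lower than with zero phases
print(crest)
```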

  17. Percolation

    NASA Astrophysics Data System (ADS)

    Dávila, Alán; Escudero, Christian; López, Jorge, Dr.

    2004-10-01

    Several methods have been developed to study phase transitions in nuclear fragmentation. The one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. In systems such as atomic nuclei or molecules, energy is put into the system, and the system's particles move away from each other until their links are broken, while some particles remain linked. The fragment distribution is found to be a power law, so we are witnessing a critical phenomenon. In our model the particles are represented as occupied sites in a cubical array. Each particle has a bond to each of its 6 neighbors; a bond can be active if the two particles are linked or inactive if they are not. When two or more particles are linked, a fragment is formed. The probability for a specific link to be broken cannot be calculated, so the probability for a bond to be active is used as the parameter when fitting the data. For a given probability p, several arrays are generated and the fragments are counted. The fragment distribution is then fitted to a power law. The probability that generates the best fit is the critical probability that indicates a phase transition, where the best fit is found by seeking the fragment distribution that gives the minimal chi-squared when compared to a power law. As additional evidence of criticality, the entropy and normalized variance of the mass are also calculated for each probability.
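
    The Monte Carlo procedure described above can be sketched in a few lines: occupy every site of a cubic lattice, activate each nearest-neighbor bond with probability p, and collect the fragment (cluster) sizes. The lattice size, the union-find bookkeeping, and the quoted threshold p ≈ 0.2488 for bond percolation on the simple cubic lattice are illustrative assumptions, not details taken from the work above.

```python
import numpy as np

def fragment_sizes(L=10, p=0.2488, seed=0):
    """Bond percolation on an L x L x L cubic lattice: every site is a particle,
    each nearest-neighbour bond is active with probability p.  Returns the sizes
    of the connected fragments (clusters), found with a small union-find."""
    rng = np.random.default_rng(seed)
    n = L ** 3
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    idx = lambda x, y, z: (x * L + y) * L + z
    for x in range(L):
        for y in range(L):
            for z in range(L):
                if x + 1 < L and rng.random() < p:
                    union(idx(x, y, z), idx(x + 1, y, z))
                if y + 1 < L and rng.random() < p:
                    union(idx(x, y, z), idx(x, y + 1, z))
                if z + 1 < L and rng.random() < p:
                    union(idx(x, y, z), idx(x, y, z + 1))

    roots = np.array([find(i) for i in range(n)])
    _, sizes = np.unique(roots, return_counts=True)
    return sizes

sizes = fragment_sizes()
counts = np.bincount(sizes)   # fragment-size distribution, expected ~ power law near p_c
print(counts)
```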

  18. Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.

    ERIC Educational Resources Information Center

    Tversky, Amos; Kahneman, Daniel

    1983-01-01

    Judgments under uncertainty are often mediated by intuitive heuristics that are not bound by the conjunction rule of probability. Representativeness and availability heuristics can make a conjunction appear more probable than one of its constituents. Alternative interpretations of this conjunction fallacy are discussed and attempts to combat it…

  19. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  20. Entropy-Based Bounds On Redundancies Of Huffman Codes

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.

    1992-01-01

    Report presents extension of theory of redundancy of binary prefix code of Huffman type which includes derivation of variety of bounds expressed in terms of entropy of source and size of alphabet. Recent developments yielded bounds on redundancy of Huffman code in terms of probabilities of various components in source alphabet. In practice, redundancies of optimal prefix codes often closer to 0 than to 1.
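
    A small worked example makes the redundancy bound concrete: build a binary Huffman code for a toy source, compute the average codeword length L, and compare it with the source entropy H; the redundancy L - H always lies in [0, 1) bit and, as the report notes, is often much closer to 0. The heap-based construction below is a generic sketch, not code from the report.

```python
import heapq, math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given source probabilities.
    Each heap entry is (probability, tiebreak, [(symbol, length), ...]); merging two
    entries adds one bit to every symbol beneath them."""
    heap = [(p, i, [(i, 0)]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    tick = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, tick, [(sym, l + 1) for sym, l in s1 + s2]))
        tick += 1
    lengths = dict(heap[0][2])
    return [lengths[i] for i in range(len(probs))]

probs = [0.5, 0.25, 0.15, 0.1]
avg_len = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))   # expected codeword length L
entropy = -sum(p * math.log2(p) for p in probs)                       # source entropy H
print(avg_len, entropy, avg_len - entropy)   # redundancy L - H, here roughly 0.007 bit
```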

  1. Sculpture, general view looking to the seated lions, probably from ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Sculpture, general view looking to the seated lions, probably from the American Bungalow - National Park Seminary, Bounded by Capitol Beltway (I-495), Linden Lane, Woodstove Avenue, & Smith Drive, Silver Spring, Montgomery County, MD

  2. Error control techniques for satellite and space communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.

    1990-01-01

    An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d_free of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d_free growth rate, respectively, can be obtained from the cutoff rate R_0 of the transmission channel by a simple geometric construction, making R_0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.

  3. Performance analysis of a concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.; Kasami, T.

    1983-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.

  4. Decay Rates and Probability Estimatesfor Massive Dirac Particlesin the Kerr-Newman Black Hole Geometry

    NASA Astrophysics Data System (ADS)

    Finster, F.; Kamran, N.; Smoller, J.; Yau, S.-T.

    The Cauchy problem is considered for the massive Dirac equation in the non-extreme Kerr-Newman geometry, for smooth initial data with compact support outside the event horizon and bounded angular momentum. We prove that the Dirac wave function decays in L^∞_loc at least at the rate t^(-5/6). For generic initial data, this rate of decay is sharp. We derive a formula for the probability p that the Dirac particle escapes to infinity. For various conditions on the initial data, we show that p = 0, 1 or 0 < p < 1. The proofs are based on a refined analysis of the Dirac propagator constructed in [4].

  5. Time Dependence of Collision Probabilities During Satellite Conjunctions

    NASA Technical Reports Server (NTRS)

    Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.

    2017-01-01

    The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (P_c) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D P_c” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D P_c” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, R_c. For close-proximity satellites, such as those orbiting in formations or clusters, R_c variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, R_c analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D P_c” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., P_c less than 10^-10), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., P_c greater than or equal to 10^-5). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-P_c screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
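
    For contrast with the 3D integral, the traditional 2D P_c can be sketched as the Gaussian probability mass falling inside the combined hard-body radius in the encounter plane. The Monte Carlo estimate below is a hedged illustration of that definition; the miss vector, covariance, and hard-body radius are made-up numbers, and CARA's operational software is not being reproduced here.

```python
import numpy as np

def pc_2d_monte_carlo(miss_vector, cov, hbr, n=1_000_000, seed=1):
    """2-D collision probability sketch: relative position error in the encounter
    plane is a bivariate Gaussian centred on the nominal miss vector; P_c is the
    probability mass inside the combined hard-body radius (HBR).
    miss_vector : length-2 nominal miss distance in the encounter plane [m]
    cov         : 2x2 combined position covariance in that plane [m^2]
    hbr         : combined hard-body radius [m]"""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(miss_vector, cov, size=n)
    return np.mean(np.hypot(samples[:, 0], samples[:, 1]) < hbr)

# Illustrative (made-up) numbers: 500 m miss distance, 200 m / 300 m sigmas, 20 m HBR.
print(pc_2d_monte_carlo([400.0, 300.0], [[200.0**2, 0.0], [0.0, 300.0**2]], 20.0))
```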

  6. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and the given measurement outcomes. Within the remaining space of joint probabilities, by using linear programming, we get a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  7. Bounds on stochastic chemical kinetic systems at steady state

    NASA Astrophysics Data System (ADS)

    Dowdy, Garrett R.; Barton, Paul I.

    2018-02-01

    The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.

  8. What Information Theory Says about Bounded Rational Best Response

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    Probability Collectives (PC) provides the information-theoretic extension of conventional full-rationality game theory to bounded rational games. Here an explicit solution to the equations giving the bounded rationality equilibrium of a game is presented. Then PC is used to investigate games in which the players use bounded rational best-response strategies. Next it is shown that in the continuum-time limit, bounded rational best response games result in a variant of the replicator dynamics of evolutionary game theory. It is then shown that for team (shared-payoff) games, this variant of replicator dynamics is identical to Newton-Raphson iterative optimization of the shared utility function.

  9. Better bounds on optimal measurement and entanglement recovery, with applications to uncertainty and monogamy relations

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2017-10-01

    We extend the recent bounds of Sason and Verdú relating Rényi entropy and Bayesian hypothesis testing (arXiv:1701.01974) to the quantum domain and show that they have a number of different applications. First, we obtain a sharper bound relating the optimal probability of correctly distinguishing elements of an ensemble of states to that of the pretty good measurement, and an analogous bound for optimal and pretty good entanglement recovery. Second, we obtain bounds relating optimal guessing and entanglement recovery to the fidelity of the state with a product state, which then leads to tight tripartite uncertainty and monogamy relations.

  10. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  11. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  12. Probabilistic Analysis of Algorithms for NP-Complete Problems

    DTIC Science & Technology

    1989-09-29

    [Scanned report documentation page; the abstract is largely illegible. Recoverable fragments describe an algorithm A that "efficiently solves P in bounded probability under D" and that "finds a solution to an instance of P chosen randomly according to D in time bounded by a ...".]

  13. Linking of uniform random polygons in confined spaces

    NASA Astrophysics Data System (ADS)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Karadayi, E.; Saito, M.

    2007-03-01

    In this paper, we study the topological entanglement of uniform random polygons in a confined space. We derive the formula for the mean squared linking number of such polygons. For a fixed simple closed curve in the confined space, we rigorously show that the linking probability between this curve and a uniform random polygon of n vertices is at least 1 - O(1/√n). Our numerical study also indicates that the linking probability between two uniform random polygons (in a confined space), of m and n vertices respectively, is bounded below by 1 - O(1/√(mn)). In particular, the linking probability between two uniform random polygons, both of n vertices, is bounded below by 1 - O(1/n).

  14. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
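
    The truncated exponential distribution at the heart of the discussion is easy to write down and sample, and a two-line experiment illustrates the weak point the GTED addresses: a mixture of two TEDs that differ only in their upper bound magnitude is not itself a TED. The sketch below assumes a b-value of 1 (β = ln 10) and illustrative magnitude bounds; it does not reproduce the GTED construction or the estimation procedure of the paper.

```python
import numpy as np

def ted_cdf(m, beta, m_min, m_max):
    """CDF of the truncated exponential distribution (TED) used with the
    Gutenberg-Richter relation: exponential in magnitude, truncated at m_max."""
    z = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.clip((1.0 - np.exp(-beta * (m - m_min))) / z, 0.0, 1.0)

def ted_sample(n, beta, m_min, m_max, seed=0):
    """Sample magnitudes by inverting the TED CDF."""
    u = np.random.default_rng(seed).random(n)
    z = 1.0 - np.exp(-beta * (m_max - m_min))
    return m_min - np.log(1.0 - u * z) / beta

# Mixing two TEDs that differ only in m_max (illustrative parameters):
beta, m_min = np.log(10.0), 4.0          # b-value of 1.0
m = np.linspace(4.0, 8.0, 401)
mixed = 0.5 * ted_cdf(m, beta, m_min, 7.0) + 0.5 * ted_cdf(m, beta, m_min, 8.0)
# 'mixed' is not the CDF of a TED for any single m_max -- the weak point that the
# generalized model (GTED) addresses by mixing over a distribution of cutoff points.
print(ted_sample(5, beta, m_min, 8.0))
```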

  15. Effect of nuclear-reaction mechanisms on the population of excited nuclear states and isomeric ratios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skobelev, N. K., E-mail: skobelev@jinr.ru

    2016-07-15

    Experimental data on the cross sections for channels of fusion and transfer reactions induced by beams of radioactive halo nuclei and of clustered and stable loosely bound nuclei were analyzed, and the results of this analysis were summarized. The interplay of the excitation of single-particle states in reaction-product nuclei and direct reaction channels was established for transfer reactions. Respective experiments were performed in stable (⁶Li) and radioactive (⁶He) beams of the DRIBs accelerator complex at the Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research, and in deuteron and ³He beams of the U-120M cyclotron at the Nuclear Physics Institute, Academy of Sciences of the Czech Republic (Řež and Prague, Czech Republic). Data on subbarrier and near-barrier fusion reactions involving clustered and loosely bound light nuclei (⁶Li and ³He) can be described quite reliably within simple evaporation models with allowance for different reaction Q-values and coupled channels. In reactions involving halo nuclei, their structure manifests itself most strongly in the region of energies below the Coulomb barrier. Neutron transfer occurs with a high probability in the interactions of all loosely bound nuclei with light and heavy stable nuclei at positive Q-values. The cross sections for such reactions and the respective isomeric ratios differ drastically for the nucleon stripping and nucleon pickup mechanisms. This is due to the difference in the population probabilities for excited single-particle states.

  16. Structural reliability analysis under evidence theory using the active learning kriging model

    NASA Astrophysics Data System (ADS)

    Yang, Xufeng; Liu, Yongshou; Ma, Panke

    2017-11-01

    Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.

  17. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k_0; (b) the second phase computes the probability of satisfying the k_0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
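
    The second phase, estimating a k_0-bounded until/reachability probability by sampling, is easy to illustrate on a discrete-time Markov chain. The sketch below simply counts the fraction of simulated paths that reach a goal state within k_0 steps; the toy chain, the state encoding, and the fixed sample size (rather than a sequential stopping rule) are illustrative assumptions.

```python
import random

def estimate_bounded_until(transition, start, goal, k0, n_samples=100_000, seed=0):
    """Statistical estimate of a k0-bounded reachability probability on a DTMC:
    the fraction of sampled paths that hit 'goal' within k0 steps.
    'transition' maps a state to a list of (next_state, probability) pairs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        state = start
        for _ in range(k0):
            if state == goal:
                hits += 1
                break
            r, acc = rng.random(), 0.0
            for nxt, p in transition[state]:
                acc += p
                if r < acc:
                    state = nxt
                    break
        else:
            hits += (state == goal)
    return hits / n_samples

# Toy 3-state chain (illustrative); state 2 is the goal.
chain = {0: [(0, 0.5), (1, 0.4), (2, 0.1)], 1: [(1, 0.9), (2, 0.1)], 2: [(2, 1.0)]}
print(estimate_bounded_until(chain, start=0, goal=2, k0=50))
```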

  18. Performance analysis of multiple PRF technique for ambiguity resolution

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

    For short wavelength spaceborne synthetic aperture radar (SAR), ambiguity in Doppler centroid estimation occurs when the azimuth squint angle uncertainty is larger than the azimuth antenna beamwidth. Multiple pulse recurrence frequency (PRF) hopping is a technique developed to resolve the ambiguity by operating the radar with different PRFs in the pre-imaging sequence. Performance analysis results of the multiple PRF technique are presented, given the constraints of the attitude bound, the drift rate uncertainty, and the arbitrary numerical values of the PRFs. The algorithm performance is derived in terms of the probability of correct ambiguity resolution. Examples, using the Shuttle Imaging Radar-C (SIR-C) and X-SAR parameters, demonstrate that the probability of correct ambiguity resolution obtained by the multiple PRF technique is greater than 95 percent and 80 percent for the SIR-C and X-SAR applications, respectively. The success rate is significantly higher than that achieved by the range cross correlation technique.

  19. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.

  20. Adversarial risk analysis with incomplete information: a level-k approach.

    PubMed

    Rothschild, Casey; McLay, Laura; Guikema, Seth

    2012-07-01

    This article proposes, develops, and illustrates the application of level-k game theory to adversarial risk analysis. Level-k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend-attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack. © 2011 Society for Risk Analysis.

  1. Mutual Information Rate and Bounds for It

    PubMed Central

    Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso

    2012-01-01

    The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809

  2. A new variable interval schedule with constant hazard rate and finite time range.

    PubMed

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2 T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2 T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.

  3. Birnessite-induced binding of phenolic monomers to soil humic substances and nature of the bound residues.

    PubMed

    Li, Chengliang; Zhang, Bin; Ertunc, Tanya; Schaeffer, Andreas; Ji, Rong

    2012-08-21

    The nature of the abiotic birnessite (δ-MnO₂)-catalyzed transformation products of phenolic compounds in the presence of soil organic matter is crucial for understanding the fate and stability of ubiquitous phenolic carbon in the environment. ¹⁴C-radioactive and ¹³C-stable-isotope tracers were used to study the mineralization and transformation by δ-MnO₂ of two typical humus and lignin phenolic monomers, catechol and p-coumaric acid, in the presence and absence of agricultural and forest soil humic acids (HAs) at pH 5-8. Mineralization decreased with increasing solution pH, and catechol was markedly more mineralized than p-coumaric acid. In the presence of HAs, the mineralization was strongly reduced, and considerable amounts of phenolic residues were bound to the HAs, independent of the solution pH. The HA-bound residues were homogeneously distributed within the humic molecules, and most still contained the unchanged aromatic ring as revealed by ¹³C NMR analysis, indicating that the residues were probably bound via ester or ether bonds. The study provides important information on δ-MnO₂ stimulation of phenolic carbon binding to humic substances and the molecular distribution and chemical structure of the bound residues, which is essential for understanding the environmental fates of both naturally occurring and anthropogenic phenolic compounds.

  4. Probabilistic metrology or how some measurement outcomes render ultra-precise estimates

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.

    2016-10-01

    We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequence of outcomes). For probe systems subject to local dephasing, we quantify such precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can set back the detrimental effects of noise.

  5. Some New Twists to Problems Involving the Gaussian Probability Integral

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1997-01-01

    Using an alternate form of the Gaussian probability integral discovered a number of years ago, it is shown that the solution to a number of previously considered communication problems can be simplified and in some cases made more accurate (i.e., exact rather than bounded).

  6. Quantum state discrimination bounds for finite sample size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Audenaert, Koenraad M. R.; Mosonyi, Milan; Mathematical Institute, Budapest University of Technology and Economics, Egry Jozsef u 1., Budapest 1111

    2012-12-15

    In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, ρ or σ. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking ρ for σ, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available, then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between ρ and σ (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
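
    As a concrete illustration of the asymptotic quantity mentioned above, the quantum Chernoff bound for two known states can be evaluated numerically as Q = min over s in [0,1] of Tr(ρ^s σ^(1-s)); the symmetric error probability with n copies then decays essentially like Q^n. The short sketch below evaluates Q for two made-up qubit density matrices by a grid search over s; it is a generic illustration, not the finite-size bounds derived in the paper.

```python
import numpy as np

def chernoff_quantity(rho, sigma, grid=501):
    """Quantum Chernoff quantity Q = min_{0<=s<=1} Tr(rho^s sigma^(1-s)) for two
    density matrices; matrix powers are taken via eigendecomposition (Hermitian,
    full-rank inputs assumed for this sketch)."""
    def mpow(a, p):
        w, v = np.linalg.eigh(a)
        w = np.clip(w, 0.0, None)
        return (v * w ** p) @ v.conj().T
    s_vals = np.linspace(0.0, 1.0, grid)
    return min(np.trace(mpow(rho, s) @ mpow(sigma, 1.0 - s)).real for s in s_vals)

# Illustrative qubit states (made-up numbers).
rho = np.array([[0.9, 0.0], [0.0, 0.1]])
sigma = np.array([[0.6, 0.2], [0.2, 0.4]])
print(chernoff_quantity(rho, sigma))
```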

  7. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    ERIC Educational Resources Information Center

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  8. Survival analysis of the high energy channel of BATSE

    NASA Astrophysics Data System (ADS)

    Balázs, L. G.; Bagoly, Z.; Horváth, I.; Mészáros, A.

    2004-06-01

    We used Kaplan-Meier (KM) survival analysis to study the true distribution of high energy (F4) fluences on BATSE. The measured values were divided into two classes: (A) if F4 exceeded 3σ of the noise level, we accepted the measured value as a 'true event'; (B) if F4 did not exceed it, we treated 3σ as an upper bound and identified those data as 'censored'. KM analyses were performed separately for short (t90 < 2 s) and long (t90 > 2 s) bursts. Comparison of the calculated probability distribution functions of the two groups indicated about an order of magnitude difference in the > 300 keV part of the energies released.
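
    A minimal product-limit estimator for this kind of data can be sketched as below. Upper limits are left-censored, so the usual trick (assumed here, as in standard astronomical survival analysis) is to negate the fluences so that the limits become right-censored and the textbook Kaplan-Meier formula applies; ties and confidence bands are ignored for brevity.

```python
import numpy as np

def kaplan_meier(values, detected):
    """Product-limit (Kaplan-Meier) estimator sketch.
    values: measured fluences or 3-sigma upper bounds; detected: True for 'true
    events', False for censored points.  The standard right-censoring formula is
    applied to the negated values so that upper limits are handled correctly."""
    x = -np.asarray(values, dtype=float)          # flip: upper limits -> right-censored
    order = np.argsort(x)
    x, d = x[order], np.asarray(detected)[order]
    n = len(x)
    surv, s = [], 1.0
    for i in range(n):
        if d[i]:
            s *= 1.0 - 1.0 / (n - i)              # the at-risk set shrinks by one each step
        surv.append(s)
    return -x, np.array(surv)                     # back on the original fluence axis

# Toy usage with made-up fluences; False entries are 3-sigma upper bounds.
fl, det = [5.0, 2.0, 1.5, 0.8, 0.6], [True, True, False, True, False]
print(kaplan_meier(fl, det))
```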

  9. The force distribution probability function for simple fluids by density functional theory.

    PubMed

    Rickayzen, G; Heyes, D M

    2013-02-28

    Classical density functional theory (DFT) is used to derive a formula for the probability density distribution function, P(F), and probability distribution function, W(F), for simple fluids, where F is the net force on a particle. The final formula is P(F) ∝ exp(-AF²), where A depends on the fluid density, the temperature, and the Fourier transform of the pair potential. The form of the DFT theory used is only applicable to bounded potential fluids. When combined with the hypernetted chain closure of the Ornstein-Zernike equation, the DFT theory for W(F) agrees with molecular dynamics computer simulations for the Gaussian and bounded soft sphere at high density. The Gaussian form for P(F) is still accurate at lower densities (but not too low density) for the two potentials, but with a smaller value for the constant, A, than that predicted by the DFT theory.

  10. Upper bounds on sequential decoding performance parameters

    NASA Technical Reports Server (NTRS)

    Jelinek, F.

    1974-01-01

    This paper presents the best obtainable random coding and expurgated upper bounds on the probabilities of undetectable error, of t-order failure (advance to depth t into an incorrect subset), and of likelihood rise in the incorrect subset, applicable to sequential decoding when the metric bias G is arbitrary. Upper bounds on the Pareto exponent are also presented. The G-values optimizing each of the parameters of interest are determined, and are shown to lie in intervals that in general have nonzero widths. The G-optimal expurgated bound on undetectable error is shown to agree with that for maximum likelihood decoding of convolutional codes, and that on failure agrees with the block code expurgated bound. Included are curves evaluating the bounds for interesting choices of G and SNR for a binary-input quantized-output Gaussian additive noise channel.

  11. Sudden Relaminarization and Lifetimes in Forced Isotropic Turbulence.

    PubMed

    Linkmann, Moritz F; Morozov, Alexander

    2015-09-25

    We demonstrate an unexpected connection between isotropic turbulence and wall-bounded shear flows. We perform direct numerical simulations of isotropic turbulence forced at large scales at moderate Reynolds numbers and observe sudden transitions from a chaotic dynamics to a spatially simple flow, analogous to the laminar state in wall bounded shear flows. We find that the survival probabilities of turbulence are exponential and the typical lifetimes increase superexponentially with the Reynolds number. Our results suggest that both isotropic turbulence and wall-bounded shear flows qualitatively share the same phase-space dynamics.

  12. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
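
    The decision rule itself is compact enough to sketch. The Wald thresholds log((1-β)/α) and log(β/(1-α)) are standard; the 2PL item response model used for the per-item log-likelihood-ratio contribution, and the offset δ around the classification bound, are illustrative assumptions rather than details taken from the article.

```python
import math

def sprt_decision(log_lik_ratio, alpha=0.05, beta=0.05):
    """Sequential probability ratio test decision (minimal sketch).
    log_lik_ratio accumulates log[ L(responses | theta = cut + delta)
                                 / L(responses | theta = cut - delta) ]."""
    upper = math.log((1.0 - beta) / alpha)    # classify above the cut point
    lower = math.log(beta / (1.0 - alpha))    # classify below the cut point
    if log_lik_ratio >= upper:
        return "above cut"
    if log_lik_ratio <= lower:
        return "below cut"
    return "continue testing"

def item_log_lr(response, a, b, cut, delta=0.3):
    """Per-item contribution under a 2PL IRT model (illustrative assumption):
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))."""
    def p(theta):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    p1, p0 = p(cut + delta), p(cut - delta)
    return math.log(p1 / p0) if response else math.log((1.0 - p1) / (1.0 - p0))

# Toy usage: three correct responses to items of varying difficulty.
llr = sum(item_log_lr(True, a=1.2, b=b, cut=0.0) for b in (-0.5, 0.0, 0.4))
print(sprt_decision(llr))
```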

  13. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.

  14. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip .
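
    The statistic itself, and the permutation approach whose tail inaccuracy motivates the bounds, can be sketched as follows. The rank product of a molecule is the geometric mean of its within-replicate ranks; the permutation p-value is the fraction of null rank products at least as small. The data shapes, permutation count, and pseudo-count correction are illustrative assumptions, and the authors' bounds and gamma approximation are not reimplemented here.

```python
import numpy as np

def rank_product(data):
    """Rank product statistic.  data: (n_molecules, n_replicates) array of e.g.
    fold changes; rank 1 = most up-regulated within each replicate column."""
    ranks = np.argsort(np.argsort(-data, axis=0), axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))          # geometric mean across replicates

def permutation_pvalues(data, n_perm=200, seed=0):
    """Permutation null: independently permute each replicate column, recompute the
    rank product, and use all molecules from every permutation as null draws.  The
    resulting p-values are only resolved down to ~1/(n_perm * n_molecules), which is
    the tail-accuracy limitation the bounds above address."""
    rng = np.random.default_rng(seed)
    observed = rank_product(data)
    n, k = data.shape
    exceed = np.zeros(n)
    for _ in range(n_perm):
        perm = np.column_stack([rng.permutation(data[:, j]) for j in range(k)])
        rp = rank_product(perm)
        exceed += np.array([(rp <= o).sum() for o in observed])
    return (exceed + 1) / (n_perm * n + 1)

# Toy usage: 200 molecules, 3 replicates, the first 5 shifted upwards.
rng = np.random.default_rng(1)
toy = rng.normal(size=(200, 3))
toy[:5] += 2.0
print(permutation_pvalues(toy)[:5])
```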

  15. Seasonal variations and source apportionment of atmospheric PM2.5-bound polycyclic aromatic hydrocarbons in a mixed multi-function area of Hangzhou, China.

    PubMed

    Lu, Hao; Wang, Shengsheng; Li, Yun; Gong, Hui; Han, Jingyi; Wu, Zuliang; Yao, Shuiliang; Zhang, Xuming; Tang, Xiujuan; Jiang, Boqiong

    2017-07-01

    To reveal the seasonal variations and sources of PM2.5-bound polycyclic aromatic hydrocarbons (PAHs) during haze and non-haze episodes, daily PM2.5 samples were collected from March 2015 to February 2016 in a mixed multi-function area in Hangzhou, China. Ambient concentrations of 16 priority-controlled PAHs were determined. The sums of PM2.5-bound PAH concentrations during the haze episodes were 4.52 ± 3.32 and 13.6 ± 6.29 ng m⁻³ in warm and cold seasons, respectively, which were 1.99 and 1.49 times those during the non-haze episodes. Four PAH sources were identified using the positive matrix factorization model and conditional probability function, which were vehicular emissions (45%), heavy oil combustion (23%), coal and natural gas combustion (22%), and biomass combustion (10%). The four source concentrations of PAHs consistently showed higher levels in the cold season, compared with those in the warm season. Vehicular emissions were the most considerable sources that result in the increase of PM2.5-bound PAH levels during the haze episodes, and heavy oil combustion played an important role in the aggravation of haze pollution. The analysis of air mass back trajectories indicated that air mass transport had an influence on the PM2.5-bound PAH pollution, especially on the increased contributions from coal combustion and vehicular emissions in the cold season.

  16. Comparative analysis of solid-state bioprocessing and enzymatic treatment of finger millet for mobilization of bound phenolics.

    PubMed

    Yadav, Geetanjali; Singh, Anshu; Bhattacharya, Patrali; Yuvraj, Jude; Banerjee, Rintu

    2013-11-01

    The present work investigates the probable bioprocessing technique to mobilize the bound phenolics naturally found in finger millet cell wall for enriching it with dietary antioxidants. Comparative study was performed between the exogenous enzymatic treatment and solid-state fermentation of grain (SSF) with a food grade organism Rhizopus oryzae. SSF results indicated that at the 6th day of incubation, total phenolic content (18.64 mg gallic acid equivalent/gds) and antioxidant property (DPPH radical scavenging activity of 39.03 %, metal chelating ability of 54 % and better reducing power) of finger millet were drastically enhanced when fermented with GRAS filamentous fungi. During the enzymatic bioprocessing, most of the phenolics released during the hydrolysis, leached out into the liquid portion rather than retaining them within the millet grain, resulting in overall loss of dietary antioxidant. The present study establishes the most effective strategy to enrich the finger millet with phenolic antioxidants.

  17. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
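
    The two sampling-based comparators can be illustrated on a toy fault tree. The sketch below propagates lognormal basic-event uncertainties through TOP = (A AND B) OR C by Monte Carlo and also shows the Wilks order-statistics idea: with 59 runs, the largest observed top-event probability is a one-sided 95%/95% upper tolerance bound. The tree structure, medians, and error factors are made-up illustrative values, and the closed-form lognormal approximation of the article is not reproduced.

```python
import numpy as np

def top_event_samples(n=10_000, seed=0):
    """Monte Carlo sketch of a fault-tree top-event uncertainty distribution.
    Toy tree (illustrative): TOP = (A AND B) OR C, with lognormally distributed
    basic-event probabilities specified by a median and an error factor."""
    rng = np.random.default_rng(seed)

    def lognormal(median, error_factor, size):
        sigma = np.log(error_factor) / 1.645      # error factor defined at the 95th percentile
        return median * np.exp(sigma * rng.standard_normal(size))

    a = lognormal(1e-3, 3.0, n)
    b = lognormal(2e-3, 5.0, n)
    c = lognormal(1e-5, 10.0, n)
    return a * b + c - a * b * c                  # exact OR for independent events

samples = top_event_samples()
mc_95 = np.quantile(samples, 0.95)

# Wilks one-sided 95/95 bound: with n >= 59 samples, the largest of them exceeds the
# true 95th percentile with at least 95% confidence (since 0.95**59 < 0.05).
wilks = np.max(top_event_samples(n=59, seed=1))
print(mc_95, wilks)
```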

  18. Bound entangled states with a private key and their classical counterpart.

    PubMed

    Ozols, Maris; Smith, Graeme; Smolin, John A

    2014-03-21

    Entanglement is a fundamental resource for quantum information processing. In its pure form, it allows quantum teleportation and sharing classical secrets. Realistic quantum states are noisy and their usefulness is only partially understood. Bound-entangled states are central to this question--they have no distillable entanglement, yet sometimes still have a private classical key. We present a construction of bound-entangled states with a private key based on classical probability distributions. From this emerge states possessing a new classical analogue of bound entanglement, distinct from the long-sought bound information. We also find states of smaller dimensions and higher key rates than previously known. Our construction has implications for classical cryptography: we show that existing protocols are insufficient for extracting private key from our distributions due to their "bound-entangled" nature. We propose a simple extension of existing protocols that can extract a key from them.

  19. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. ?? 2011 Current Zoology.

  20. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Cahterine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].
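
    The 'clamping' operation discussed in these two records reduces, in its simplest form, to limiting every environmental predictor in the projection grid to the range observed in the training data before the fitted model is evaluated, optionally masking the cells that required truncation. The NumPy sketch below is a generic illustration of that idea under assumed array shapes; it is not the CART or Maxent tooling used by the authors.

```python
import numpy as np

def clamp_to_training_bounds(predictors, training):
    """Restrict each environmental predictor in the projection grid to the min/max
    seen in the model-training data, so the fitted response is never evaluated
    outside the environmental bounds it was built from.
    predictors: (n_cells, n_vars) grid values; training: (n_obs, n_vars) training data."""
    lo, hi = training.min(axis=0), training.max(axis=0)
    return np.clip(predictors, lo, hi)

def outside_bounds_mask(predictors, training):
    """Cells where any predictor had to be truncated; masking these instead of
    clamping gives the more conservative bounding discussed above."""
    lo, hi = training.min(axis=0), training.max(axis=0)
    return np.any((predictors < lo) | (predictors > hi), axis=1)

# Toy usage with made-up values (2 predictors, 4 grid cells, 3 training points).
train = np.array([[10.0, 0.2], [15.0, 0.5], [20.0, 0.9]])
grid = np.array([[5.0, 0.1], [12.0, 0.4], [25.0, 0.7], [18.0, 1.2]])
print(clamp_to_training_bounds(grid, train))
print(outside_bounds_mask(grid, train))
```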

  1. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

    We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of object.

  2. Frequency modulation television analysis: Threshold impulse analysis. [with computer program

    NASA Technical Reports Server (NTRS)

    Hodge, W. H.

    1973-01-01

    A computer program is developed to calculate the FM threshold impulse rates as a function of the carrier-to-noise ratio for a specified FM system. The system parameters and a vector of 1024 integers, representing the probability density of the modulating voltage, are required as input parameters. The computer program is utilized to calculate threshold impulse rates for twenty-four sets of measured probability data supplied by NASA and for sinusoidal and Gaussian modulating waveforms. As a result of the analysis several conclusions are drawn: (1) The use of preemphasis in an FM television system improves the threshold by reducing the impulse rate. (2) Sinusoidal modulation produces a total impulse rate which is a practical upper bound for the impulse rates of TV signals providing the same peak deviations. (3) As the moment of the FM spectrum about the center frequency of the predetection filter increases, the impulse rate tends to increase. (4) A spectrum having an expected frequency above (below) the center frequency of the predetection filter produces a higher negative (positive) than positive (negative) impulse rate.

  3. Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-07-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach of inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probabilities calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
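
    A minimal sketch of the MAP computation mentioned above, using SciPy's bounded-variable least squares on a hypothetical Green's-function matrix G and data vector d; data whitening, the prior term, and the TMVN marginals are omitted.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(1)
        n_obs, n_patches = 50, 20
        G = rng.normal(size=(n_obs, n_patches))                    # hypothetical elastic Green's functions
        slip_true = np.clip(rng.normal(0.5, 0.3, n_patches), 0.0, 1.0)
        d = G @ slip_true + rng.normal(scale=0.05, size=n_obs)     # synthetic geodetic data

        # MAP with positivity only (single truncation): slip >= 0
        map_pos = lsq_linear(G, d, bounds=(0.0, np.inf)).x

        # MAP with positivity and an upper bound (double truncation): 0 <= slip <= 1
        map_box = lsq_linear(G, d, bounds=(0.0, 1.0)).x
        print(map_pos.round(2))
        print(map_box.round(2))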

  4. Neutrino oscillations: what do we know about θ13

    NASA Astrophysics Data System (ADS)

    Ernst, David

    2008-10-01

    The phenomenon of neutrino oscillations is reviewed. A new analysis tool for the recent, more finely binned Super-K atmospheric data is outlined. This analysis incorporates the full three-neutrino oscillation probabilities, including the mixing angle θ13 to all orders, and a full three-neutrino treatment of the Earth's MSW effect. Combined with the K2K, MINOS, and CHOOZ data, the upper bound on θ13 is found to arise from the Super-K atmospheric data, while the lower bound arises from CHOOZ. This is caused by the terms linear in θ13, which are of particular importance in the region L/E>10^4 m/MeV where the sub-dominant expansion is not convergent. In addition, the enhancement of θ12 by the Earth MSW effect is found to be important for this result. The best fit value of θ13 is found to be (statistically insignificantly) negative and given by θ13 = -0.07 (+0.18, -0.11). In collaboration with Jesus Escamilla, Vanderbilt University and David Latimer, University of Kentucky.

  5. Forecasting neutrino masses from combining KATRIN and the CMB observations: Frequentist and Bayesian analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Host, Ole; Lahav, Ofer; Abdalla, Filipe B.

    We present a showcase for deriving bounds on the neutrino masses from laboratory experiments and cosmological observations. We compare the frequentist and Bayesian bounds on the effective electron neutrino mass m_β which the KATRIN neutrino mass experiment is expected to obtain, using both an analytical likelihood function and Monte Carlo simulations of KATRIN. Assuming a uniform prior in m_β, we find that a null result yields an upper bound of about 0.17 eV at 90% confidence in the Bayesian analysis, to be compared with the frequentist KATRIN reference value of 0.20 eV. This is a significant difference when judged relative to the systematic and statistical uncertainties of the experiment. On the other hand, an input m_β = 0.35 eV, which is the KATRIN 5σ detection threshold, would be detected at virtually the same level. Finally, we combine the simulated KATRIN results with cosmological data in the form of present (post-WMAP) and future (simulated Planck) observations. If an input of m_β = 0.2 eV is assumed in our simulations, KATRIN alone excludes a zero neutrino mass at 2.2σ. Adding Planck data increases the probability of detection to a median 2.7σ. The analysis highlights the importance of combining cosmological and laboratory data on an equal footing.
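
    A minimal sketch of the Bayesian bound contrasted above, assuming (purely for illustration) a Gaussian likelihood for the m_β estimator and a uniform prior on m_β ≥ 0; the numbers are hypothetical, not KATRIN's.

        import numpy as np
        from scipy import stats

        # Hypothetical null result: estimator of m_beta centred on 0 with uncertainty sigma (eV).
        m_hat, sigma = 0.0, 0.10

        # With a uniform prior on m_beta >= 0, the posterior is a normal truncated at zero.
        posterior = stats.truncnorm(a=(0.0 - m_hat) / sigma, b=np.inf, loc=m_hat, scale=sigma)
        print(f"90% Bayesian upper bound:           {posterior.ppf(0.90):.3f} eV")

        # A frequentist-style comparison: one-sided 90% confidence bound from the same Gaussian.
        print(f"90% one-sided frequentist bound:    {m_hat + stats.norm.ppf(0.90) * sigma:.3f} eV")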

  6. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used for example in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst case analysis considers that no significant bit remains. We use PVS, as such formal tools force the explicit statement of all hypotheses and prevent incorrect uses of theorems.
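
    A minimal sketch of the kind of inequality-based bound mentioned above, using Markov's inequality on an accumulated absolute rounding error; the operation count and the per-operation error model are hypothetical and much cruder than the report's Levy-based results.

        import numpy as np

        # Assume each rounding error e_i satisfies E[|e_i|] <= u/2, with u the unit
        # roundoff of IEEE double precision (a hypothetical, simplified error model).
        u = np.finfo(np.float64).eps          # 2**-52
        n_steps = 10**6                       # hypothetical number of accumulated operations
        mean_abs_error = n_steps * u / 2.0    # linearity of expectation

        # Markov's inequality: P(|accumulated error| >= t) <= E[|accumulated error|] / t
        threshold = 1e-6
        bound = mean_abs_error / threshold
        print(f"P(error >= {threshold}) <= {bound:.3e}")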

  7. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  8. A combinatorial perspective of the protein inference problem.

    PubMed

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2013-01-01

    In a shotgun proteomics experiment, proteins are the most biologically meaningful output. The success of proteomics studies depends on the ability to accurately and efficiently identify proteins. Many methods have been proposed to facilitate the identification of proteins from peptide identification results. However, the relationship between protein identification and peptide identification has not been thoroughly explained before. In this paper, we devote ourselves to a combinatorial perspective of the protein inference problem. We employ combinatorial mathematics to calculate the conditional protein probabilities (protein probability means the probability that a protein is correctly identified) under three assumptions, which lead to a lower bound, an upper bound, and an empirical estimation of protein probabilities, respectively. The combinatorial perspective enables us to obtain an analytical expression for protein inference. Our method achieves comparable results with ProteinProphet in a more efficient manner in experiments on two data sets of standard protein mixtures and two data sets of real samples. Based on our model, we study the impact of unique peptides and degenerate peptides (degenerate peptides are peptides shared by at least two proteins) on protein probabilities. Meanwhile, we also study the relationship between our model and ProteinProphet. We name our program ProteinInfer. Its Java source code, our supplementary document and experimental results are available at: http://bioinformatics.ust.hk/proteininfer.

  9. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  10. Evolution of cosmic string networks

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas; Turok, Neil

    1989-01-01

    Results on cosmic strings are summarized including: (1) the application of non-equilibrium statistical mechanics to cosmic string evolution; (2) a simple one scale model for the long strings which has a great deal of predictive power; (3) results from large scale numerical simulations; and (4) a discussion of the observational consequences of our results. An upper bound on Gμ of approximately 10^-7 emerges from the millisecond pulsar gravity wave bound. How numerical uncertainties affect this is discussed. Any changes which weaken the bound would probably also give the long strings the dominant role in producing observational consequences.

  11. Numerical and analytical bounds on threshold error rates for hypergraph-product codes

    NASA Astrophysics Data System (ADS)

    Kovalev, Alexey A.; Prabhakar, Sanjay; Dumer, Ilya; Pryadko, Leonid P.

    2018-06-01

    We study analytically and numerically decoding properties of finite-rate hypergraph-product quantum low density parity-check codes obtained from random (3,4)-regular Gallager codes, with a simple model of independent X and Z errors. Several nontrivial lower and upper bounds for the decodable region are constructed analytically by analyzing the properties of the homological difference, equal to minus the logarithm of the maximum-likelihood decoding probability for a given syndrome. Numerical results include an upper bound for the decodable region from specific heat calculations in associated Ising models and a minimum-weight decoding threshold of approximately 7%.

  12. Red shift in the spectrum of a chlorophyll species is essential for the drought-induced dissipation of excess light energy in a poikilohydric moss, Bryum argenteum.

    PubMed

    Shibata, Yutaka; Mohamed, Ahmed; Taniyama, Koichiro; Kanatani, Kentaro; Kosugi, Makiko; Fukumura, Hiroshi

    2018-05-01

    Some mosses are extremely tolerant of drought stress. Their high drought tolerance relies on their ability to effectively dissipate absorbed light energy to heat under dry conditions. The energy dissipation mechanism in a drought-tolerant moss, Bryum argenteum, has been investigated using low-temperature picosecond time-resolved fluorescence spectroscopy. The results are compared between moss thalli samples harvested in Antarctica and in Japan. Both samples show almost the same quenching properties, suggesting an identical drought tolerance mechanism for the same species with two completely different habitats. A global target analysis was applied to a large set of data on the fluorescence-quenching dynamics for the 430-nm (chlorophyll-a selective) and 460-nm (chlorophyll-b and carotenoid selective) excitations in the temperature region from 5 to 77 K. This analysis strongly suggested that the quencher is formed in the major peripheral antenna of photosystem II, whose emission spectrum is significantly broadened and red-shifted in its quenched form. Two emission components at around 717 and 725 nm were assigned to photosystem I (PS I). The former component at around 717 nm is mildly quenched and probably bound to the PS I core complex, while the latter at around 725 nm is probably bound to the light-harvesting complex. The dehydration treatment caused a blue shift of the PS I emission peak via reduction of the exciton energy flow to the pigment responsible for the 725 nm band.

  13. An evaluation of risk estimation procedures for mixtures of carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, J.S.; Chen, J.J.

    1999-12-01

    The estimation of health risks from exposure to a mixture of chemical carcinogens is generally based on the combination of information from several available single compound studies. The current practice of directly summing the upper bound risk estimates of individual carcinogenic components as an upper bound on the total risk of a mixture is known to be generally too conservative. Gaylor and Chen (1996, Risk Analysis) proposed a simple procedure to compute an upper bound on the total risk using only the upper confidence limits and central risk estimates of individual carcinogens. The Gaylor-Chen procedure was derived based on an underlying assumption of normality for the distributions of individual risk estimates. In this paper the authors evaluated the Gaylor-Chen approach in terms of the coverage of the upper confidence limits on the true risks of individual carcinogens. In general, if the coverage probabilities for the individual carcinogens are all approximately equal to the nominal level, then the Gaylor-Chen approach should perform well. However, the Gaylor-Chen approach can be conservative or anti-conservative if some or all individual upper confidence limit estimates are conservative or anti-conservative.
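
    A minimal sketch of a normality-based combination of the kind discussed above, contrasted with direct summation of upper confidence limits; the risk numbers are hypothetical, and the exact Gaylor-Chen procedure should be taken from the original reference.

        import numpy as np

        # Hypothetical central (maximum-likelihood) risk estimates and 95% upper confidence
        # limits for three carcinogens in a mixture.
        central = np.array([2.0e-6, 5.0e-6, 1.0e-6])
        ucl     = np.array([6.0e-6, 9.0e-6, 4.0e-6])

        # Direct summation of UCLs: the practice described as overly conservative.
        naive_upper = ucl.sum()

        # Gaylor-Chen style combination under normality: sum the central estimates and add
        # the root-sum-of-squares of the individual margins (UCL - central).
        combined_upper = central.sum() + np.sqrt(((ucl - central) ** 2).sum())

        print(f"sum of UCLs:          {naive_upper:.2e}")
        print(f"combined upper bound: {combined_upper:.2e}")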

  14. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD), Manual v.1.2. The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology that is implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolving test issues. DOEPOD relies on the direct observation of occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both Hit-Miss and signal amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so that multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included.
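
    A minimal sketch of a hit/miss confidence bound of the kind DOEPOD reports, using the exact binomial (Clopper-Pearson) lower bound; this illustrates the 90/95 criterion, not the DOEPOD algorithm itself.

        from scipy import stats

        def pod_lower_bound(hits, trials, confidence=0.95):
            """Exact (Clopper-Pearson) one-sided lower confidence bound on POD."""
            if hits == 0:
                return 0.0
            return stats.beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        # 29 hits out of 29 trials at one flaw size gives the classic 90/95 demonstration:
        print(f"{pod_lower_bound(29, 29):.3f}")   # ~0.902, i.e. POD > 90% with 95% confidence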

  15. Stability metrics for multi-source biomedical data based on simplicial projections from probability distribution distances.

    PubMed

    Sáez, Carlos; Robles, Montserrat; García-Gómez, Juan M

    2017-02-01

    Biomedical data may be composed of individuals generated from distinct, meaningful sources. Due to possible contextual biases in the processes that generate data, there may exist an undesirable and unexpected variability among the probability distribution functions (PDFs) of the source subsamples, which, when uncontrolled, may lead to inaccurate or unreproducible research results. Classical statistical methods may have difficulty uncovering such variability when dealing with multi-modal, multi-type, multi-variate data. This work proposes two metrics for the analysis of stability among multiple data sources, robust to the aforementioned conditions, and defined in the context of data quality assessment. Specifically, a global probabilistic deviation metric and a source probabilistic outlyingness metric are proposed. The first provides a bounded degree of the global multi-source variability, designed as an estimator equivalent to the notion of normalized standard deviation of PDFs. The second provides a bounded degree of the dissimilarity of each source to a latent central distribution. The metrics are based on the projection of a simplex geometrical structure constructed from the Jensen-Shannon distances among the sources' PDFs. The metrics have been evaluated and demonstrated their correct behaviour on a simulated benchmark and with real multi-source biomedical data using the UCI Heart Disease data set. The biomedical data quality assessment based on the proposed stability metrics may improve the efficiency and effectiveness of biomedical data exploitation and research.
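
    A minimal sketch of the pairwise distance matrix underlying the simplex construction described above, using SciPy's Jensen-Shannon distance on hypothetical source histograms; the simplex projection and the two stability metrics themselves are not reproduced.

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        # Hypothetical discrete PDFs (histograms over the same bins) from three data sources.
        sources = np.array([
            [0.10, 0.20, 0.40, 0.30],
            [0.12, 0.18, 0.45, 0.25],
            [0.30, 0.30, 0.20, 0.20],
        ])

        n = len(sources)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = jensenshannon(sources[i], sources[j], base=2)
        print(np.round(D, 3))   # pairwise JS distances feeding the simplex embedding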

  16. Microscopic observation of magnon bound states and their dynamics.

    PubMed

    Fukuhara, Takeshi; Schauß, Peter; Endres, Manuel; Hild, Sebastian; Cheneau, Marc; Bloch, Immanuel; Gross, Christian

    2013-10-03

    The existence of bound states of elementary spin waves (magnons) in one-dimensional quantum magnets was predicted almost 80 years ago. Identifying signatures of magnon bound states has so far remained the subject of intense theoretical research, and their detection has proved challenging for experiments. Ultracold atoms offer an ideal setting in which to find such bound states by tracking the spin dynamics with single-spin and single-site resolution following a local excitation. Here we use in situ correlation measurements to observe two-magnon bound states directly in a one-dimensional Heisenberg spin chain comprising ultracold bosonic atoms in an optical lattice. We observe the quantum dynamics of free and bound magnon states through time-resolved measurements of two spin impurities. The increased effective mass of the compound magnon state results in slower spin dynamics as compared to single-magnon excitations. We also determine the decay time of bound magnons, which is probably limited by scattering on thermal fluctuations in the system. Our results provide a new way of studying fundamental properties of quantum magnets and, more generally, properties of interacting impurities in quantum many-body systems.

  17. Gravimetric method for in vitro calibration of skin hydration measurements.

    PubMed

    Martinsen, Ørjan G; Grimnes, Sverre; Nilsen, Jon K; Tronstad, Christian; Jang, Wooyoung; Kim, Hongsig; Shin, Kunsoo; Naderi, Majid; Thielmann, Frank

    2008-02-01

    A novel method for in vitro calibration of skin hydration measurements is presented. The method combines gravimetric and electrical measurements and reveals an exponential dependency of measured electrical susceptance to absolute water content in the epidermal stratum corneum. The results also show that absorption of water into the stratum corneum exhibits three different phases with significant differences in absorption time constant. These phases probably correspond to bound, loosely bound, and bulk water.

  18. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
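
    A minimal sketch of the sample-selection idea described above, with hypothetical surrogate, error bound, and high-fidelity models; the adjoint-based error estimation itself is not shown.

        import numpy as np

        rng = np.random.default_rng(2)

        def high_fidelity(x):          # hypothetical expensive model
            return np.sin(3.0 * x) + 0.1 * x

        def surrogate(x):              # hypothetical cheap approximation
            return np.sin(3.0 * x)

        def error_estimate(x):         # hypothetical computable bound on |high_fidelity - surrogate|
            return 0.1 * np.abs(x)

        threshold = 0.5                # event: model output exceeds the threshold
        samples = rng.uniform(-1.0, 1.0, 10_000)

        s, e = surrogate(samples), error_estimate(samples)
        reliable = np.abs(s - threshold) > e          # surrogate verdict cannot be overturned by the error
        indicator = np.where(reliable, s > threshold, high_fidelity(samples) > threshold)

        print("P(event) estimate:", indicator.mean())
        print("high-fidelity evaluations needed:", int((~reliable).sum()), "of", samples.size)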

  19. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  20. On the probability of cure for heavy-ion radiotherapy

    NASA Astrophysics Data System (ADS)

    Hanin, Leonid; Zaider, Marco

    2014-07-01

    The probability of a cure in radiation therapy (RT)—viewed as the probability of eventual extinction of all cancer cells—is unobservable, and the only way to compute it is through modeling the dynamics of cancer cell population during and post-treatment. The conundrum at the heart of biophysical models aimed at such prospective calculations is the absence of information on the initial size of the subpopulation of clonogenic cancer cells (also called stem-like cancer cells), which largely determines the outcome of RT in both individual and population settings. Other relevant parameters (e.g. potential doubling time, cell loss factor and survival probability as a function of dose) are, at least in principle, amenable to empirical determination. In this article we demonstrate that, for heavy-ion RT, microdosimetric considerations (justifiably ignored in conventional RT) combined with an expression for the clone extinction probability obtained from a mechanistic model of radiation cell survival lead to useful upper bounds on the size of the pre-treatment population of clonogenic cancer cells as well as upper and lower bounds on the cure probability. The main practical impact of these limiting values is the ability to make predictions about the probability of a cure for a given population of patients treated with newer, still unexplored treatment modalities from the empirically determined probability of a cure for the same or similar population resulting from conventional low linear energy transfer (typically photon/electron) RT. We also propose that the current trend to deliver a lower total dose in a smaller number of fractions with larger-than-conventional doses per fraction has physical limits that must be understood before embarking on a particular treatment schedule.
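
    A minimal sketch of how bounds on the unknown clonogen number translate into bounds on cure probability, using the common Poisson tumour-control-probability form as a stand-in for the paper's mechanistic model; all numbers are hypothetical.

        import numpy as np

        def tcp(n_clonogens, surviving_fraction):
            """Poisson TCP: probability that no clonogenic cell survives treatment."""
            return np.exp(-n_clonogens * surviving_fraction)

        # If the surviving fraction after the full course is known, bounds on the unknown
        # initial clonogen number translate directly into bounds on the cure probability.
        sf = 1.0e-8
        for n0 in (1e6, 1e7, 1e8):          # hypothetical lower/central/upper clonogen counts
            print(f"N0 = {n0:.0e}:  TCP = {tcp(n0, sf):.3f}")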

  1. The composition of the Martian dark regions: Observations and analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Singer, R. B.

    1980-01-01

    Near infrared telescopic spectrophotometry for dark regions is presented and interpreted using laboratory studies of iron bearing mineral mixtures and terrestrial oxidized and unoxidized basalts. Upon closer inspection (by spacecraft) the telescopic dark regions were found to consist of large scale intermixtures of bright soil (aeolian dust) and dark materials. The dark materials themselves consist of an intimate physical association of very fine grained ferric oxide bearing material with relatively high near infrared reflectance and darker, relatively unoxidized rocks or rock fragments. While these two components could exist finely intermixed in a soil, a number of lines of evidence indicate that the usual occurrence is probably a thin coating of physically bound oxidized material. The coated rocks are dark and generally clinopyroxene bearing. The shallow band depths and low overall reflectances indicate that opaque minerals such as magnetite are probably abundant.

  2. Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.

  3. The isolation limits of stochastic vibration

    NASA Technical Reports Server (NTRS)

    Knopse, C. R.; Allaire, P. E.

    1993-01-01

    The vibration isolation problem is formulated as a 1D kinematic problem. The geometry of the stochastic wall trajectories arising from the stroke constraint is defined in terms of their significant extrema. An optimal control solution for the minimum acceleration return path determines a lower bound on platform mean square acceleration. This bound is expressed in terms of the probability density function on the significant maxima and the conditional fourth moment of the first passage time inverse. The first of these is found analytically while the second is found using a Monte Carlo simulation. The rms acceleration lower bound as a function of available space is then determined through numerical quadrature.

  4. Inflectional instabilities in the wall region of bounded turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Swearingen, Jerry D.; Blackwelder, Ron F.; Spalart, Philippe R.

    1987-01-01

    The primary thrust of this research was to identify one or more mechanisms responsible for strong turbulence production events in the wall region of bounded turbulent shear flows. Based upon previous work in a transitional boundary layer, it seemed highly probable that the production events were preceded by an inflectional velocity profile which formed on the interface between the low-speed streak and the surrounding fluid. In bounded transitional flows, this unstable profile developed velocity fluctuations in the streamwise direction and in the direction perpendicular to the sheared surface. The rapid growth of these instabilities leads to a breakdown and production of turbulence. Since bounded turbulent flows have many of the same characteristics, they may also experience a similar type of breakdown and turbulence production mechanism.

  5. An alternative empirical likelihood method in missing response problems and causal inference.

    PubMed

    Ren, Kaili; Drummond, Christopher A; Brewster, Pamela S; Haller, Steven T; Tian, Jiang; Cooper, Christopher J; Zhang, Biao

    2016-11-30

    Missing responses are a common problem in medical, social, and economic studies. When responses are missing at random, a complete case data analysis may result in biases. A popular debiasing method is inverse probability weighting, proposed by Horvitz and Thompson. To improve efficiency, Robins et al. proposed an augmented inverse probability weighting method. The augmented inverse probability weighting estimator has a double-robustness property and achieves the semiparametric efficiency lower bound when the regression model and propensity score model are both correctly specified. In this paper, we introduce an empirical likelihood-based estimator as an alternative to Qin and Zhang (2007). Our proposed estimator is also doubly robust and locally efficient. Simulation results show that the proposed estimator has better performance when the propensity score is correctly modeled. Moreover, the proposed method can be applied in the estimation of average treatment effect in observational causal inferences. Finally, we apply our method to an observational study of smoking, using data from the Cardiovascular Outcomes in Renal Atherosclerotic Lesions clinical trial. Copyright © 2016 John Wiley & Sons, Ltd.
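
    A minimal sketch of the standard augmented inverse probability weighting (AIPW) estimator referred to above, on simulated data with responses missing at random; this illustrates the AIPW form, not the paper's empirical-likelihood alternative.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(3)
        n = 5000
        x = rng.normal(size=(n, 1))
        y = 2.0 + 1.5 * x[:, 0] + rng.normal(size=n)          # responses (partly unobserved)
        p_obs = 1.0 / (1.0 + np.exp(-(0.5 + x[:, 0])))        # missing-at-random mechanism
        r = rng.uniform(size=n) < p_obs                       # response indicator

        # Working models: propensity score e(x) and outcome regression m(x), fitted on observed data.
        e_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]
        m_hat = LinearRegression().fit(x[r], y[r]).predict(x)

        # AIPW estimate of E[Y]: doubly robust in e(x) and m(x).
        y_obs = np.where(r, y, 0.0)
        aipw = np.mean(r * y_obs / e_hat - (r - e_hat) / e_hat * m_hat)
        print(f"AIPW mean estimate: {aipw:.3f}   (true mean = 2.0)")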

  6. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  7. Near-Earth Phase Risk Comparison of Human Mars Campaign Architectures

    NASA Technical Reports Server (NTRS)

    Manning, Ted A.; Nejad, Hamed S.; Mattenberger, Chris

    2013-01-01

    A risk analysis of the launch, orbital assembly, and Earth-departure phases of human Mars exploration campaign architectures was completed as an extension of a probabilistic risk assessment (PRA) originally carried out under the NASA Constellation Program Ares V Project. The objective of the updated analysis was to study the sensitivity of loss-of-campaign risk to such architectural factors as composition of the propellant delivery portion of the launch vehicle fleet (Ares V heavy-lift launch vehicle vs. smaller/cheaper commercial launchers) and the degree of launcher or Mars-bound spacecraft element sparing. Both a static PRA analysis and a dynamic, event-based Monte Carlo simulation were developed and used to evaluate the probability of loss of campaign under different sparing options. Results showed that with no sparing, loss-of-campaign risk is strongly driven by launcher count and on-orbit loiter duration, favoring an all-Ares V launch approach. Further, the reliability of the all-Ares V architecture showed significant improvement with the addition of a single spare launcher/payload. Among architectures utilizing a mix of Ares V and commercial launchers, those that minimized the on-orbit loiter duration of Mars-bound elements were found to exceed the reliability of the no-spare all-Ares V campaign if unlimited commercial vehicle sparing was assumed.

  8. A Verification-Driven Approach to Control Analysis and Tuning

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2008-01-01

    This paper proposes a methodology for the analysis and tuning of controllers using control verification metrics. These metrics, which are introduced in a companion paper, measure the size of the largest uncertainty set of a given class for which the closed-loop specifications are satisfied. This framework integrates deterministic and probabilistic uncertainty models into a setting that enables the deformation of sets in the parameter space, the control design space, and in the union of these two spaces. In regard to control analysis, we propose strategies that enable bounding regions of the design space where the specifications are satisfied by all the closed-loop systems associated with a prescribed uncertainty set. When this is unfeasible, we bound regions where the probability of satisfying the requirements exceeds a prescribed value. In regard to control tuning, we propose strategies for the improvement of the robust characteristics of a baseline controller. Some of these strategies use multi-point approximations to the control verification metrics in order to alleviate the numerical burden of solving a min-max problem. Since this methodology targets non-linear systems having an arbitrary, possibly implicit, functional dependency on the uncertain parameters and for which high-fidelity simulations are available, it is applicable to realistic engineering problems.

  9. Border screening vs. community level disease control for infectious diseases: Timing and effectiveness

    NASA Astrophysics Data System (ADS)

    Kim, Sehjeong; Chang, Dong Eui

    2017-06-01

    Many studies have used simple mathematical models or statistical analyses to investigate the ineffectiveness of border screening during the 2003 and 2009 pandemics. However, the use of border screening is still controversial, largely because previous work has focused only on the functionality of border screening without considering when it is applied. In this paper, we attempt to qualitatively answer whether the use of border screening is a desirable action during a disease pandemic. Thus, a novel mathematical model with a transition probability of status change during flight and border screening is developed. A condition for assessing the timing of border screening is established in terms of a lower bound on the basic reproduction number. If the lower bound is greater than one, which indicates a pandemic, then the border screening may not be effective and the disease persists. In this case, a community level control strategy should be conducted.

  10. Electronic and rovibrational quantum chemical analysis of C3P-: the next interstellar anion?

    NASA Astrophysics Data System (ADS)

    Fortenberry, Ryan C.; Lukemire, Joseph A.

    2015-11-01

    C3P- is analogous to the known interstellar anion C3N- with phosphorus replacing nitrogen in a simple step down the periodic table. In this work, it is shown that C3P- is likely to possess a dipole-bound excited state. It has been hypothesized and observationally supported that dipole-bound excited states are an avenue through which anions could be formed in the interstellar medium. Additionally, C3P- has a valence excited state that may lead to further stabilization of this molecule, and C3P- has a larger dipole moment than neutral C3P (˜6 D versus ˜4 D). As such, C3P- is probably a more detectable astromolecule than even its corresponding neutral radical. Highly accurate quantum chemical quartic force fields are also applied to C3P- and its singly 13C substituted isotopologues in order to provide structures, vibrational frequencies, and spectroscopic constants that may aid in its detection.

  11. Enclosure fire hazard analysis using relative energy release criteria. [burning rate and combustion control

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1978-01-01

    A method for predicting the probable course of fire development in an enclosure is presented. This fire modeling approach uses a graphic plot of five fire development constraints, the relative energy release criteria (RERC), to bound the heat release rates in an enclosure as a function of time. The five RERC are flame spread rate, fuel surface area, ventilation, enclosure volume, and total fuel load. They may be calculated versus time based on the specified or empirical conditions describing the specific enclosure, the fuel type and load, and the ventilation. The calculation of these five criteria, using the common basis of energy release rates versus time, provides a unifying framework for the utilization of available experimental data from all phases of fire development. The plot of these criteria reveals the probable fire development envelope and indicates which fire constraint will be controlling during a criteria time period. Examples of RERC application to fire characterization and control and to hazard analysis are presented along with recommendations for the further development of the concept.

  12. Nuclear rainbow in elastic scattering of ⁹Be nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glukhov, Yu. A., E-mail: gloukhov@inbox.ru; Ogloblin, A. A.; Artemov, K. P.

    2010-01-15

    A systematic investigation of the elastic scattering of the ⁹Be nucleus, which is among the most loosely bound stable nuclei, was performed. Differential cross sections for elastic ⁹Be + ¹⁶O scattering were measured at a c.m. energy of 47.5 MeV (beam of 132-MeV ¹⁶O nuclei). Available data at different energy values and data for neighboring nuclei were included in our analysis. As a result, the very fact of rainbow scattering was reliably established for the first time in systems involving ⁹Be. In addition, the analysis in question made it possible to identify Airy minima and to determine unambiguously the nucleus-nucleus potential with a high probability.

  13. What is epistemic value in free energy models of learning and acting? A bounded rationality perspective.

    PubMed

    Ortega, Pedro A; Braun, Daniel A

    2015-01-01

    Free energy models of learning and acting do not only care about utility or extrinsic value, but also about intrinsic value, that is, the information value stemming from probability distributions that represent beliefs or strategies. While these intrinsic values can be interpreted as epistemic values or exploration bonuses under certain conditions, the framework of bounded rationality offers a complementary interpretation in terms of information-processing costs that we discuss here.

  14. Implications of the Super-K atmospheric, long baseline, and reactor data for the mixing angles θ13 and θ23

    NASA Astrophysics Data System (ADS)

    Escamilla-Roa, J.; Latimer, D. C.; Ernst, D. J.

    2010-01-01

    A three-neutrino analysis of oscillation data is performed using the recent, more finely binned Super-K oscillation data, together with the CHOOZ, K2K, and MINOS data. The solar parameters Δ21 and θ12 are fixed from a recent analysis and Δ32, θ13, and θ23 are varied. We utilize the full three-neutrino oscillation probability and an exact treatment of Earth’s Mikheyev-Smirnov-Wolfenstein (MSW) effect with a castle-wall density. By including terms linear in θ13 and ε := θ23 - π/4, we find asymmetric errors for these parameters: θ13 = -0.07 (+0.18, -0.11) and ε = 0.03 (+0.09, -0.15). For θ13, we see that the lower bound is primarily set by the CHOOZ experiment while the upper bound is determined by the low energy e-like events in the Super-K atmospheric data. We find that the parameters θ13 and ε are correlated—the preferred negative value of θ13 permits the preferred value of θ23 to be in the second octant, and the true value of θ13 affects the allowed region for θ23.

  15. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values are identified. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
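
    A minimal sketch of the stress-strength computation discussed above for the normal-normal case, with a small Monte Carlo perturbation illustrating how sensitive high-reliability values are to the assumed strength distribution; all parameter values are hypothetical.

        import numpy as np
        from scipy import stats

        # Hypothetical strength and stress, both assumed normal.
        mu_s, sd_s = 100.0, 5.0      # strength
        mu_l, sd_l = 70.0, 6.0       # stress (load)

        # Closed form for the normal-normal case: R = P(strength > stress).
        r_normal = stats.norm.cdf((mu_s - mu_l) / np.hypot(sd_s, sd_l))

        # Sensitivity check: replace the strength distribution by a slightly heavier-tailed
        # alternative with the same mean and variance and re-estimate R by Monte Carlo.
        rng = np.random.default_rng(4)
        n = 1_000_000
        strength_t = mu_s + sd_s * rng.standard_t(df=5, size=n) / np.sqrt(5.0 / 3.0)  # unit-variance t5
        stress = rng.normal(mu_l, sd_l, size=n)
        r_t = np.mean(strength_t > stress)

        print(f"R (normal strength):            {r_normal:.6f}")
        print(f"R (t5 strength, same moments):  {r_t:.6f}")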

  16. Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints

    NASA Astrophysics Data System (ADS)

    Nocquet, J.-M.

    2018-04-01

    Although surface displacements observed by geodesy are linear combinations of slip at faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach of inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probabilities calculations (e.g. Genz & Bretz 2009). Posterior mean and covariance can also be efficiently derived. I show that the Maximum Posterior (MAP) can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the single truncated case or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Monte Carlo Markov Chain (MCMC) sampling is shown for a synthetic example and a real case for interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computer power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdf, mean, variance or covariance are obtained independently one from each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum Posterior (MAP) is extremely fast.

  17. Quantum probability assignment limited by relativistic causality.

    PubMed

    Han, Yeong Deok; Choi, Taeseung

    2016-03-14

    Quantum theory has nonlocal correlations, which bothered Einstein, but which were found to satisfy relativistic causality. Correlation for a shared quantum state manifests itself, in the standard quantum framework, by joint probability distributions that can be obtained by applying state reduction and the probability assignment rule known as the Born rule. Quantum correlations, which show nonlocality when the shared state is entangled, can be changed if we apply a different probability assignment rule. As a result, the amount of nonlocality in quantum correlation will be changed. The issue is whether changing the rule of quantum probability assignment breaks relativistic causality. We have shown that the Born rule for quantum measurement can be derived by requiring the relativistic causality condition. This shows how relativistic causality limits the upper bound of quantum nonlocality through quantum probability assignment.

  18. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure makes the assumptions that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But, is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
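
    A minimal sketch of the method-of-moments fit described above, applied to hypothetical annual peak flows; Bulletin 17-style refinements such as regional skew weighting and outlier tests are omitted.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        peaks = rng.lognormal(mean=6.0, sigma=0.5, size=60)   # hypothetical annual peak flows (cfs)

        logq = np.log10(peaks)
        mean, sd = logq.mean(), logq.std(ddof=1)
        skew = stats.skew(logq, bias=False)                   # station skew, no regional weighting

        # Quantiles of the fitted log-Pearson Type III: scipy's pearson3 is standardized,
        # so its ppf gives the frequency factor K for the given skew.
        for aep in (0.10, 0.02, 0.01):                        # 10-, 50-, 100-year floods
            k = stats.pearson3.ppf(1.0 - aep, skew)
            q = 10 ** (mean + k * sd)
            print(f"AEP {aep:>5.2f}:  {q:,.0f} cfs")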

  19. Prospect balancing theory: Bounded rationality of drivers' speed choice.

    PubMed

    Schmidt-Daffy, Martin

    2014-02-01

    This paper introduces a new approach to model the psychological determinants of drivers' speed choice: prospect-balancing theory. The theory transfers psychological insight into the bounded rationality of human decision-making to the field of driving behaviour. Speed choice is conceptualized as a trade-off between two options for action: the option to drive slower and the option to drive faster. Each option is weighted according to a subjective value and a subjectively weighted probability attributed to the achievement of the associated action goal; e.g. to avoid an accident by driving more slowly. The theory proposes that the subjective values and weightings of probability differ systematically from the objective conditions and thereby usually favour a cautious speed choice. A driving simulation study with 24 male participants supports this assumption. In a conflict between a monetary gain in case of fast arrival and a monetary loss in case of a collision with a deer, participants chose a velocity lower than that which would maximize their pay-out. Participants' subjective certainty of arriving in time and of avoiding a deer collision assessed at different driving speeds diverged from the respective objective probabilities in accordance with the observed bias in choice of speed. Results suggest that the bounded rationality of drivers' speed choice might be used to support attempts to improve road safety. Thus, understanding the motivational and perceptual determinants of this intuitive mode of decision-making might be a worthwhile focus of future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
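
    The ultra-reliable setting SURE targets (slow exponential fault arrivals racing against fast, generally distributed recoveries, with a death-state probability evaluated over a mission time) can be illustrated with a small Monte Carlo sketch. This is not SURE's algebraic bounding method, and the states, rates, and recovery distribution below are hypothetical.

```python
import math
import random

def simulate_mission(T=10.0, fault_rate=1e-4, recovery_mean=1e-3, recovery_sd=5e-4):
    """One mission of a toy 3-state semi-Markov model:
    OK --fault (slow, exponential)--> RECOVERING --repair (fast, lognormal)--> OK.
    A second fault while still RECOVERING sends the system to the death state."""
    t, state = 0.0, "OK"
    while t < T:
        if state == "OK":
            t += random.expovariate(fault_rate)            # slow transition
            state = "RECOVERING"
        else:
            # Fast transition described only by its mean and standard deviation.
            mu = math.log(recovery_mean**2 / math.sqrt(recovery_mean**2 + recovery_sd**2))
            sigma = math.sqrt(math.log(1.0 + (recovery_sd / recovery_mean) ** 2))
            repair = random.lognormvariate(mu, sigma)
            second_fault = random.expovariate(fault_rate)
            if second_fault < repair and t + second_fault < T:
                return True                                 # entered the death state
            t += repair
            state = "OK"
    return False

trials = 200_000
failures = sum(simulate_mission() for _ in range(trials))
print(f"estimated death-state probability: {failures / trials:.2e}")
```

    With realistic ultra-reliability parameters the true death-state probability lies far below what plain simulation can resolve, which is exactly why SURE computes algebraic upper and lower bounds (via the White and Lee theorems) instead.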

  1. SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.

  2. Achieving unequal error protection with convolutional codes

    NASA Technical Reports Server (NTRS)

    Mills, D. G.; Costello, D. J., Jr.; Palazzo, R., Jr.

    1994-01-01

    This paper examines the unequal error protection capabilities of convolutional codes. Both time-invariant and periodically time-varying convolutional encoders are examined. The effective free distance vector is defined and is shown to be useful in determining the unequal error protection (UEP) capabilities of convolutional codes. A modified transfer function is used to determine an upper bound on the bit error probabilities for individual input bit positions in a convolutional encoder. The bound is heavily dependent on the individual effective free distance of the input bit position. A bound relating two individual effective free distances is presented. The bound is a useful tool in determining the maximum possible disparity in individual effective free distances of encoders of specified rate and memory distribution. The unequal error protection capabilities of convolutional encoders of several rates and memory distributions are determined and discussed.
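
    For orientation, bounds of the kind described above usually take a union-bound form over error events; a generic, hedged statement (the exact coefficients are what the paper's modified transfer function supplies) is:

```latex
% P_b(j): bit error probability of input position j
% d_eff(j): individual effective free distance of position j
% B_d(j): weight of error events at distance d that affect position j
% P_2(d): pairwise error probability of two paths at distance d on the channel
P_b(j) \;\lesssim\; \sum_{d \ge d_{\mathrm{eff}}(j)} B_d(j)\, P_2(d)
```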

  3. Evidence against the involvement of ionically bound cell wall proteins in pea epicotyl growth

    NASA Technical Reports Server (NTRS)

    Melan, M. A.; Cosgrove, D. J.

    1988-01-01

    Ionically bound cell wall proteins were extracted from 7-day-old etiolated pea (Pisum sativum L. cv Alaska) epicotyls with 3 molar LiCl. Polyclonal antiserum was raised in rabbits against the cell wall proteins. Growth assays showed that treatment of growing region segments (5-7 millimeters) of peas with either dialyzed serum, serum globulin fraction, affinity purified immunoglobulin, or papain-cleaved antibody fragments had no effect on growth. Immunofluorescence microscopy confirmed antibody binding to cell walls and penetration of the antibodies into the tissues. Western blot analysis, immunoassay results, and affinity chromatography utilizing Sepharose-bound antibodies confirmed recognition of the protein preparation by the antibodies. Experiments employing in vitro extension as a screening measure indicated no effect upon extension by antibodies, by 50 millimolar LiCl perfusion of the apoplast, or by 3 molar LiCl extraction. Addition of cell wall protein to protease pretreated segments did not restore extension, nor did addition of cell wall protein to untreated segments increase extension. It is concluded that, although evidence suggests that protein is responsible for the process of extension, the class(es) of proteins which are extracted from pea cell walls with 3 molar LiCl are probably not involved in this process.

  4. Stimulus-induced, sleep-bound, focal seizures: a case report.

    PubMed

    Siclari, Francesca; Nobili, Lino; Lo Russo, Giorgio; Moscato, Alessio; Buck, Alfred; Bassetti, Claudio L; Khatami, Ramin

    2011-12-01

    In nocturnal frontal lobe epilepsy (NFLE), seizures occur almost exclusively during NREM sleep. Why precisely these seizures are sleep-bound remains unknown. Studies of patients with nonlesional familial forms of NFLE have suggested that the arousal system may play a major role in their pathogenesis. We report the case of a patient with a pharmaco-resistant, probably cryptogenic form of non-familial NFLE and strictly sleep-bound seizures that could be elicited by alerting stimuli and were associated with ictal bilateral thalamic and right orbital-insular hyperperfusion on SPECT imaging. Case report. University Hospital Zurich. One patient with pharmaco-resistant epilepsy. This case shows that the arousal system plays a fundamental role also in cryptogenic non-familial forms of NFLE.

  5. Algorithms for Differential Games with Bounded Control and States.

    DTIC Science & Technology

    1982-03-01

    [Report documentation page, partially legible OCR] Algorithms for Differential Games with Bounded Control and States (U), California Univ Los Angeles School of Engineering and Applied Science; final report, 11/29/79-11/28/... From the abstract: "...problems are probably the most natural application of differential game theory and have been treated by many authors as such. Very few problems of this..."

  6. A risk assessment method for multi-site damage

    NASA Astrophysics Data System (ADS)

    Millwater, Harry Russell, Jr.

    This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths with the centers of the initial cracks spaced uniformly apart. The data used were chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
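
    The lower bound described above, the chance that the largest initial crack already exceeds the critical size, is easy to sketch. The lognormal initial-crack-size distribution and the numerical values below are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical initial crack-size distribution (lengths in mm).
crack_dist = lognorm(s=0.5, scale=0.2)   # median 0.2 mm
a_crit = 2.0                              # critical size that grows across the ligament
n_cracks = 100                            # number of collinear crack sites

# Lower bound on failure probability: P(max initial crack > a_crit)
# = 1 - F(a_crit)^n for independent, identically distributed initial sizes.
p_lower = 1.0 - crack_dist.cdf(a_crit) ** n_cracks
print(f"extreme-value lower bound on P(failure): {p_lower:.3e}")

# Cross-check with Monte Carlo sampling of the maximum initial crack.
samples = crack_dist.rvs(size=(100_000, n_cracks), random_state=0)
print(f"Monte Carlo estimate:                    {(samples.max(axis=1) > a_crit).mean():.3e}")
```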

  7. Graph Theory-Based Pinning Synchronization of Stochastic Complex Dynamical Networks.

    PubMed

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-02-01

    This paper is concerned with the adaptive pinning synchronization problem of stochastic complex dynamical networks (CDNs). Based on algebraic graph theory and Lyapunov theory, pinning controller design conditions are derived, and the rigorous convergence analysis of synchronization errors in the probability sense is also conducted. Compared with the existing results, the topology structures of stochastic CDN are allowed to be unknown due to the use of graph theory. In particular, it is shown that the selection of nodes for pinning depends on the unknown lower bounds of coupling strengths. Finally, an example on a Chua's circuit network is given to validate the effectiveness of the theoretical results.

  8. Analysis of potential hazards associated with 241Am loaded resins from nitrate media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulte, Louis D.; Rubin, Jim; Fife, Keith William

    2016-02-19

    LANL has been contacted to provide possible assistance in safe disposition of a number of 241Am-bearing materials associated with local industrial operations. Among the materials are ion exchange resins which have been in contact with 241Am and nitric acid, and which might have potential for exothermic reaction. The purpose of this paper is to analyze and define the resin forms and quantities to the extent possible from available data to allow better bounding of the potential reactivity hazard of the resin materials. An additional purpose is to recommend handling procedures to minimize the probability of an uncontrolled exothermic reaction.

  9. State space truncation with quantified errors for accurate solutions to discrete Chemical Master Equation

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEG), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
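
    A minimal single-species illustration of the reflecting-boundary idea (not the paper's multi-MEG machinery; the birth and death rates and truncation size below are hypothetical): the steady-state probability sitting on the reflecting boundary state plays the role of the indicative truncation-error bound.

```python
import numpy as np

# Birth-death model: 0 --k_b--> X (constant birth), X --k_d--> 0 (death at rate k_d * n).
k_b, k_d, n_max = 20.0, 1.0, 40        # hypothetical rates; truncate at n_max copies

# Truncated generator A (column convention, dP/dt = A P) with a reflecting boundary:
# the birth transition out of n_max is simply removed.
A = np.zeros((n_max + 1, n_max + 1))
for n in range(n_max + 1):
    if n < n_max:
        A[n + 1, n] += k_b             # birth n -> n + 1
        A[n, n] -= k_b
    if n > 0:
        A[n - 1, n] += k_d * n         # death n -> n - 1
        A[n, n] -= k_d * n

# Steady state = null vector of the generator, normalized to a probability vector.
eigvals, eigvecs = np.linalg.eig(A)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p = p / p.sum()

# For this model the exact (untruncated) steady state is Poisson(k_b / k_d); the mass
# beyond n_max is what the boundary probability is standing in for.
print(f"P(boundary state n = {n_max}) = {p[-1]:.3e}  (indicative truncation-error bound)")
```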

  10. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  11. State Space Truncation with Quantified Errors for Accurate Solutions to Discrete Chemical Master Equation

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-04-22

    The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of (1) the birth and death model, (2) the single gene expression model, (3) the genetic toggle switch model, and (4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks.

  12. A unified approach for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties

    NASA Astrophysics Data System (ADS)

    Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie

    2017-09-01

    Automotive brake systems are always subjected to various types of uncertainties, and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using the α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. Then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes, and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.

  13. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
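
    The core claim can be reproduced in a few lines: when the component probabilities are only approximately known, a simple linear additive rule for a conjunction of two events lands in the same error ballpark as the normative product. The noise level, equal weighting, constant anchor, and independence assumption below are hypothetical stand-ins for the article's richer simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True component probabilities and the normative conjunction p(A and B) = pA * pB
# (independence assumed in this toy example).
pA, pB = rng.uniform(0.1, 0.9, n), rng.uniform(0.1, 0.9, n)
true_conj = pA * pB

# Noisy subjective estimates of the components (hypothetical noise level).
noise = 0.15
pA_hat = np.clip(pA + rng.normal(0, noise, n), 0, 1)
pB_hat = np.clip(pB + rng.normal(0, noise, n), 0, 1)

multiplicative = pA_hat * pB_hat                       # normative rule on noisy inputs
linear_additive = 0.5 * pA_hat + 0.5 * pB_hat - 0.25   # equal-weight additive rule with a constant anchor

for name, est in [("multiplicative", multiplicative), ("linear additive", linear_additive)]:
    print(f"{name:>16}: mean absolute error = {np.abs(est - true_conj).mean():.3f}")
```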

  14. Imprecise probability assessment of tipping points in the climate system

    PubMed Central

    Kriegler, Elmar; Hall, Jim W.; Held, Hermann; Dawson, Richard; Schellnhuber, Hans Joachim

    2009-01-01

    Major restructuring of the Atlantic meridional overturning circulation, the Greenland and West Antarctic ice sheets, the Amazon rainforest and ENSO, are a source of concern for climate policy. We have elicited subjective probability intervals for the occurrence of such major changes under global warming from 43 scientists. Although the expert estimates highlight large uncertainty, they allocate significant probability to some of the events listed above. We deduce conservative lower bounds for the probability of triggering at least 1 of those events of 0.16 for medium (2–4 °C), and 0.56 for high global mean temperature change (above 4 °C) relative to year 2000 levels. PMID:19289827
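
    A back-of-the-envelope version of the "at least one event" calculation, using placeholder numbers rather than the elicited intervals: without any dependence assumption the union probability is at least the largest individual lower bound (Fréchet), and under independence it is one minus the product of the complements. The paper's aggregation is more careful than either.

```python
# Hypothetical elicited lower bounds on the probabilities of five tipping events.
lower_bounds = {"AMOC": 0.05, "Greenland": 0.05, "WAIS": 0.05, "Amazon": 0.05, "ENSO": 0.02}

# Dependence-free (Frechet) lower bound: P(at least one) >= max_i P(event_i).
p_frechet = max(lower_bounds.values())

# Value under an independence assumption: 1 - prod(1 - p_i).
p_indep = 1.0
for p in lower_bounds.values():
    p_indep *= (1.0 - p)
p_indep = 1.0 - p_indep

print(f"dependence-free lower bound: {p_frechet:.2f}")
print(f"independence-based value:    {p_indep:.2f}")
```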

  15. Higher order Stark effect and transition probabilities on hyperfine structure components of hydrogen like atoms

    NASA Astrophysics Data System (ADS)

    Pal'Chikov, V. G.

    2000-08-01

    A quantum-electrodynamical (QED) perturbation theory is developed for hydrogen and hydrogen-like atomic systems with interaction between bound electrons and radiative field being treated as the perturbation. The dependence of the perturbed energy of levels on hyperfine structure (hfs) effects and on the higher-order Stark effect is investigated. Numerical results have been obtained for the transition probability between the hfs components of hydrogen-like bismuth.

  16. Probability of undetected error after decoding for a concatenated coding scheme

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Lin, S.

    1984-01-01

    A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.

  17. Hydrogen Hotspots on Vesta

    NASA Image and Video Library

    2012-09-20

    This image shows that NASA's Dawn mission detected abundances of hydrogen in a wide swath around the equator of the giant asteroid Vesta. The hydrogen probably exists in the form of hydroxyl or water bound to minerals in Vesta's surface.

  18. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and gave false indications of error on 0.2% of the acquisitions.

  19. Small violations of Bell inequalities for multipartite pure random states

    NASA Astrophysics Data System (ADS)

    Drumond, Raphael C.; Duarte, Cristhiano; Oliveira, Roberto I.

    2018-05-01

    For any finite number of parts, measurements, and outcomes in a Bell scenario, we estimate the probability of random N-qudit pure states to substantially violate any Bell inequality with uniformly bounded coefficients. We prove that under some conditions on the local dimension, the probability to find any significant amount of violation goes to zero exponentially fast as the number of parts goes to infinity. In addition, we also prove that if the number of parts is at least 3, this probability also goes to zero as the local Hilbert space dimension goes to infinity.

  20. Producing the deuteron in stars: anthropic limits on fundamental constants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, Luke A.; Lewis, Geraint F., E-mail: luke.barnes@sydney.edu.au, E-mail: gfl@physics.usyd.edu.au

    2017-07-01

    Stellar nucleosynthesis proceeds via the deuteron (D), but only a small change in the fundamental constants of nature is required to unbind it. Here, we investigate the effect of altering the binding energy of the deuteron on proton burning in stars. We find that the most definitive boundary in parameter space that divides probably life-permitting universes from probably life-prohibiting ones is between a bound and unbound deuteron. Due to neutrino losses, a ball of gas will undergo rapid cooling or stabilization by electron degeneracy pressure before it can form a stable, nuclear reaction-sustaining star. We also consider a less-bound deuteron, which changes the energetics of the pp and pep reactions. The transition to endothermic pp and pep reactions, and the resulting beta-decay instability of the deuteron, do not seem to present catastrophic problems for life.

  1. Radio-nuclide mixture identification using medium energy resolution detectors

    DOEpatents

    Nelson, Karl Einar

    2013-09-17

    According to one embodiment, a method for identifying radio-nuclides includes receiving spectral data, extracting a feature set from the spectral data comparable to a plurality of templates in a template library, and using a branch and bound method to determine a probable template match based on the feature set and templates in the template library. In another embodiment, a device for identifying unknown radio-nuclides includes a processor, a multi-channel analyzer, and a memory operatively coupled to the processor, the memory having computer readable code stored thereon. The computer readable code is configured, when executed by the processor, to receive spectral data, to extract a feature set from the spectral data comparable to a plurality of templates in a template library, and to use a branch and bound method to determine a probable template match based on the feature set and templates in the template library.

  2. On the capacity of ternary Hebbian networks

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1991-01-01

    Networks of ternary neurons storing random vectors over the set {-1, 0, 1} by the so-called Hebbian rule are considered. It is shown that the maximal number of stored patterns that are equilibrium states of the network with probability tending to one as N tends to infinity is at least on the order of N^(2 - 1/alpha)/K, where N is the number of neurons, K is the number of nonzero elements in a pattern, and t = alpha*K, with alpha between 1/2 and 1, is the threshold in the neuron function. While, for small K, this bound is similar to that obtained for fully connected binary networks, the number of interneural connections required in the ternary case is considerably smaller. Similar bounds, incorporating error probabilities, are shown to guarantee, in the same probabilistic sense, the correction of errors in the nonzero elements and in the location of these elements.
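
    A small numerical check of the storage rule described above, with hypothetical sizes: sparse ternary patterns are stored by the Hebbian outer-product rule and recalled with the threshold t = alpha*K.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, P = 400, 20, 30            # neurons, nonzero entries per pattern, stored patterns
alpha = 0.6
t = alpha * K                    # neuron threshold t = alpha * K

# Sparse ternary patterns: K randomly placed entries from {-1, +1}, the rest 0.
patterns = np.zeros((P, N))
for mu in range(P):
    idx = rng.choice(N, size=K, replace=False)
    patterns[mu, idx] = rng.choice([-1.0, 1.0], size=K)

# Hebbian weights (no self-connections).
W = patterns.T @ patterns
np.fill_diagonal(W, 0.0)

def update(x):
    """One synchronous update with the ternary threshold rule."""
    h = W @ x
    return np.where(np.abs(h) > t, np.sign(h), 0.0)

# Fraction of stored patterns that are equilibrium (fixed) points of one update.
stable = sum(np.array_equal(update(p), p) for p in patterns)
print(f"{stable}/{P} stored patterns are equilibrium states")
```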

  3. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
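
    For a simple Poisson counting observation the recipe reduces to two steps: choose the detection threshold from the allowed Type I error, then scan for the faintest source intensity whose detection probability reaches 1 - beta. The background level and error rates below are hypothetical.

```python
from scipy.stats import poisson

def detection_threshold(background, alpha):
    """Smallest count c such that P(N >= c | background only) <= alpha (Type I error)."""
    c = 0
    while poisson.sf(c - 1, background) > alpha:
        c += 1
    return c

def upper_limit(background, alpha, beta, step=0.01):
    """Smallest source intensity s such that P(N >= c | background + s) >= 1 - beta,
    i.e. the faintest source detected with probability at least 1 - beta (Type II error beta)."""
    c = detection_threshold(background, alpha)
    s = 0.0
    while poisson.sf(c - 1, background + s) < 1.0 - beta:
        s += step
    return c, s

# Hypothetical numbers: expected background of 3 counts, 1% false-positive rate,
# at most 50% false-negative rate.
c, s = upper_limit(background=3.0, alpha=0.01, beta=0.5)
print(f"detection threshold: {c} counts; upper limit on source intensity: {s:.2f} counts")
```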

  4. FTC - THE FAULT-TREE COMPILER (SUN VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.
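
    FTC's solution technique and its rigorous error bound are not reproduced here; the sketch below only shows the quantity being computed, the top-event probability of a small fault tree with independent basic events, using a few of the gate types the program supports (AND, OR, M OF N). The tree structure and event probabilities are hypothetical.

```python
from itertools import combinations

def gate_and(ps):        # all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(ps):         # at least one input fails
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def gate_m_of_n(m, ps):  # at least m of n independent inputs fail (exact, by enumeration)
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total

# Hypothetical tree: TOP = OR( AND(e1, e2), 2-of-3(e3, e4, e5) ), disjoint basic events.
e = {"e1": 1e-3, "e2": 2e-3, "e3": 5e-4, "e4": 5e-4, "e5": 1e-3}
top = gate_or([gate_and([e["e1"], e["e2"]]),
               gate_m_of_n(2, [e["e3"], e["e4"], e["e5"]])])
print(f"top-event probability: {top:.3e}")
```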

  5. FTC - THE FAULT-TREE COMPILER (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1994-01-01

    FTC, the Fault-Tree Compiler program, is a tool used to calculate the top-event probability for a fault-tree. Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault tree definition feature which simplifies the tree-description process and reduces execution time. A rigorous error bound is derived for the solution technique. This bound enables the program to supply an answer precisely (within the limits of double precision floating point arithmetic) at a user-specified number of digits accuracy. The program also facilitates sensitivity analysis with respect to any specified parameter of the fault tree such as a component failure rate or a specific event probability by allowing the user to vary one failure rate or the failure probability over a range of values and plot the results. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. FTC was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The program is written in PASCAL, ANSI compliant C-language, and FORTRAN 77. The TEMPLATE graphics library is required to obtain graphical output. The standard distribution medium for the VMS version of FTC (LAR-14586) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of FTC (LAR-14922) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. FTC was developed in 1989 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. SunOS is a trademark of Sun Microsystems, Inc.

  6. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.

  7. Discriminating quantum-optical beam-splitter channels with number-diagonal signal states: Applications to quantum reading and target detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Ranjith

    2011-09-15

    We consider the problem of distinguishing, with minimum probability of error, two optical beam-splitter channels with unequal complex-valued reflectivities using general quantum probe states entangled over M signal and M' idler mode pairs of which the signal modes are bounced off the beam splitter while the idler modes are retained losslessly. We obtain a lower bound on the output state fidelity valid for any pure input state. We define number-diagonal signal (NDS) states to be input states whose density operator in the signal modes is diagonal in the multimode number basis. For such input states, we derive series formulas for the optimal error probability, the output state fidelity, and the Chernoff-type upper bounds on the error probability. For the special cases of quantum reading of a classical digital memory and target detection (for which the reflectivities are real valued), we show that for a given input signal photon probability distribution, the fidelity is minimized by the NDS states with that distribution and that for a given average total signal energy N_s, the fidelity is minimized by any multimode Fock state with N_s total signal photons. For reading of an ideal memory, it is shown that Fock state inputs minimize the Chernoff bound. For target detection under high-loss conditions, a no-go result showing the lack of appreciable quantum advantage over coherent state transmitters is derived. A comparison of the error probability performance for quantum reading of number state and two-mode squeezed vacuum state (or EPR state) transmitters relative to coherent state transmitters is presented for various values of the reflectances. While the nonclassical states in general perform better than the coherent state, the quantitative performance gains differ depending on the values of the reflectances. The experimental outlook for realizing nonclassical gains from number state transmitters with current technology at moderate to high values of the reflectances is argued to be good.

  8. Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data.

    PubMed

    Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo

    2018-06-25

    The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows extending peeling-based models to include many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties. © 2018 WILEY PERIODICALS, INC.

  9. Target intersection probabilities for parallel-line and continuous-grid types of search

    USGS Publications Warehouse

    McCammon, R.B.

    1977-01-01

    The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
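
    Generalization (1) above is easy to check numerically for an elliptical target and a parallel-line search; the target dimensions and line spacing below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_intersect_parallel_lines(a, b, spacing, trials=500_000):
    """Probability that a randomly placed, randomly oriented ellipse with semi-axes a, b
    is intersected by at least one of a set of parallel lines with the given spacing."""
    theta = rng.uniform(0.0, np.pi, trials)        # target orientation
    offset = rng.uniform(0.0, spacing, trials)     # center distance to the nearest line below
    # Half-extent of the ellipse perpendicular to the lines (support function of the ellipse).
    half_width = np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)
    hit = (offset < half_width) | (spacing - offset < half_width)
    return hit.mean()

a, b, spacing = 1.0, 0.25, 10.0      # elongate target, much smaller than the line spacing
p = p_intersect_parallel_lines(a, b, spacing)
print(f"Monte Carlo P(intersection)      = {p:.4f}")
print(f"greatest dimension / line spacing = {2 * a / spacing:.4f}")
```

    The Monte Carlo value sits somewhat below the greatest-dimension ratio because of averaging over orientation, which is consistent with (1) being only a first approximation.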

  10. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables that deviates from its mean more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. We get as consequences of the inequality: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and try to see its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
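
    For reference, the classical independent-variable inequalities that the exchangeable-sequence results generalize are (for independent X_i with a_i <= X_i <= b_i and S_n = X_1 + ... + X_n):

```latex
% Hoeffding's inequality
P\bigl(S_n - \mathbb{E}S_n \ge t\bigr) \le \exp\!\left(\frac{-2t^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right),
\qquad
% Bernstein's inequality, with |X_i - \mathbb{E}X_i| \le M
P\bigl(S_n - \mathbb{E}S_n \ge t\bigr) \le \exp\!\left(\frac{-t^2/2}{\sum_{i=1}^{n}\operatorname{Var}(X_i) + Mt/3}\right).
```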

  11. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
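
    A condensed sketch of the quantile-based rescaling and the resulting risk ratio, with synthetic data standing in for the observations and the model output: the mapping is fitted between the factual model runs and the observations, applied to both factual and counterfactual runs, and a bootstrap supplies a one-sided lower confidence bound. All numbers and the bootstrap scheme are hypothetical simplifications of the paper's methodology.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_quantile_map(model_ref, obs, n_q=101):
    """Return a function rescaling model output so that the reference model quantiles
    match the observed quantiles (empirical quantile mapping)."""
    q = np.linspace(0.0, 1.0, n_q)
    mq, oq = np.quantile(model_ref, q), np.quantile(obs, q)
    return lambda x: np.interp(x, mq, oq)

def risk_ratio(factual, counterfactual, threshold):
    p1 = (factual > threshold).mean()
    p0 = (counterfactual > threshold).mean()
    return np.inf if p0 == 0 else p1 / p0

# Synthetic stand-ins: observed summer temperatures plus warm-biased, over-dispersed
# model runs for the factual (forced) and counterfactual (unforced) climates.
obs = rng.normal(30.0, 2.0, 60)
factual = rng.normal(32.5, 2.5, 1000)
counterfactual = rng.normal(31.0, 2.5, 1000)

threshold = np.quantile(obs, 0.95)                 # event definition from observations
qmap = fit_quantile_map(factual, obs)
rr_hat = risk_ratio(qmap(factual), qmap(counterfactual), threshold)

# Bootstrap one-sided 95% lower confidence bound on the adjusted risk ratio.
boot = []
for _ in range(1000):
    f = rng.choice(factual, factual.size)
    c = rng.choice(counterfactual, counterfactual.size)
    m = fit_quantile_map(f, obs)
    boot.append(risk_ratio(m(f), m(c), threshold))
print(f"adjusted risk ratio: {rr_hat:.2f}; 95% lower bound: {np.quantile(boot, 0.05):.2f}")
```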

  12. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
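
    A toy sketch of the decision rule described above: expected cost is computed under each member of a finite, hypothetical set of probability distributions for climate sensitivity (standing in for one of the classes of probability measures), and the emissions level minimizing the upper expectation is chosen. The cost model and numbers are hypothetical.

```python
import numpy as np

# Hypothetical discretized climate-sensitivity values (deg C) and a set of candidate
# probability distributions over them (the imprecise-probability class).
sensitivity = np.array([1.5, 3.0, 4.5, 6.0])
prob_class = [np.array([0.40, 0.40, 0.15, 0.05]),
              np.array([0.20, 0.40, 0.30, 0.10]),
              np.array([0.10, 0.30, 0.40, 0.20])]

emissions_levels = np.linspace(0.0, 1.0, 11)   # fraction of business-as-usual emissions

def total_cost(e, s):
    """Hypothetical cost model: abatement cost falls with e, damages rise with e and s."""
    return 2.0 * (1.0 - e) ** 2 + s * e ** 2

# Minimum upper expected cost: minimize the worst-case expectation over the class.
upper_expected = []
for e in emissions_levels:
    costs = total_cost(e, sensitivity)
    upper_expected.append(max(float(p @ costs) for p in prob_class))

best = emissions_levels[int(np.argmin(upper_expected))]
print(f"emissions level with minimum upper expected cost: {best:.1f}")
```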

  13. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  14. Optimal Universal Uncertainty Relations

    PubMed Central

    Li, Tao; Xiao, Yunlong; Ma, Teng; Fei, Shao-Ming; Jing, Naihuan; Li-Jost, Xianqing; Wang, Zhi-Xi

    2016-01-01

    We study universal uncertainty relations and present a method called joint probability distribution diagram to improve the majorization bounds constructed independently in [Phys. Rev. Lett. 111, 230401 (2013)] and [J. Phys. A. 46, 272002 (2013)]. The results give rise to state independent uncertainty relations satisfied by any nonnegative Schur-concave functions. On the other hand, a remarkable recent result of entropic uncertainty relation is the direct-sum majorization relation. In this paper, we illustrate our bounds by showing how they provide a complement to that in [Phys. Rev. A. 89, 052115 (2014)]. PMID:27775010

  15. Precision measurement of the electromagnetic dipole strengths in Be11

    NASA Astrophysics Data System (ADS)

    Kwan, E.; Wu, C. Y.; Summers, N. C.; Hackman, G.; Drake, T. E.; Andreoiu, C.; Ashley, R.; Ball, G. C.; Bender, P. C.; Boston, A. J.; Boston, H. C.; Chester, A.; Close, A.; Cline, D.; Cross, D. S.; Dunlop, R.; Finlay, A.; Garnsworthy, A. B.; Hayes, A. B.; Laffoley, A. T.; Nano, T.; Navrátil, P.; Pearson, C. J.; Pore, J.; Quaglioni, S.; Svensson, C. E.; Starosta, K.; Thompson, I. J.; Voss, P.; Williams, S. J.; Wang, Z. M.

    2014-05-01

    The electromagnetic dipole strength in Be11 between the bound states has been measured using low-energy projectile Coulomb excitation at bombarding energies of 1.73 and 2.09 MeV/nucleon on a Pt196 target. An electric dipole transition probability B(E1;1/2-→1/2+) = 0.102(2) e²fm² was determined using the semi-classical code Gosia, and a value of 0.098(4) e²fm² was determined using the Extended Continuum Discretized Coupled Channels method with the quantum mechanical code FRESCO. These extracted B(E1) values are consistent with the average value determined by a model-dependent analysis of intermediate energy Coulomb excitation measurements and are approximately 14% lower than that determined by a lifetime measurement. The much-improved precisions of 2% and 4% in the measured B(E1) values between the bound states, deduced using Gosia and the Extended Continuum Discretized Coupled Channels method, respectively, compared to the previous accuracy of ~10%, will help improve our understanding of the realistic inter-nucleon interactions.

  16. Quantum particle displacement by a moving localized potential trap

    NASA Astrophysics Data System (ADS)

    Granot, E.; Marchewka, A.

    2009-04-01

    We describe the dynamics of a bound state of an attractive δ-well under displacement of the potential. Exact analytical results are presented for the suddenly moved potential. Since this is a quantum system, only a fraction of the initially confined wave function remains confined to the moving potential. However, it is shown that besides the probability to remain confined to the moving well and the probability to remain in the initial position, there is also a certain probability for the particle to move at double speed. A quasi-classical interpretation for this effect is suggested. The temporal and spectral dynamics of each of these scenarios are investigated.
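
    The sudden-displacement piece of this picture admits a compact back-of-the-envelope check. The sketch below evaluates the textbook sudden-approximation survival probability |⟨ψ0(x-a)|ψ0(x)⟩|² for the δ-well bound state ψ0(x) = √κ e^(-κ|x|); it is a hedged illustration of that single ingredient, not the paper's full time-dependent treatment, and the κa values are arbitrary.

      # Sketch: probability that the particle stays bound to an attractive
      # delta-well that is displaced suddenly by a distance a (sudden
      # approximation). With psi_0(x) = sqrt(kappa)*exp(-kappa*|x|), the
      # overlap integral is (1 + kappa*a)*exp(-kappa*a).
      import numpy as np

      def survival_probability(kappa, a):
          """Return |<psi_0(x - a)|psi_0(x)>|^2 for the delta-well bound state."""
          overlap = (1.0 + kappa * a) * np.exp(-kappa * a)
          return overlap ** 2

      for ka in (0.1, 0.5, 1.0, 2.0):
          print(f"kappa*a = {ka:3.1f} -> P(stay bound) = {survival_probability(1.0, ka):.3f}")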

  17. SIERRA ANCHA WILDERNESS, ARIZONA.

    USGS Publications Warehouse

    Wrucke, Chester T.; Light, Thomas D.

    1984-01-01

    Mineral surveys show that the Sierra Ancha Wilderness in Arizona has demonstrated resources of uranium, asbestos, and iron; probable and substantiated resource potential for uranium, asbestos, and iron; and a probable resource potential for fluorspar. Uranium resources occur in vein and strata-bound deposits in siltstone that underlies much of the wilderness. Deposits of long-staple chrysotile asbestos are likely in parts of the wilderness adjacent to known areas of asbestos production. Magnetite deposits in the wilderness form a small iron resource. No fossil fuel resources were identified in this study.

  18. The calculation of average error probability in a digital fibre optical communication system

    NASA Astrophysics Data System (ADS)

    Rugemalira, R. A. M.

    1980-03-01

    This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity
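
    As a rough illustration of how a Chernoff bound compares with an exact error probability, the sketch below treats the simplest case of binary detection in purely additive Gaussian noise; the shot noise, intersymbol interference and equalization considered in the paper are not modelled, and the SNR values are arbitrary placeholders.

      # Sketch: exact Gaussian-tail error probability Q(sqrt(SNR)) versus the
      # Chernoff bound Q(x) <= exp(-x^2/2), for a binary decision in additive
      # Gaussian noise only.
      import numpy as np
      from scipy.stats import norm

      def exact_error_probability(snr):
          return norm.sf(np.sqrt(snr))          # Q(sqrt(snr))

      def chernoff_bound(snr):
          return np.exp(-snr / 2.0)             # exp(-x^2/2) with x = sqrt(snr)

      for snr_db in (6, 10, 14):
          snr = 10 ** (snr_db / 10)
          print(f"{snr_db} dB: exact {exact_error_probability(snr):.2e}, "
                f"Chernoff bound {chernoff_bound(snr):.2e}")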

  19. Exact one-sided confidence limits for the difference between two correlated proportions.

    PubMed

    Lloyd, Chris J; Moldovan, Max V

    2007-08-15

    We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs using well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.

  20. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...

  1. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...

  2. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...

  3. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...

  4. 10 CFR 63.114 - Requirements for performance assessment.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Technical Criteria Postclosure Performance Assessment § 63..., hydrology, and geochemistry (including disruptive processes and events) of the Yucca Mountain site, and the... disposal, and provide for the technical basis for parameter ranges, probability distributions, or bounding...

  5. Effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model output

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.

    2012-04-01

    This study analyses the effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model's discharge estimates. Prediction uncertainty bounds are derived using the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The model analysed is a conceptual watershed model operating at a monthly time step. The model divides the catchment into five elevation zones, where the fifth zone corresponds to the catchment glaciers. Precipitation amounts at each elevation zone i are estimated as the product between observed precipitation (at a single station within the catchment) and a precipitation factor FPi. Thus, these factors provide a simplified representation of the spatial variation of precipitation, specifically the shape of the functional relationship between precipitation and height. In the absence of information about appropriate values of the precipitation factors FPi, these are estimated through standard calibration procedures. The catchment case study is the Aconcagua River at Chacabuquito, located in the Andean region of Central Chile. Monte Carlo samples of the model output are obtained by randomly varying the model parameters within their feasible ranges. In the first experiment, the precipitation factors FPi are considered unknown and thus included in the sampling process. The total number of unknown parameters in this case is 16. In the second experiment, precipitation factors FPi are estimated a priori, by means of a long-term water balance between observed discharge at the catchment outlet, evapotranspiration estimates and observed precipitation. In this case, the number of unknown parameters reduces to 11. The feasible ranges assigned to the precipitation factors in the first experiment are slightly wider than the range of fixed precipitation factors used in the second experiment. The mean squared error of the Box-Cox transformed discharge during the calibration period is used for the evaluation of the goodness of fit of the model realizations. GLUE-type uncertainty bounds during the verification period are derived at the probability levels p=85%, 90% and 95%. Results indicate that, as expected, prediction uncertainty bounds indeed change if precipitation factors FPi are estimated a priori rather than being allowed to vary, but that this change is not dramatic. Firstly, the width of the uncertainty bounds at the same probability level only slightly reduces compared to the case where precipitation factors are allowed to vary. Secondly, the ability to enclose the observations improves, but the decrease in the fraction of outliers is not significant. These results are probably due to the narrow range of variability allowed to the precipitation factors FPi in the first experiment, which implies that although they indicate the shape of the functional relationship between precipitation and height, the magnitude of the precipitation estimates was mainly determined by the magnitude of the observations at the available raingauge. It is probable that a situation with no prior information on the realistic ranges of variation of the precipitation factors, or the inclusion of precipitation data uncertainty, would have led to a different conclusion. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
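
    For readers unfamiliar with GLUE, the sketch below shows its bare mechanics: sample parameter sets from feasible ranges, keep "behavioural" sets according to a goodness-of-fit threshold, and form likelihood-weighted prediction bounds at a chosen probability level. The toy linear model, the likelihood measure and all numbers are placeholders and are not the monthly snowmelt model or Box-Cox criterion of this study.

      # Minimal GLUE-style sketch: Monte Carlo parameter sampling, behavioural
      # selection, and likelihood-weighted uncertainty bounds.
      import numpy as np

      rng = np.random.default_rng(42)

      def toy_model(params, forcing):
          a, b = params
          return a * forcing + b                     # stand-in for the watershed model

      forcing = rng.gamma(2.0, 10.0, size=120)       # e.g. monthly precipitation
      observed = toy_model((0.8, 5.0), forcing) + rng.normal(0.0, 3.0, size=120)

      n_samples = 5000
      params = np.column_stack([rng.uniform(0.2, 1.5, n_samples),     # feasible ranges
                                rng.uniform(0.0, 15.0, n_samples)])
      sims = np.array([toy_model(p, forcing) for p in params])

      mse = ((sims - observed) ** 2).mean(axis=1)
      behavioural = mse < np.quantile(mse, 0.1)      # keep the best 10% as behavioural
      weights = 1.0 / mse[behavioural]
      weights /= weights.sum()

      def weighted_quantile(values, w, q):
          order = np.argsort(values)
          return np.interp(q, np.cumsum(w[order]), values[order])

      level = 0.90
      lower = np.array([weighted_quantile(sims[behavioural, t], weights, (1 - level) / 2)
                        for t in range(sims.shape[1])])
      upper = np.array([weighted_quantile(sims[behavioural, t], weights, (1 + level) / 2)
                        for t in range(sims.shape[1])])
      coverage = np.mean((observed >= lower) & (observed <= upper))
      print(f"observations enclosed by the {level:.0%} bounds: {coverage:.2f}")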

  6. Where is the nitrogen on Mars?

    NASA Astrophysics Data System (ADS)

    Mancinelli, Rocco L.; Banin, Amos

    2003-07-01

    Nitrogen is an essential element for life. Specifically, fixed nitrogen (i.e. NH3, NH4+, NOx or N that is chemically bound to either inorganic or organic molecules and can be released by hydrolysis to form NH3 or NH4+) is useful to living organisms. Nitrogen on present-day Mars has been analysed only in the atmosphere. The inventory is a small fraction of the amount of nitrogen presumed to have been received by the planet during its accretion. Where is the missing nitrogen? Answering this question is crucial for understanding the probability of the origin and evolution of life on Mars, and for its future astrobiological exploration. The two main processes that could have removed nitrogen from the atmosphere include: (1) non-thermal escape of N atoms to space and (2) burial within the regolith as nitrates and ammonium salts. Nitrate would probably be stable in the highly oxidized surface soil of Mars and could have served as an NO3- sink. Such accumulations are observed in certain desert environments on Earth. Some NH4+ nitrogen may also be fixed and stabilized in the soil by inclusion as a structural cation in the crystal lattices of certain phyllosilicates replacing K+. Analysis of the Martian soil for traces of NO3- and NH4+ during future missions will provide important information regarding the nitrogen abundance on Mars. We hypothesize that Mars soil, as typical of extremely dry desert soils on Earth, is likely to contain at least some of the missing nitrogen as nitrate salts and some fixed ammonium bound to aluminosilicate minerals.

  7. Trapping of quantum particles and light beams by switchable potential wells

    NASA Astrophysics Data System (ADS)

    Sonkin, Eduard; Malomed, Boris A.; Granot, Er'El; Marchewka, Avi

    2010-09-01

    We consider basic dynamical effects in settings based on a pair of local potential traps that may be effectively switched on and off, or suddenly displaced, by means of appropriate control mechanisms, such as scanning tunneling microscopy or photo-switchable quantum dots. The same models, based on the linear Schrödinger equation with time-dependent trapping potentials, apply to the description of optical planar systems designed for the switching of trapped light beams. The analysis is carried out in the analytical form, using exact solutions of the Schrödinger equation. The first dynamical problem considered in this work is the retention of a particle released from a trap which was suddenly turned off, while another local trap was switched on at a distance—immediately or with a delay. In this case, we demonstrate that the maximum of the retention rate is achieved at a specific finite value of the strength of the new trap, and at a finite value of the temporal delay, depending on the distance between the two traps. Another problem is retrapping of the bound particle when the addition of the second trap transforms the single-well setting into a double-well potential (DWP). In that case, we find probabilities for the retrapping into the ground or first excited state of the DWP. We also analyze effects entailed by the application of a kick to a bound particle, the most interesting one being a kick-induced transition between the DWP’s ground and excited states. In the latter case, the largest transition probability is achieved at a particular strength of the kick.

  8. "Carbon Credits" for Resource-Bounded Computations Using Amortised Analysis

    NASA Astrophysics Data System (ADS)

    Jost, Steffen; Loidl, Hans-Wolfgang; Hammond, Kevin; Scaife, Norman; Hofmann, Martin

    Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems. In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we obtain less tight bounds, this is due to the use of software floating-point libraries.

  9. Adjusting survival estimates for premature transmitter failure: A case study from the Sacramento-San Joaquin Delta

    USGS Publications Warehouse

    Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.

    2013-01-01

    In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
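
    The core adjustment is compact: the apparent survival from a mark-recapture model is (fish survival) x (probability the tag is still transmitting at detection), so dividing by an estimate of the latter recovers fish survival. The sketch below illustrates this with an invented tag-life sample and travel-time sample; it is not the VAMP data set or the paper's mark-recapture model.

      # Sketch: tag-failure adjustment S_fish = S_apparent / P(tag alive),
      # with P(tag alive) obtained by averaging an empirical tag-life survival
      # function over the observed travel-time distribution.
      import numpy as np

      def tag_alive_probability(travel_times_d, tag_life_d):
          tag_life = np.asarray(tag_life_d, dtype=float)
          return float(np.mean([(tag_life > t).mean() for t in travel_times_d]))

      tag_life_days = [18, 22, 25, 27, 30, 33, 35, 38, 41, 45]   # days until tag failure
      travel_days = [5, 8, 12, 20, 26, 31]                       # days to reach detection site
      s_apparent = 0.15                                          # joint fish-and-tag survival

      p_tag = tag_alive_probability(travel_days, tag_life_days)
      print(f"P(tag alive) = {p_tag:.2f}; adjusted fish survival = {s_apparent / p_tag:.3f}")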

  10. Calculation of the number of Monte Carlo histories for a planetary protection probability of impact estimation

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack

    2016-07-01

    Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean for a Poisson distribution. F. Garwood (1936, "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield probability α as a function of the mean μ and an actual value n), despite the notation used. This formula for the upper and lower limits of the mean μ with the two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution. (A smaller actual P_I is no problem.) One advantage of this method is that this function is available in EXCEL. Note that care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. Then the estimate of the number of histories required is this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert (1988), Purdue University Technical Report #88-7 or Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
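
    The calculation above is easy to reproduce with the standard Garwood one-sided upper limit for a Poisson mean; the sketch below uses scipy's chi2.ppf in place of EXCEL's CHIINV (the two use opposite tail conventions, and the textbook form carries a factor of 1/2, so conventions should be checked against the formula quoted above). The allowed impact probability, level of confidence and guessed number of hits are illustrative values, not requirements from any mission.

      # Sketch: number of Monte Carlo histories needed so that, with the stated
      # level of confidence, the true impact probability does not exceed the
      # allowed value, using the Garwood (chi-squared) upper limit on a Poisson mean.
      from math import ceil
      from scipy.stats import chi2

      def poisson_upper_limit(n_hits, confidence):
          """One-sided Garwood upper limit: chi2.ppf(confidence, 2*(n_hits+1)) / 2."""
          return 0.5 * chi2.ppf(confidence, 2 * (n_hits + 1))

      p_impact_max = 1e-4     # assumed maximum allowed probability of impact
      confidence = 0.95       # assumed level of confidence (LOC)
      n_hits_guess = 10       # guessed number of hits in the MC run

      mu_upper = poisson_upper_limit(n_hits_guess, confidence)
      n_histories = ceil(mu_upper / p_impact_max)
      print(f"upper limit on the mean number of hits: {mu_upper:.2f}")
      print(f"required number of MC histories: {n_histories}")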

  11. Conservative Analytical Collision Probabilities for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  12. Conservative Analytical Collision Probability for Design of Orbital Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
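
    One elementary way to construct a conservative analytical bound of this general kind is to bound the probability integral over the combined hard-body circle by the circle's area times the maximum of the relative-position density on that circle. The sketch below does exactly that for an assumed Gaussian encounter-plane error; it illustrates the notion of a conservative upper bound only and is not the specific approximation proposed in these reports.

      # Sketch: conservative upper bound on collision probability as
      # (hard-body area) x (maximum Gaussian density over the hard-body circle).
      import numpy as np

      def collision_probability_upper_bound(miss, sigma_x, sigma_y, radius):
          """miss: miss distance along the x-axis of the encounter plane (m);
          sigma_x, sigma_y: std devs of the relative position error (m);
          radius: combined hard-body radius (m)."""
          d_min = max(miss - radius, 0.0)                 # closest point of circle to the mean
          peak = np.exp(-0.5 * (d_min / sigma_x) ** 2) / (2.0 * np.pi * sigma_x * sigma_y)
          return min(1.0, np.pi * radius ** 2 * peak)

      print(collision_probability_upper_bound(miss=200.0, sigma_x=100.0,
                                              sigma_y=50.0, radius=20.0))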

  13. Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.

  14. Concepts and Bounded Rationality: An Application of Niestegge's Approach to Conditional Quantum Probabilities

    NASA Astrophysics Data System (ADS)

    Blutner, Reinhard

    2009-03-01

    Recently, Gerd Niestegge developed a new approach to quantum mechanics via conditional probabilities, developing the well-known proposal to consider the Lüders-von Neumann measurement as a non-classical extension of probability conditionalization. I will apply his powerful and rigorous approach to the treatment of concepts using a geometrical model of meaning. In this model, instances are treated as vectors of a Hilbert space H. In the present approach there are at least two possibilities to form categories. The first possibility sees categories as a mixture of their instances (described by a density matrix). In the simplest case we get the classical probability theory including the Bayesian formula. The second possibility sees categories formed by a distinctive prototype which is the superposition of the (weighted) instances. The construction of prototypes can be seen as transferring a mixed quantum state into a pure quantum state, freezing the probabilistic characteristics of the superposed instances into the structure of the formed prototype. Closely related to the idea of forming concepts by prototypes is the existence of interference effects. Such interference effects are typically found in macroscopic quantum systems and I will discuss them in connection with several puzzles of bounded rationality. The present approach nicely generalizes earlier proposals made by authors such as Diederik Aerts, Andrei Khrennikov, Ricardo Franco, and Jerome Busemeyer. Concluding, I will suggest that an active dialogue between cognitive approaches to logic and semantics and the modern approach of quantum information science is mandatory.

  15. Fishnet model for failure probability tail of nacre-like imbricated lamellar materials

    NASA Astrophysics Data System (ADS)

    Luo, Wen; Bažant, Zdeněk P.

    2017-12-01

    Nacre, the iridescent material of the shells of pearl oysters and abalone, consists mostly of aragonite (a form of CaCO3), a brittle constituent of relatively low strength (≈10 MPa). Yet it has astonishing mean tensile strength (≈150 MPa) and fracture energy (≈350 to 1,240 J/m2). The reasons have recently become well understood: (i) the nanoscale thickness (≈300 nm) of nacre's building blocks, the aragonite lamellae (or platelets), and (ii) the imbricated, or staggered, arrangement of these lamellae, bound by biopolymer layers only ≈25 nm thick, occupying <5% of volume. These properties inspire manmade biomimetic materials. For engineering applications, however, the failure probability of ≤10^-6 is generally required. To guarantee it, the type of probability density function (pdf) of strength, including its tail, must be determined. This objective, not pursued previously, is hardly achievable by experiments alone, since >10^8 tests of specimens would be needed. Here we outline a statistical model of strength that resembles a fishnet pulled diagonally, captures the tail of pdf of strength and, importantly, allows analytical safety assessments of nacreous materials. The analysis shows that, in terms of safety, the imbricated lamellar structure provides a major additional advantage—˜10% strength increase at tail failure probability 10^-6 and a 1 to 2 orders of magnitude tail probability decrease at fixed stress. Another advantage is that a high scatter of microstructure properties diminishes the strength difference between the mean and the probability tail, compared with the weakest link model. These advantages of nacre-like materials are here justified analytically and supported by millions of Monte Carlo simulations.

  16. An evolutionary model of bounded rationality and intelligence.

    PubMed

    Brennan, Thomas J; Lo, Andrew W

    2012-01-01

    Most economic theories are based on the premise that individuals maximize their own self-interest and correctly incorporate the structure of their environment into all decisions, thanks to human intelligence. The influence of this paradigm goes far beyond academia-it underlies current macroeconomic and monetary policies, and is also an integral part of existing financial regulations. However, there is mounting empirical and experimental evidence, including the recent financial crisis, suggesting that humans do not always behave rationally, but often make seemingly random and suboptimal decisions. Here we propose to reconcile these contradictory perspectives by developing a simple binary-choice model that takes evolutionary consequences of decisions into account as well as the role of intelligence, which we define as any ability of an individual to increase its genetic success. If no intelligence is present, our model produces results consistent with prior literature and shows that risks that are independent across individuals in a generation generally lead to risk-neutral behaviors, but that risks that are correlated across a generation can lead to behaviors such as risk aversion, loss aversion, probability matching, and randomization. When intelligence is present the nature of risk also matters, and we show that even when risks are independent, either risk-neutral behavior or probability matching will occur depending upon the cost of intelligence in terms of reproductive success. In the case of correlated risks, we derive an implicit formula that shows how intelligence can emerge via selection, why it may be bounded, and how such bounds typically imply the coexistence of multiple levels and types of intelligence as a reflection of varying environmental conditions. Rational economic behavior in which individuals maximize their own self interest is only one of many possible types of behavior that arise from natural selection. The key to understanding which types of behavior are more likely to survive is how behavior affects reproductive success in a given population's environment. From this perspective, intelligence is naturally defined as behavior that increases the probability of reproductive success, and bounds on rationality are determined by physiological and environmental constraints.

  17. An Evolutionary Model of Bounded Rationality and Intelligence

    PubMed Central

    Brennan, Thomas J.; Lo, Andrew W.

    2012-01-01

    Background Most economic theories are based on the premise that individuals maximize their own self-interest and correctly incorporate the structure of their environment into all decisions, thanks to human intelligence. The influence of this paradigm goes far beyond academia–it underlies current macroeconomic and monetary policies, and is also an integral part of existing financial regulations. However, there is mounting empirical and experimental evidence, including the recent financial crisis, suggesting that humans do not always behave rationally, but often make seemingly random and suboptimal decisions. Methods and Findings Here we propose to reconcile these contradictory perspectives by developing a simple binary-choice model that takes evolutionary consequences of decisions into account as well as the role of intelligence, which we define as any ability of an individual to increase its genetic success. If no intelligence is present, our model produces results consistent with prior literature and shows that risks that are independent across individuals in a generation generally lead to risk-neutral behaviors, but that risks that are correlated across a generation can lead to behaviors such as risk aversion, loss aversion, probability matching, and randomization. When intelligence is present the nature of risk also matters, and we show that even when risks are independent, either risk-neutral behavior or probability matching will occur depending upon the cost of intelligence in terms of reproductive success. In the case of correlated risks, we derive an implicit formula that shows how intelligence can emerge via selection, why it may be bounded, and how such bounds typically imply the coexistence of multiple levels and types of intelligence as a reflection of varying environmental conditions. Conclusions Rational economic behavior in which individuals maximize their own self interest is only one of many possible types of behavior that arise from natural selection. The key to understanding which types of behavior are more likely to survive is how behavior affects reproductive success in a given population’s environment. From this perspective, intelligence is naturally defined as behavior that increases the probability of reproductive success, and bounds on rationality are determined by physiological and environmental constraints. PMID:23185602
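
    The flavour of the correlated-risk result can be reproduced with a short simulation: when a single environmental state applies to a whole generation, the long-run log growth of a lineage that randomizes its choice is maximized when the choice probability matches the state probability, whereas a deterministic lineage is eliminated the first time the other state occurs. The two-state setup and numbers below are a toy illustration under those assumptions, not the authors' model.

      # Toy sketch: probability matching maximizes long-run log growth under
      # generation-wide (correlated) risk.
      import numpy as np

      rng = np.random.default_rng(1)
      p_state_a = 0.7            # probability that the generation-wide state is "a"
      generations = 2000

      def mean_log_growth(prob_choose_a):
          """Only offspring whose choice matches the generation's state survive,
          so the surviving fraction is prob_choose_a or 1 - prob_choose_a."""
          states_a = rng.random(generations) < p_state_a
          fractions = np.where(states_a, prob_choose_a, 1.0 - prob_choose_a)
          return np.log(fractions).mean()

      # growth peaks near 0.7 (probability matching); a pure strategy (1.0)
      # would eventually hit a zero surviving fraction and go extinct
      for f in (0.5, 0.6, 0.7, 0.8, 0.9):
          print(f"choose 'a' with prob {f:.1f}: mean log growth {mean_log_growth(f):+.4f}")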

  18. Estimating rate constants from single ion channel currents when the initial distribution is known.

    PubMed

    The, Yu-Kai; Fernandez, Jacqueline; Popa, M Oana; Lerche, Holger; Timmer, Jens

    2005-06-01

    Single ion channel currents can be analysed by hidden or aggregated Markov models. A classical result from Fredkin et al. (Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer, vol I, pp 269-289, 1985) states that the maximum number of identifiable parameters is bounded by 2n(o)n(c), where n(o) and n(c) denote the number of open and closed states, respectively. We show that this bound can be overcome when the probabilities of the initial distribution are known and the data consist of several sweeps.

  19. Heterogeneous losses of externally generated I atoms for OIL

    NASA Astrophysics Data System (ADS)

    Torbin, A. P.; Mikheyev, P. A.; Ufimtsev, N. I.; Voronov, A. I.; Azyazov, V. N.

    2012-01-01

    Usage of an external iodine atom generator can improve the energy efficiency of the oxygen-iodine laser (OIL) and expand its range of operation parameters. However, a noticeable fraction of iodine atoms may recombine or undergo chemical bonding during transportation from the generator to the injection point. Experimental results reported in this paper showed that uncoated aluminum surfaces readily bound iodine atoms, while nickel, stainless steel, Teflon or Plexiglas did not. Estimations based on the experimental results showed that the upper bound on the probability of surface iodine atom recombination for Teflon, Plexiglas, nickel or stainless steel is γrec ≤ 10^-5.

  20. Class-specific Error Bounds for Ensemble Classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prenger, R; Lemmond, T; Varshney, K

    2009-10-06

    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
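
    The bound referred to in the first sentence is Breiman's PE* ≤ ρ̄(1 - s²)/s², with s the ensemble strength and ρ̄ the mean correlation of the base classifiers' raw margins. The sketch below estimates these quantities for a binary task from a matrix of per-classifier correctness indicators; it reproduces only the classical cost-insensitive bound, not the class-specific ROC bound developed in this work, and the toy data are synthetic.

      # Sketch: Breiman-style upper bound on ensemble generalization error from
      # base-classifier strength and correlation (binary classification).
      import numpy as np

      def breiman_bound(correct):
          """correct: bool array (n_samples, n_classifiers); entry [i, j] is True
          if base classifier j labels sample i correctly."""
          rmg = 2.0 * correct.astype(float) - 1.0        # raw margins: +1 correct, -1 wrong
          s = rmg.mean(axis=1).mean()                    # strength = mean ensemble margin
          corr = np.corrcoef(rmg, rowvar=False)          # classifier-classifier correlations
          m = corr.shape[0]
          rho_bar = (corr.sum() - m) / (m * (m - 1))     # mean off-diagonal correlation
          return rho_bar * (1.0 - s ** 2) / s ** 2

      # toy usage: 25 base classifiers (~75% accurate) whose errors are correlated
      # through a shared per-sample difficulty term
      rng = np.random.default_rng(0)
      n, m = 2000, 25
      difficulty = rng.normal(size=(n, 1))
      correct = (difficulty + rng.normal(size=(n, m))) < 1.0
      print(f"estimated upper bound on generalization error: {breiman_bound(correct):.3f}")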

  1. The probability of quantal secretion near a single calcium channel of an active zone.

    PubMed Central

    Bennett, M R; Farnell, L; Gibson, W G

    2000-01-01

    A Monte Carlo analysis has been made of calcium dynamics and quantal secretion at microdomains in which the calcium reaches very high concentrations over distances of <50 nm from a channel and for which calcium dynamics are dominated by diffusion. The kinetics of calcium ions in microdomains due to either the spontaneous or evoked opening of a calcium channel, both of which are stochastic events, are described in the presence of endogenous fixed and mobile buffers. Fluctuations in the number of calcium ions within 50 nm of a channel are considerable, with the standard deviation about half the mean. Within 10 nm of a channel these numbers of ions can give rise to calcium concentrations of the order of 100 microM. The temporal changes in free calcium and calcium bound to different affinity indicators in the volume of an entire varicosity or bouton following the opening of a single channel are also determined. A Monte Carlo analysis is also presented of how the dynamics of calcium ions at active zones, after the arrival of an action potential and the stochastic opening of a calcium channel, determine the probability of exocytosis from docked vesicles near the channel. The synaptic vesicles in active zones are found docked in a complex with their calcium-sensor associated proteins and a voltage-sensitive calcium channel, forming a secretory unit. The probability of quantal secretion from an isolated secretory unit has been determined for different distances of an open calcium channel from the calcium sensor within an individual unit: a threefold decrease in the probability of secretion of a quantum occurs with a doubling of the distance from 25 to 50 nm. The Monte Carlo analysis also shows that the probability of secretion of a quantum is most sensitive to the size of the single-channel current compared with its sensitivity to either the binding rates of the sites on the calcium-sensor protein or to the number of these sites that must bind a calcium ion to trigger exocytosis of a vesicle. PMID:10777721

  2. Effect of H2 binding on the nonadiabatic transition probability between singlet and triplet states of the [NiFe]-hydrogenase active site.

    PubMed

    Kaliakin, Danil S; Zaari, Ryan R; Varganov, Sergey A

    2015-02-12

    We investigate the effect of H2 binding on the spin-forbidden nonadiabatic transition probability between the lowest energy singlet and triplet electronic states of a [NiFe]-hydrogenase active site model, using velocity-averaged Landau-Zener theory. Density functional and multireference perturbation theories were used to provide parameters for the Landau-Zener calculations. It was found that variation of the torsion angle between the terminal thiolate ligands around the Ni center induces an intersystem crossing between the lowest energy singlet and triplet electronic states in the bare active site and in the active site with bound H2. Potential energy curves between the singlet and triplet minima along the torsion angle and H2 binding energies to the two spin states were calculated. Upon H2 binding to the active site, there is a decrease in the torsion angle at the minimum energy crossing point between the singlet and triplet states. The probability of nonadiabatic transitions at temperatures between 270 and 370 K ranges from 35% to 32% for the active site with bound H2 and from 42% to 38% for the bare active site, thus indicating the importance of spin-forbidden nonadiabatic pathways for H2 binding on the [NiFe]-hydrogenase active site.
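
    For orientation, the single-passage Landau-Zener hopping probability underlying this kind of velocity-averaged estimate is P = 1 - exp(-2π H₁₂² / (ħ v |ΔF|)), with H₁₂ the spin-orbit coupling at the crossing, v the velocity along the crossing coordinate, and ΔF the difference of the diabatic slopes. The sketch below only evaluates this expression with placeholder values; the coupling, slope difference and velocity are not taken from the paper, and the paper's thermal averaging over velocities is not included.

      # Sketch: single-passage Landau-Zener probability for a spin-forbidden crossing.
      import numpy as np

      HBAR = 1.054571817e-34          # J*s

      def landau_zener_probability(h12, delta_f, velocity):
          """h12: coupling at the crossing (J); delta_f: |slope difference| (J/m);
          velocity: nuclear velocity through the crossing (m/s)."""
          exponent = 2.0 * np.pi * h12 ** 2 / (HBAR * velocity * delta_f)
          return 1.0 - np.exp(-exponent)

      h12 = 50 * 1.986e-23            # placeholder: ~50 cm^-1 coupling, in joules
      print(landau_zener_probability(h12, delta_f=5e-9, velocity=500.0))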

  3. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  4. Statistical methods for identifying and bounding a UXO target area or minefield

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.

    2003-09-18

    The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
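
    The basic geometry behind such transect designs is simple to state: a circular target is traversed by one of a set of parallel transects whenever its centre lies within half the target diameter plus half the swath width of a transect line, so the traversal probability is (diameter + swath)/spacing, capped at one. The sketch below combines that with an assumed per-traversal sensor detection probability; it is a simplified illustration, not the algorithm implemented in Visual Sample Plan.

      # Sketch: probability that a parallel-transect survey detects a circular target.
      def detection_probability(target_diameter, swath_width, transect_spacing,
                                per_traversal_detection=1.0):
          p_traverse = min(1.0, (target_diameter + swath_width) / transect_spacing)
          return p_traverse * per_traversal_detection

      print(detection_probability(target_diameter=50.0, swath_width=4.0,
                                  transect_spacing=100.0, per_traversal_detection=0.9))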

  5. Ionospheric Anomalies on the day of the Devastating Earthquakes during 2000-2012

    NASA Astrophysics Data System (ADS)

    Su, Fanfan; Zhou, Yiyan; Zhu, Fuying

    2013-04-01

    The study of the ionospheric abnormal changes during the large earthquakes has attracted much attention for many years. Many papers have reported the deviations of Total Electron Content (TEC) around the epicenter. The statistical analysis concludes that the anomalous behavior of TEC is related to the earthquakes with high probability [1]. But the special cases have different features [2][3]. In this study, we carry out a new statistical analysis to investigate the nature of the ionospheric anomalies during the devastating earthquakes. To demonstrate the abnormal changes of the ionospheric TEC, we have examined the TEC database from the Global Ionosphere Map (GIM). The GIM (ftp://cddisa.gsfc.nasa.gov/pub/gps/products/ionex) includes about 200 worldwide ground-based GPS receivers. The TEC data with resolution of 5° longitude and 2.5° latitude are routinely published in a 2-h time interval. The information of earthquakes is obtained from the USGS (http://earthquake.usgs.gov/earthquakes/eqarchives/epic/). To avoid the interference of the magnetic storm, the days with Dst≤-20 nT are excluded. Finally, a total of 13 M≥8.0 earthquakes in the global area during 2000-2012 are selected. The 27 days before the main shock are treated as the background days. Here, the 27-day TEC median (Me) and the standard deviation (σ) are used to detect the variation of TEC. We set the upper bound BU = Me + 3*σ, and the lower bound BL = Me - 3*σ. Therefore the probability of a new TEC in the interval (BL, BU) is approximately 99.7%. If TEC varies between BU and BL, the deviation (DTEC) equals zero. Otherwise, the deviations between TEC and the bounds are calculated as DTEC = BU/BL - TEC. From the deviations, the positive and negative abnormal changes of TEC can be evaluated. We investigate temporal and spatial signatures of the ionospheric anomalies on the day of the devastating earthquakes (M≥8.0). The results show that the occurrence rates of positive anomaly and negative anomaly are almost equal. The most significant anomaly on the day may occur at the time very close to the main shock, but sometimes it is not the case. The positions of the maximal deviations always deviate from the epicenter. The direction may be southeast, southwest, northeast or northwest with almost equal probability. The anomalies may move to the epicenter, deviate to any direction, or stay at the same position and gradually fade out. There is no significant feature, such as occurrence time, position, or motion, and so on, which can indicate the source of the anomalies. References: [1] Le, H., J. Y. Liu, et al. (2011). "A statistical analysis of ionospheric anomalies before 736 M6.0+ earthquakes during 2002-2010." J. Geophys. Res. 116. [2] Liu, J. Y., Y. I. Chen, et al. (2009). "Seismoionospheric GPS total electron content anomalies observed before the 12 May 2008 Mw7.9 Wenchuan earthquake." J. Geophys. Res. 114. [3] Rolland, L. M., P. Lognonne, et al. (2011). "Detection and modeling of Rayleigh wave induced patterns in the ionosphere." J. Geophys. Res. 116.
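
    The anomaly criterion described above translates directly into code. The sketch below computes the 27-day median and standard deviation at a grid cell, the BU and BL bounds, and the deviation DTEC using the sign convention stated in the abstract (DTEC = bound - TEC when TEC falls outside the bounds, zero otherwise); the input values are invented for illustration.

      # Sketch: flagging a TEC value against 27-day median +/- 3*sigma bounds.
      import numpy as np

      def tec_deviation(tec_history, tec_now):
          """tec_history: TEC at the same grid cell and UT epoch for the previous
          27 days; tec_now: the value under test."""
          me = np.median(tec_history)
          sigma = np.std(tec_history, ddof=1)
          bu, bl = me + 3.0 * sigma, me - 3.0 * sigma
          if tec_now > bu:
              return bu - tec_now        # positive TEC anomaly (DTEC < 0 in this convention)
          if tec_now < bl:
              return bl - tec_now        # negative TEC anomaly (DTEC > 0)
          return 0.0

      history = [18.2, 17.9, 19.1, 18.5, 20.0, 17.4, 18.8, 19.3, 18.1, 17.7, 19.6, 18.9,
                 18.4, 17.8, 19.0, 18.6, 18.2, 19.4, 18.7, 17.5, 18.3, 19.2, 18.0, 18.9,
                 17.6, 19.5, 18.8]       # 27 daily values (TECU)
      print(tec_deviation(history, 25.0))  # well above BU, so flagged as a positive anomaly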

  6. Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors

    NASA Astrophysics Data System (ADS)

    Slater, David M.; Jacyna, Garry M.

    2013-05-01

    In an effort to secure the northern and southern United States borders, MITRE has been tasked with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness (MOE) for current/future surveillance systems deployed by the Customs and Border Protection Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors (UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle's trajectory such as bearing and speed. The first MOP considered is the probability of detection. We derive probabilities of detection for a network of sensors over an arbitrary number of observation periods and explore how the probability of detection changes when multiple sensors are employed. The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory parameters. We derive the Cramer-Rao bounds for the variances of the estimated parameters in two cases: when no a priori information is known and when the parameters are assumed to be Gaussian with known variances. Sample results show that UGS perform significantly better in the latter case.
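
    A compact way to see the two cases compared above is the linear-Gaussian form of the bound: with no prior information the parameter covariance is bounded below by the inverse Fisher information, and with a Gaussian prior the prior's inverse covariance is added to the Fisher information before inverting. The measurement matrix, noise level and prior below are invented for illustration and are not the UGS bearing/speed model of the study.

      # Sketch: Cramer-Rao bounds with and without a Gaussian prior for a
      # linear measurement model y = H*theta + noise.
      import numpy as np

      rng = np.random.default_rng(0)
      H = rng.normal(size=(20, 2))            # 20 scalar measurements, 2 parameters
      R = 0.5 ** 2 * np.eye(20)               # measurement noise covariance
      P0 = np.diag([4.0, 1.0])                # prior covariance on the parameters

      fisher = H.T @ np.linalg.inv(R) @ H
      crb_no_prior = np.linalg.inv(fisher)                        # classical CRB
      crb_with_prior = np.linalg.inv(fisher + np.linalg.inv(P0))  # Bayesian CRB

      print("parameter variances without prior:", np.diag(crb_no_prior))
      print("parameter variances with prior:   ", np.diag(crb_with_prior))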

  7. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.

  8. Bounded rationality alters the dynamics of paediatric immunization acceptance.

    PubMed

    Oraby, Tamer; Bauch, Chris T

    2015-06-02

    Interactions between disease dynamics and vaccinating behavior have been explored in many coupled behavior-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of the vaccinating behaviour, and represent departures from the pure "rational" decision model that are often described as "bounded rationality". However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behavior model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In contrast, in other cases, a low cost, highly efficacious vaccine can be refused, even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal, if vaccine acceptance is sufficiently high in the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted.

  9. Bounded rationality alters the dynamics of paediatric immunization acceptance

    PubMed Central

    Oraby, Tamer; Bauch, Chris T.

    2015-01-01

    Interactions between disease dynamics and vaccinating behavior have been explored in many coupled behavior-disease models. Cognitive effects such as risk perception, framing, and subjective probabilities of adverse events can be important determinants of the vaccinating behaviour, and represent departures from the pure “rational” decision model that are often described as “bounded rationality”. However, the impact of such cognitive effects in the context of paediatric infectious disease vaccines has received relatively little attention. Here, we develop a disease-behavior model that accounts for bounded rationality through prospect theory. We analyze the model and compare its predictions to a reduced model that lacks bounded rationality. We find that, in general, introducing bounded rationality increases the dynamical richness of the model and makes it harder to eliminate a paediatric infectious disease. In contrast, in other cases, a low cost, highly efficacious vaccine can be refused, even when the rational decision model predicts acceptance. Injunctive social norms can prevent vaccine refusal, if vaccine acceptance is sufficiently high in the beginning of the vaccination campaign. Cognitive processes can have major impacts on the predictions of behaviour-disease models, and further study of such processes in the context of vaccination is thus warranted. PMID:26035413
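
    One common ingredient of prospect-theory treatments like the one above is a probability weighting function that overweights small probabilities of adverse events. The sketch below evaluates the Tversky-Kahneman form with a typical literature value of the curvature parameter; the abstract does not state which functional form or parameters the model actually uses, so this is purely illustrative.

      # Sketch: Tversky-Kahneman probability weighting,
      # w(p) = p^gamma / (p^gamma + (1-p)^gamma)^(1/gamma).
      def probability_weight(p, gamma=0.61):
          return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

      for p in (0.0001, 0.001, 0.01, 0.1):
          print(f"true adverse-event risk {p:.4f} perceived as {probability_weight(p):.4f}")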

  10. Towards a Fuzzy Bayesian Network Based Approach for Safety Risk Analysis of Tunnel-Induced Pipeline Damage.

    PubMed

    Zhang, Limao; Wu, Xianguo; Qin, Yawei; Skibniewski, Miroslaw J; Liu, Wenli

    2016-02-01

    Tunneling excavation is bound to produce significant disturbances to surrounding environments, and the tunnel-induced damage to adjacent underground buried pipelines is of considerable importance for geotechnical practice. A fuzzy Bayesian networks (FBNs) based approach for safety risk analysis is developed in this article with detailed step-by-step procedures, consisting of risk mechanism analysis, the FBN model establishment, fuzzification, FBN-based inference, defuzzification, and decision making. In accordance with the failure mechanism analysis, a tunnel-induced pipeline damage model is proposed to reveal the cause-effect relationships between the pipeline damage and its influential variables. In terms of the fuzzification process, an expert confidence indicator is proposed to reveal the reliability of the data when determining the fuzzy probability of occurrence of basic events, with both the judgment ability level and the subjectivity reliability level taken into account. By means of the fuzzy Bayesian inference, the approach proposed in this article is capable of calculating the probability distribution of potential safety risks and identifying the most likely potential causes of accidents under both prior knowledge and given evidence circumstances. A case concerning the safety analysis of underground buried pipelines adjacent to the construction of the Wuhan Yangtze River Tunnel is presented. The results demonstrate the feasibility of the proposed FBN approach and its application potential. The proposed approach can be used as a decision tool to provide support for safety assurance and management in tunnel construction, and thus increase the likelihood of a successful project in a complex project environment. © 2015 Society for Risk Analysis.

  11. Non-resonant excitation of rare-earth ions via virtual Auger process

    NASA Astrophysics Data System (ADS)

    Yassievich, I. N.

    2011-05-01

    The luminescence of rare-earth ions (REI) is often intensified by defects associated with REIs or excitons bound to these defects. In this paper we show that the presence of such a state opens the possibility of non-resonant optical pumping via a process involving a virtual Auger transition. It is a second-order perturbation process in which an electron arrives in a virtual intermediate state due to the optical transition (the first step), and the Auger transition is the second step. We have calculated the cross-section of such an excitation process when the optical transition is accompanied by creation of the exciton bound to the defect associated with the REI and obtained a simple analytical expression for the cross-section. The excess energy of the excitation quanta is taken away by multiphonon emission. The electron-phonon interaction with local phonon vibrations of the bound exciton is assumed to determine the multiphonon process. It is shown that the probability of the process under study exceeds considerably the probability of direct optical 4f-4f absorption, even in the case when the energy distance between the excitation quantum energy and the exciton energy is about 0.1 of the exciton energy. The excitation mechanism considered leads to the appearance of a broad asymmetric band in the excitation spectrum, with the red side much wider and flatter than the blue one.

  12. Fast Nonparametric Machine Learning Algorithms for High-Dimensional Massive Data and Applications

    DTIC Science & Technology

    2006-03-01

    know the probability of that from Lemma 2. Using the union bound, we know that for any query q, the probability that the i-am-feeling-lucky search algorithm...and each point in a d-dimensional space, a naive k-NN search needs to do a linear scan of T for every single query q, and thus the computational time...algorithm based on partition trees with priority search, and give an expected query time O((1/ε)^d log n). But the constant in the O((1/ε)^d log n

  13. Deterministic and unambiguous dense coding

    NASA Astrophysics Data System (ADS)

    Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.

    2006-04-01

    Optimal dense coding using a partially-entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most Ld messages can be transmitted with perfect fidelity, and in the unambiguous case, where, when the protocol succeeds (probability τx), Bob knows for sure that Alice sent message x, and when it fails (probability 1-τx) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ⩽ D a bound is obtained for Ld in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that Ld is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ⩽ D, assuming τx > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τx is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially-entangled states, including noisy (mixed) states.

  14. Removing cosmic spikes using a hyperspectral upper-bound spectrum method

    DOE PAGES

    Anthony, Stephen Michael; Timlin, Jerilyn A.

    2016-11-04

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. As a result, a comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.

  15. Removing cosmic spikes using a hyperspectral upper-bound spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anthony, Stephen Michael; Timlin, Jerilyn A.

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. As a result, a comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.

  16. Removing Cosmic Spikes Using a Hyperspectral Upper-Bound Spectrum Method.

    PubMed

    Anthony, Stephen M; Timlin, Jerilyn A

    2017-03-01

    Cosmic ray spikes are especially problematic for hyperspectral imaging because of the large number of spikes often present and their negative effects upon subsequent chemometric analysis. Fortunately, while the large number of spectra acquired in a hyperspectral imaging data set increases the probability and number of cosmic spikes observed, the multitude of spectra can also aid in the effective recognition and removal of the cosmic spikes. Zhang and Ben-Amotz were perhaps the first to leverage the additional spatial dimension of hyperspectral data matrices (DM). They integrated principal component analysis (PCA) into the upper bound spectrum method (UBS), resulting in a hybrid method (UBS-DM) for hyperspectral images. Here, we expand upon their use of PCA, recognizing that principal components primarily present in only a few pixels most likely correspond to cosmic spikes. Eliminating the contribution of those principal components in those pixels improves the cosmic spike removal. Both simulated and experimental hyperspectral Raman image data sets are used to test the newly developed UBS-DM-hyperspectral (UBS-DM-HS) method which extends the UBS-DM method by leveraging characteristics of hyperspectral data sets. A comparison is provided between the performance of the UBS-DM-HS method and other methods suitable for despiking hyperspectral images, evaluating both their ability to remove cosmic ray spikes and the extent to which they introduce spectral bias.
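
    The record above describes removing principal components that are expressed in only a few pixels. As a rough illustration of that idea (not the authors' UBS-DM-HS implementation), the sketch below flags components whose large PCA scores occur in only a small fraction of pixels and subtracts their contribution from just those pixels; the thresholds and array shape are assumptions.

```python
import numpy as np

def despike_pca_sketch(cube, n_components=10, pixel_fraction=0.01, score_z=8.0):
    """Illustrative spike suppression for a hyperspectral cube (ny, nx, nbands).

    Components whose large scores occur in only a small fraction of pixels are
    treated as cosmic-spike components; their contribution is subtracted from
    just those pixels.
    """
    ny, nx, nb = cube.shape
    X = cube.reshape(-1, nb).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean

    # PCA via SVD of the mean-centered data matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]      # (npix, k)
    loadings = Vt[:n_components]                          # (k, nbands)

    cleaned = Xc.copy()
    for k in range(n_components):
        z = np.abs(scores[:, k]) / (scores[:, k].std() + 1e-12)
        outliers = z > score_z
        # A component strongly expressed in only a few pixels is assumed to be a spike
        if 0 < outliers.sum() <= pixel_fraction * X.shape[0]:
            cleaned[outliers] -= np.outer(scores[outliers, k], loadings[k])

    return (cleaned + mean).reshape(ny, nx, nb)
```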

  17. A procedure for estimating the frequency distribution of CO levels in the micro-region of a highway.

    DOT National Transportation Integrated Search

    1979-01-01

    This report demonstrates that the probability of violating a "not to be exceeded more than once per year", one-hour air quality standard can be bounded from above. This result represents a significant improvement over previous methods of ascertaining...

  18. Causes of Effects and Effects of Causes

    ERIC Educational Resources Information Center

    Pearl, Judea

    2015-01-01

    This article summarizes a conceptual framework and simple mathematical methods of estimating the probability that one event was a necessary cause of another, as interpreted by lawmakers. We show that the fusion of observational and experimental data can yield informative bounds that, under certain circumstances, meet legal criteria of causation.…

  19. Tail mean and related robust solution concepts

    NASA Astrophysics Data System (ADS)

    Ogryczak, Włodzimierz

    2014-01-01

    Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on optimisation of the worst-case scenario results. A softer concept allows one to optimise the tail mean, thus combining performances under multiple worst scenarios. We show that, when considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution only for upper-bounded probabilities. For arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria, thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develop linear-programming-implementable robust solution concepts related to risk-averse optimisation criteria.
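
    The tail mean (the conditional mean over the worst scenarios) is indeed implementable by linear programming. The sketch below is a generic Rockafellar-Uryasev-style formulation of tail-mean maximization using scipy's linprog, not the paper's specific robust model; the scenario matrix, probabilities, and tail size are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def max_tail_mean_portfolio(R, p, beta):
    """Maximize the tail (worst beta-fraction) mean return of a portfolio.

    R    : (S, n) scenario return matrix (scenario s, asset j)
    p    : (S,) scenario probabilities
    beta : tail size, e.g. 0.4 for the worst 40% of probability mass
    Variables: weights x (n), threshold t, shortfalls d (S).
    Maximize  t - (1/beta) * sum_s p_s d_s
    s.t.      d_s >= t - (R x)_s,  d_s >= 0,  sum x = 1,  x >= 0.
    """
    S, n = R.shape
    # variable vector z = [x (n), t (1), d (S)]; linprog minimizes, so negate
    c = np.concatenate([np.zeros(n), [-1.0], p / beta])
    # t - (R x)_s - d_s <= 0
    A_ub = np.hstack([-R, np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n], -res.fun   # weights and optimal tail mean

# Example: 5 equiprobable scenarios, 3 assets (assumed numbers)
R = np.array([[0.02, 0.01, 0.03],
              [-0.01, 0.02, 0.00],
              [0.01, -0.02, 0.01],
              [0.03, 0.01, -0.01],
              [-0.02, 0.00, 0.02]])
print(max_tail_mean_portfolio(R, np.full(5, 0.2), beta=0.4))
```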

  20. On the security of compressed encryption with partial unitary sensing matrices embedding a secret keystream

    NASA Astrophysics Data System (ADS)

    Yu, Nam Yul

    2017-12-01

    The principle of compressed sensing (CS) can be applied in a cryptosystem by providing the notion of security. In this paper, we study the computational security of a CS-based cryptosystem that encrypts a plaintext with a partial unitary sensing matrix embedding a secret keystream. The keystream is obtained by a keystream generator of stream ciphers, where the initial seed becomes the secret key of the CS-based cryptosystem. For security analysis, the total variation distance, bounded by the relative entropy and the Hellinger distance, is examined as a security measure for the indistinguishability. By developing upper bounds on the distance measures, we show that the CS-based cryptosystem can be computationally secure in terms of the indistinguishability, as long as the keystream length for each encryption is sufficiently large with low compression and sparsity ratios. In addition, we consider a potential chosen plaintext attack (CPA) from an adversary, which attempts to recover the key of the CS-based cryptosystem. Associated with the key recovery attack, we show that the computational security of our CS-based cryptosystem is brought by the mathematical intractability of a constrained integer least-squares (ILS) problem. For a sub-optimal, but feasible key recovery attack, we consider a successive approximate maximum-likelihood detection (SAMD) and investigate the performance by developing an upper bound on the success probability. Through theoretical and numerical analyses, we demonstrate that our CS-based cryptosystem can be secure against the key recovery attack through the SAMD.
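
    The security argument above rests on bounding the total variation distance by the relative entropy (Pinsker's inequality) and by the Hellinger distance. The sketch below only illustrates these two standard inequalities on toy discrete distributions; it is not the cryptosystem's analysis, and the example distributions are assumptions.

```python
import numpy as np

def distance_bounds(p, q):
    """Total variation distance between two discrete distributions, together with
    its upper bounds from relative entropy (Pinsker) and Hellinger distance.
    Assumes q > 0 wherever p > 0."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    tv = 0.5 * np.abs(p - q).sum()
    mask = p > 0
    kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))       # relative entropy D(P||Q)
    hell = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    pinsker_bound = np.sqrt(kl / 2.0)                       # TV <= sqrt(D/2)
    hellinger_bound = hell * np.sqrt(2.0 - hell ** 2)       # TV <= H*sqrt(2 - H^2)
    return tv, pinsker_bound, hellinger_bound

# Example: two nearby binary distributions (assumed values)
print(distance_bounds([0.5, 0.5], [0.55, 0.45]))
```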

  1. [How reliable is the monitoring for doping?].

    PubMed

    Hüsler, J

    1990-12-01

    The reliability of dope control, i.e. of the chemical analysis of urine samples in the accredited laboratories and the resulting decisions, is discussed using probabilistic and statistical methods. Basically, we evaluated and estimated the positive predictive value, which is the probability that a urine sample contains prohibited doping substances given a positive test decision. Since there are no statistical data or evidence for some important quantities related to the predictive value, an exact evaluation is not possible; only conservative lower bounds can be given. We found that the predictive value is at least 90% or 95% with respect to the analysis and decision based on the A-sample only, and at least 99% with respect to both A- and B-samples. A more realistic assessment, though without sufficient statistical confidence, suggests that the true predictive value is significantly larger than these lower estimates.
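
    The lower bounds on the predictive value follow from Bayes' rule. A minimal sketch of the generic computation is given below; the prevalence, sensitivity, and specificity values are illustrative assumptions, not the article's estimates.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(prohibited substance present | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: 5% prevalence, 99% sensitivity, 99.5% specificity
print(positive_predictive_value(0.05, 0.99, 0.995))   # ~0.91 for a single analysis
```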

  2. Multistate and multihypothesis discrimination with open quantum systems

    NASA Astrophysics Data System (ADS)

    Kiilerich, Alexander Holm; Mølmer, Klaus

    2018-05-01

    We show how an upper bound for the ability to discriminate any number N of candidates for the Hamiltonian governing the evolution of an open quantum system may be calculated by numerically efficient means. Our method applies an effective master-equation analysis to evaluate the pairwise overlaps between candidate full states of the system and its environment pertaining to the Hamiltonians. These overlaps are then used to construct an N -dimensional representation of the states. The optimal positive-operator valued measure (POVM) and the corresponding probability of assigning a false hypothesis may subsequently be evaluated by phrasing optimal discrimination of multiple nonorthogonal quantum states as a semidefinite programming problem. We provide three realistic examples of multihypothesis testing with open quantum systems.

  3. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
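
    A common spectral estimate of this kind (a sketch, not necessarily the authors' exact construction) takes the SIS-type epidemic threshold to scale as the inverse of the largest eigenvalue of the contact matrix; using only the deterministic backbone then yields one side of a bound, since adding non-negative random connections can only increase the spectral radius. The example network below is an assumption.

```python
import numpy as np

def spectral_threshold(adjacency):
    """SIS-type epidemic threshold estimate 1 / lambda_max for a symmetric
    (deterministic) contact matrix."""
    lam_max = np.linalg.eigvalsh(adjacency)[-1]
    return 1.0 / lam_max

# Assumed deterministic backbone: a 4-node ring
A_det = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)

# Adding random links (non-negative entries) can only increase lambda_max,
# so the backbone's threshold upper-bounds the full network's threshold.
print(spectral_threshold(A_det))
```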

  4. Global behavior analysis for stochastic system of 1,3-PD continuous fermentation

    NASA Astrophysics Data System (ADS)

    Zhu, Xi; Kliemann, Wolfgang; Li, Chunfa; Feng, Enmin; Xiu, Zhilong

    2017-12-01

    Global behavior of a stochastic system for continuous fermentation in glycerol bio-dissimilation to 1,3-propanediol by Klebsiella pneumoniae is analyzed in this paper. This bioprocess cannot avoid stochastic perturbations caused by internal and external disturbances, which act on the growth rate. These negative factors can limit and degrade the achievable performance of controlled systems. Based on multiplicity phenomena, the equilibria and bifurcations of the deterministic system are analyzed. Then, a stochastic model is presented as a bounded Markov diffusion process. In order to analyze the global behavior, we compute the control sets for the associated control system. The probability distributions of the relative supports are also computed. The simulation results indicate how the disturbed biosystem tends to stationary behavior globally.

  5. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded: assuming only a lower bound of species or classes. Nevertheless there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals with upper bound estimates larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bound richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
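
    As a minimal illustration of the doubly-bounded idea, the sketch below computes the standard bias-corrected Chao1 estimate and then clips the estimate and a supplied confidence interval to the known range [observed classes, maximum possible classes]. This clipping is a simplification of the authors' method, and the counts and interval widths are assumptions.

```python
import numpy as np

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from a vector of class counts."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)          # singletons
    f2 = np.sum(counts == 2)          # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

def doubly_bounded(estimate, ci_low, ci_high, s_obs, s_max):
    """Clip a richness estimate and its confidence interval to [s_obs, s_max]."""
    clip = lambda v: min(max(v, s_obs), s_max)
    return clip(estimate), clip(ci_low), clip(ci_high)

# Illustrative data: 12 observed classes, maximum possible number of classes = 15
counts = np.array([9, 7, 5, 4, 3, 2, 2, 1, 1, 1, 1, 1])
est = chao1(counts)
print(doubly_bounded(est, est - 4, est + 10, s_obs=len(counts), s_max=15))
```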

  6. Computer simulation results for bounds on the effective conductivity of composite media

    NASA Astrophysics Data System (ADS)

    Smith, P. A.; Torquato, S.

    1989-02-01

    This paper studies the determination of third- and fourth-order bounds on the effective conductivity σe of a composite material composed of aligned, infinitely long, identical, partially penetrable, circular cylinders of conductivity σ2 randomly distributed throughout a matrix of conductivity σ1. Both bounds involve the microstructural parameter ζ2 which is a multifold integral that depends upon S3, the three-point probability function of the composite. This key integral ζ2 is computed (for the possible range of cylinder volume fraction φ2) using a Monte Carlo simulation technique for the penetrable-concentric-shell model in which cylinders are distributed with an arbitrary degree of impenetrability λ, 0≤λ≤1. Results for the limiting cases λ=0 ("fully penetrable" or randomly centered cylinders) and λ=1 ("totally impenetrable" cylinders) compare very favorably with theoretical predictions made by Torquato and Beasley [Int. J. Eng. Sci. 24, 415 (1986)] and by Torquato and Lado [Proc. R. Soc. London Ser. A 417, 59 (1988)], respectively. Results are also reported for intermediate values of λ: cases which heretofore have not been examined. For a wide range of α=σ2/σ1 (conductivity ratio) and φ2, the third-order bounds on σe significantly improve upon second-order bounds which just depend upon φ2. The fourth-order bounds are, in turn, narrower than the third-order bounds. Moreover, when the cylinders are highly conducting (α≫1), the fourth-order lower bound provides an excellent estimate of the effective conductivity for a wide range of volume fractions.

  7. The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.

    PubMed

    Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin

    2018-04-05

    This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computation burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method is to introduce some 0-1 latent variables to deal with the Gaussian mixture model. Since the regular condition of the posterior Cramér-Rao bound is unsatisfied for the discrete uncertain system, we use some continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. It avoids the complex integral, which can reduce the computation burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with the sensor selection of a large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.

  8. Bayesian Networks for enterprise risk assessment

    NASA Astrophysics Data System (ADS)

    Bonafede, C. E.; Giudici, P.

    2007-08-01

    Depending on the typology of activity and priority, risks can assume diverse meanings and can be assessed in different ways. Risk, in general, is measured as a combination of the probability of an event (frequency) and its consequence (impact). To estimate the frequency and the impact (severity), historical data or expert opinions (either qualitative or quantitative data) are used. Moreover, qualitative data must be converted into numerical values or bounds to be used in the model. In the case of enterprise risk assessment the considered risks are, for instance, strategic, operational, legal and image risks, which are often difficult to quantify. So in most cases only expert data, gathered by scorecard approaches, are available for risk analysis. Bayesian Networks (BNs) are a useful tool to integrate different information and in particular to study the risk's joint distribution by using data collected from experts. In this paper we show a possible approach for building a BN in the particular case in which only prior probabilities of node states and marginal correlations between nodes are available, and when the variables have only two states.

  9. Computational Aspects of N-Mixture Models

    PubMed Central

    Dennis, Emily B; Morgan, Byron JT; Ridout, Martin S

    2015-01-01

    The N-mixture model is widely used to estimate the abundance of a population in the presence of unknown detection probability from only a set of counts subject to spatial and temporal replication (Royle, 2004, Biometrics 60, 105–115). We explain and exploit the equivalence of N-mixture and multivariate Poisson and negative-binomial models, which provides powerful new approaches for fitting these models. We show that particularly when detection probability and the number of sampling occasions are small, infinite estimates of abundance can arise. We propose a sample covariance as a diagnostic for this event, and demonstrate its good performance in the Poisson case. Infinite estimates may be missed in practice, due to numerical optimization procedures terminating at arbitrarily large values. It is shown that the use of a bound, K, for an infinite summation in the N-mixture likelihood can result in underestimation of abundance, so that default values of K in computer packages should be avoided. Instead we propose a simple automatic way to choose K. The methods are illustrated by analysis of data on Hermann's tortoise Testudo hermanni. PMID:25314629
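
    The sensitivity to the truncation bound K can be seen directly from the single-site N-mixture likelihood (Royle 2004), sketched below; this is not the authors' diagnostic or their automatic choice of K, and the counts and parameter values are assumptions.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmixture_loglik(lam, p, counts, K):
    """Log-likelihood of repeated counts at one site under the N-mixture model,
    truncating the infinite sum over true abundance N at K."""
    counts = np.asarray(counts)
    N = np.arange(counts.max(), K + 1)
    # P(N) * prod_t P(count_t | N, p), summed over feasible abundances N
    terms = poisson.pmf(N, lam) * np.prod(binom.pmf(counts[:, None], N, p), axis=0)
    return np.log(terms.sum())

counts = [3, 5, 4]          # illustrative counts at one site over 3 visits
for K in (10, 50, 500):     # too small a K changes the likelihood noticeably
    print(K, nmixture_loglik(lam=20.0, p=0.2, counts=counts, K=K))
```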

  10. Probabilistic Thermal Analysis During Mars Reconnaissance Orbiter Aerobraking

    NASA Technical Reports Server (NTRS)

    Dec, John A.

    2007-01-01

    A method for performing a probabilistic thermal analysis during aerobraking has been developed. The analysis is performed on the Mars Reconnaissance Orbiter solar array during aerobraking. The methodology makes use of a response surface model derived from a more complex finite element thermal model of the solar array. The response surface is a quadratic equation which calculates the peak temperature for a given orbit drag pass at a specific location on the solar panel. Five different response surface equations are used, one of which predicts the overall maximum solar panel temperature, and the remaining four predict the temperatures of the solar panel thermal sensors. The variables used to define the response surface can be characterized as either environmental, material property, or modeling variables. Response surface variables are statistically varied in a Monte Carlo simulation. The Monte Carlo simulation produces mean temperatures and 3 sigma bounds as well as the probability of exceeding the designated flight allowable temperature for a given orbit. Response surface temperature predictions are compared with the Mars Reconnaissance Orbiter flight temperature data.
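
    A minimal sketch of the workflow described above: sample the response-surface inputs, propagate them through a quadratic surface, and report the mean, 3-sigma bounds, and exceedance probability. The surface coefficients, input distributions, and allowable temperature are assumptions, not the MRO values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic response surface for peak solar-panel temperature (deg C)
# in standardized inputs x = [heat rate, emissivity, conductance]
b0 = 150.0
b = np.array([25.0, -8.0, -5.0])
B = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.5],
              [0.0, 0.5, 1.0]])

# Statistically vary the environmental/material/modeling inputs (assumed standard normal)
x = rng.normal(size=(100_000, 3))
temps = b0 + x @ b + np.einsum("ij,jk,ik->i", x, B, x)

allowable = 175.0  # assumed flight-allowable limit
print("mean temperature   :", temps.mean())
print("3-sigma bounds     :", temps.mean() - 3 * temps.std(), temps.mean() + 3 * temps.std())
print("P(exceed allowable):", np.mean(temps > allowable))
```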

  11. Calculations of reliability predictions for the Apollo spacecraft

    NASA Technical Reports Server (NTRS)

    Amstadter, B. L.

    1966-01-01

    A new method of reliability prediction for complex systems is defined. Calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple-failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.

  12. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  13. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  14. Optimum measurement for unambiguously discriminating two mixed states: General considerations and special cases

    NASA Astrophysics Data System (ADS)

    Herzog, Ulrike; Bergou, János A.

    2006-04-01

    Based on our previous publication [U. Herzog and J. A. Bergou, Phys. Rev. A 71, 050301(R)(2005)] we investigate the optimum measurement for the unambiguous discrimination of two mixed quantum states that occur with given prior probabilities. Unambiguous discrimination of nonorthogonal states is possible in a probabilistic way, at the expense of a nonzero probability of inconclusive results, where the measurement fails. Along with a discussion of the general problem, we give an example illustrating our method of solution. We also provide general inequalities for the minimum achievable failure probability and discuss in more detail the necessary conditions that must be fulfilled when its absolute lower bound, proportional to the fidelity of the states, can be reached.
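
    The fidelity-proportional lower bound referred to above is commonly written as Q >= 2*sqrt(eta1*eta2)*F(rho1, rho2) for prior probabilities eta1 and eta2. The sketch below evaluates this bound for two assumed, slightly mixed qubit states; it illustrates the bound only, not the construction of the optimal measurement.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho1, rho2):
    """Uhlmann fidelity F = tr sqrt( sqrt(rho1) rho2 sqrt(rho1) )."""
    s = sqrtm(rho1)
    return np.real(np.trace(sqrtm(s @ rho2 @ s)))

def failure_prob_lower_bound(rho1, rho2, eta1, eta2):
    """Fidelity-based lower bound on the failure probability of unambiguous
    discrimination of rho1 (prior eta1) and rho2 (prior eta2)."""
    return 2.0 * np.sqrt(eta1 * eta2) * fidelity(rho1, rho2)

# Two nonorthogonal, slightly mixed qubit states (illustrative assumptions)
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(0.4), np.sin(0.4)])
mix = 0.05 * np.eye(2) / 2
rho1 = 0.95 * np.outer(psi1, psi1) + mix
rho2 = 0.95 * np.outer(psi2, psi2) + mix
print(failure_prob_lower_bound(rho1, rho2, eta1=0.5, eta2=0.5))
```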

  15. Safe Onboard Guidance and Control Under Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars James

    2011-01-01

    An algorithm was developed that determines the fuel-optimal spacecraft guidance trajectory that takes into account uncertainty, in order to guarantee that mission safety constraints are satisfied with the required probability. The algorithm uses convex optimization to solve for the optimal trajectory. Convex optimization is amenable to onboard solution due to its excellent convergence properties. The algorithm is novel because, unlike prior approaches, it does not require time-consuming evaluation of multivariate probability densities. Instead, it uses a new mathematical bounding approach to ensure that probability constraints are satisfied, and it is shown that the resulting optimization is convex. Empirical results show that the approach is many orders of magnitude less conservative than existing set conversion techniques, for a small penalty in computation time.

  16. A Framework for Bounding Nonlocality of State Discrimination

    NASA Astrophysics Data System (ADS)

    Childs, Andrew M.; Leung, Debbie; Mančinska, Laura; Ozols, Maris

    2013-11-01

    We consider the class of protocols that can be implemented by local quantum operations and classical communication (LOCC) between two parties. In particular, we focus on the task of discriminating a known set of quantum states by LOCC. Building on the work in the paper Quantum nonlocality without entanglement (Bennett et al., Phys Rev A 59:1070-1091, 1999), we provide a framework for bounding the amount of nonlocality in a given set of bipartite quantum states in terms of a lower bound on the probability of error in any LOCC discrimination protocol. We apply our framework to an orthonormal product basis known as the domino states and obtain an alternative and simplified proof that quantifies its nonlocality. We generalize this result for similar bases in larger dimensions, as well as the “rotated” domino states, resolving a long-standing open question (Bennett et al., Phys Rev A 59:1070-1091, 1999).

  17. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies, however, provide accurate information to travelers, and our simulation results show that accurate information can bring negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is found to improve efficiency in terms of capacity, oscillations, and the gap from the system equilibrium.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Aiming; Rajashankar, Kanagalaghatta R.; Patel, Dinshaw J.

    Significant advances in our understanding of RNA architecture, folding and recognition have emerged from structure-function studies on riboswitches, non-coding RNAs whose sensing domains bind small ligands and whose adjacent expression platforms contain RNA elements involved in the control of gene regulation. We now report on the ligand-bound structure of the Thermotoga petrophila fluoride riboswitch, which adopts a higher-order RNA architecture stabilized by pseudoknot and long-range reversed Watson-Crick and Hoogsteen A•U pair formation. The bound fluoride ion is encapsulated within the junctional architecture, anchored in place through direct coordination to three Mg²⁺ ions, which in turn are octahedrally coordinated to water molecules and five inwardly pointing backbone phosphates. Our structure of the fluoride riboswitch in the bound state shows how RNA can form a binding pocket selective for fluoride, while discriminating against larger halide ions. The T. petrophila fluoride riboswitch probably functions in gene regulation through a transcription termination mechanism.

  19. Analytic Confusion Matrix Bounds for Fault Detection and Isolation Using a Sum-of-Squared- Residuals Approach

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2009-01-01

    Given a system which can fail in 1 of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data in order to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
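
    A quick way to see what the analytic bounds must bracket is to estimate the confusion matrix by Monte Carlo under the same Gaussian-residual assumption: with unit-variance residuals, the SSR under the correct hypothesis is a central chi-square and under the others a noncentral one. The fault signatures, noise level, and trial count below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def confusion_matrix_mc(fault_signatures, sigma=1.0, n_trials=20_000):
    """Monte Carlo confusion matrix for sum-of-squared-residuals FDI.

    fault_signatures[i] is the mean sensor-residual vector seen when hypothesis i
    is wrong about the true fault; isolation picks the hypothesis with the
    smallest SSR.  Entry [true, isolated] is an empirical probability that the
    analytic bounds of the record above would bracket.
    """
    n = len(fault_signatures)
    C = np.zeros((n, n))
    for true in range(n):
        for _ in range(n_trials):
            ssr = []
            for hyp in range(n):
                mean = (np.zeros_like(fault_signatures[0]) if hyp == true
                        else fault_signatures[true] - fault_signatures[hyp])
                r = mean + sigma * rng.normal(size=mean.shape)
                ssr.append(np.sum(r ** 2))
            C[true, np.argmin(ssr)] += 1
    return C / n_trials

# Two illustrative faults observed through 4 sensors
sig = [np.array([2.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.5, 0.0, 1.0])]
print(confusion_matrix_mc(sig))
```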

  20. Hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis method for mid-frequency analysis of built-up systems with epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan

    2017-09-01

    Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing the evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by the focal element and basic probability assignment (BPA), and dealt with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency response of interest, such as the ensemble average of the energy response and the cross-spectrum response, is calculated analytically by using the conventional hybrid FE/SEA method. Inspired by the probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burdens for the extreme value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.

  1. Silver-Polyimide Nanocomposite Films: Single-Stage Synthesis and Analysis of Metalized Partially-Fluorinated Polyimide BTDA/4-BDAF Prepared from Silver(I) Complexes

    NASA Astrophysics Data System (ADS)

    Abelard, Joshua Erold Robert

    We begin by defining the concept of 'open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain 'boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of 'inputs' and 'outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a 'black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.

  2. Development and Application of Pyrolysis Gas Chromatography/Mass Spectrometry for the Analysis of Bound Trinitrotoluene Residues in Soil

    USGS Publications Warehouse

    Weiss, J.M.; Mckay, A.J.; Derito, C.; Watanabe, C.; Thorn, K.A.; Madsen, E.L.

    2004-01-01

    TNT (trinitrotoluene) is a contaminant of global environmental significance, yet determining its environmental fate has posed longstanding challenges. To date, only differential extraction-based approaches have been able to determine the presence of covalently bound, reduced forms of TNT in field soils. Here, we employed thermal elution, pyrolysis, and gas chromatography/mass spectrometry (GC/MS) to distinguish between covalently bound and noncovalently bound reduced forms of TNT in soil. Model soil organic matter-based matrixes were used to develop an assay in which noncovalently bound (monomeric) aminodinitrotoluene (ADNT) and diaminonitrotoluene (DANT) were desorbed from the matrix and analyzed at a lower temperature than covalently bound forms of these same compounds. A thermal desorption technique, evolved gas analysis, was initially employed to differentiate between covalently bound and added 15N-labeled monomeric compounds. A refined thermal elution procedure, termed "double-shot analysis" (DSA), allowed a sample to be sequentially analyzed in two phases. In phase 1, all of an added 15N-labeled monomeric contaminant was eluted from the sample at relatively low temperature. In phase 2 during high-temperature pyrolysis, the remaining covalently bound contaminants were detected. DSA analysis of soil from the Louisiana Army Ammunition Plant (LAAP; ≈5000 ppm TNT) revealed the presence of DANT, ADNT, and TNT. After scrutinizing the DSA data and comparing them to results from solvent-extracted and base/acid-hydrolyzed LAAP soil, we concluded that the TNT was a noncovalently bound "carryover" from phase 1. Thus, the pyrolysis-GC/MS technique successfully defined covalently bound pools of ADNT and DANT in the field soil sample.

  3. Partially Identifying Treatment Effects with an Application to Covering the Uninsured

    ERIC Educational Resources Information Center

    Kreider, Brent; Hill, Steven C.

    2009-01-01

    We extend the nonparametric literature on partially identified probability distributions and use our analytical results to provide sharp bounds on the impact of universal health insurance on provider visits and medical expenditures. Our approach accounts for uncertainty about the reliability of self-reported insurance status as well as uncertainty…

  4. Comparing Performance of Methods to Deal with Differential Attrition in Lottery Based Evaluations

    ERIC Educational Resources Information Center

    Zamarro, Gema; Anderson, Kaitlin; Steele, Jennifer; Miller, Trey

    2016-01-01

    The purpose of this study is to study the performance of different methods (inverse probability weighting and estimation of informative bounds) to control for differential attrition by comparing the results of different methods using two datasets: an original dataset from Portland Public Schools (PPS) subject to high rates of differential…

  5. Some Factor Analytic Approximations to Latent Class Structure.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Denton, William T.

    Three procedures, alpha, image, and uniqueness rescaling, were applied to a joint occurrence probability matrix. That matrix was the basis of a well-known latent class structure. The values of the recurring subscript elements were varied as follows: Case 1 - The known elements were input; Case 2 - The upper bounds to the recurring subscript…

  6. Teaching Qualitative Energy-Eigenfunction Shape with Physlets

    ERIC Educational Resources Information Center

    Belloni, Mario; Christian, Wolfgang; Cox, Anne J.

    2007-01-01

    More than 35 years ago, French and Taylor outlined an approach to teach students and teachers alike how to understand "qualitative plots of bound-state wave functions." They described five fundamental statements based on the quantum-mechanical concepts of probability and energy (total and potential), which could be used to deduce the shape of…

  7. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/ STP method is presented in Section 4, including accuracy bounds and computational effort

  8. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport aircraft model. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  9. Decentralized Routing and Diameter Bounds in Entangled Quantum Networks

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2017-04-01

    Entangled quantum networks are a necessity for any future quantum internet, long-distance quantum key distribution, and quantum repeater networks. The entangled quantum nodes can communicate through several different levels of entanglement, leading to a heterogeneous, multi-level entangled network structure. The level of entanglement between the quantum nodes determines the hop distance, the number of spanned nodes, and the probability of the existence of an entangled link in the network. In this work we define a decentralized routing for entangled quantum networks. We show that the probability distribution of the entangled links can be modeled by a specific distribution in a base-graph. The results allow us to perform efficient routing to find the shortest paths in entangled quantum networks by using only local knowledge of the quantum nodes. We give bounds on the maximum value of the total number of entangled links of a path. The proposed scheme can be directly applied in practical quantum communications and quantum networking scenarios. This work was partially supported by the Hungarian Scientific Research Fund - OTKA K-112125.

  10. Cross-stream migration of active particles

    NASA Astrophysics Data System (ADS)

    Uspal, William; Katuri, Jaideep; Simmchen, Juliane; Miguel-Lopez, Albert; Sanchez, Samuel

    For natural microswimmers, the interplay of swimming activity and external flow can promote robust directed motion, e.g. propulsion against (upstream rheotaxis) or perpendicular to the direction of flow. These effects are generally attributed to their complex body shapes and flagellar beat patterns. Here, using catalytic Janus particles as a model system, we report on a strong directional response that naturally emerges for spherical active particles in a channel flow. The particles align their propulsion axis to be perpendicular to both the direction of flow and the normal vector of a nearby bounding surface. We develop a deterministic theoretical model that captures this spontaneous transverse orientational order. We show how the directional response emerges from the interplay of external shear flow and swimmer/surface interactions (e.g., hydrodynamic interactions) that originate in swimming activity. Finally, adding the effect of thermal noise, we obtain probability distributions for the swimmer orientation that show good agreement with the experimental probability distributions. Our findings show that the qualitative response of microswimmers to flow is sensitive to the detailed interaction between individual microswimmers and bounding surfaces.

  11. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao

    1991-01-01

    Various types of multistage decoding for multilevel block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded distance, are discussed. Error performance of such codes is analyzed for a memoryless additive channel based on various types of multistage decoding, and upper bounds on the probability of an incorrect decoding are derived. It was found that, if the component codes of a multilevel modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. It was also found that the difference in performance between suboptimum multistage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^-6. Multistage decoding of multilevel modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  12. Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    This paper studies an attacker against a cyberphysical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.

  13. Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems

    DOE PAGES

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2017-03-31

    This paper studies an attacker against a cyberphysical system (CPS) whose goal is to move the state of a CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker’s probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS’s detection statistic. We formulate a linear quadratic cost function that captures the attacker’s control goal and establish constraints on the induced bias that reflect the attacker’s detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker’s state estimate. In the case that the attacker’s bias is upper bounded by a positive constant, we provide two algorithms – an optimal algorithm and a sub-optimal, less computationally intensive algorithm – to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.

  14. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  15. Key management and encryption under the bounded storage model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.

    2005-11-01

    There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with Dziembowski et al. [11] encryption scheme we can construct a scheme that solves the timing synchronization error problem. In addition to this work we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.

  16. Limitations of shallow nets approximation.

    PubMed

    Lin, Shao-Bo

    2017-10-01

    In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is attained, with high probability, for all functions in balls of the reproducing kernel Hilbert space; this differs from the classical minimax approximation error estimates. This result, together with existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Applying the log-normal distribution to target detection

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    1992-09-01

    Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appears reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage of being bounded to positive values, an important consideration since the probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit both the target transfer function and the probability of detection of rectangular targets.
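
    A minimal sketch of using a log-normal model for a probability-of-detection curve with scipy is given below; the median and log-standard deviation are illustrative assumptions, not values fitted to MRT data.

```python
import numpy as np
from scipy.stats import lognorm

# Probability of detection modeled as the CDF of a log-normal distribution of the
# required quantity (e.g. contrast): P_d(x) = P(threshold <= x).
m, s = 1.0, 0.35                 # assumed median and log-standard deviation
x = np.linspace(0.1, 3.0, 8)
p_detect = lognorm.cdf(x, s=s, scale=m)    # scipy convention: scale = exp(mu) = median
print(np.round(p_detect, 3))
```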

  18. Narrow Escape of Interacting Diffusing Particles

    NASA Astrophysics Data System (ADS)

    Agranov, Tal; Meerson, Baruch

    2018-03-01

    The narrow escape problem deals with the calculation of the mean escape time (MET) of a Brownian particle from a bounded domain through a small hole on the domain's boundary. Here we develop a formalism which allows us to evaluate the nonescape probability of a gas of diffusing particles that may interact with each other. In some cases the nonescape probability allows us to evaluate the MET of the first particle. The formalism is based on the fluctuating hydrodynamics and the recently developed macroscopic fluctuation theory. We also uncover an unexpected connection between the narrow escape of interacting particles and thermal runaway in chemical reactors.

  19. Structurally conserved water molecules in ribonuclease T1.

    PubMed

    Malin, R; Zielenkiewicz, P; Saenger, W

    1991-03-15

    In the high resolution (1.7-1.9 A) crystal structures of ribonuclease T1 (RNase T1) in complex with guanosine, guanosine 2'-phosphate, guanylyl 2',5'-guanosine, and vanadate, there are 30 water sites in nearly identical (+/- 1 A) positions that are considered conserved. One water is tightly bound to Asp76(O delta), Thr93(O gamma), Cys6(O), and Asn9(N); another bridges two loops by hydrogen-bonding to Tyr68(O eta) and to Ser35(N), Asn36(N); a loop structure is stabilized by two waters coordinated to Gly31(O) and His27(N delta), and by water bound to cis-Pro39(O). Most notable is a hydrogen-bonded chain of 10 water molecules. Waters 1-5 of this chain are inaccessible to solvent, are anchored at Trp59(N), and stitch together the loop formed by segments 60-68; waters 5-8 coordinate to Ca2+, and waters 9 and 10 hydrogen-bond to N-terminal side chains of the alpha-helix. The water chain and two conserved water molecules are bound to amino acids adjacent to the active site residues His40, Glu58, Arg77, and His92; they are probably involved in maintaining their spatial orientation required for catalysis. Water sites must be considered in genetic engineering; the mutation Trp59Tyr, which probably influences the 10-water chain, doubles the catalytic activity of RNase T1.

  20. Estimating inelastic heavy-particle - hydrogen collision data. II. Simplified model for ionic collisions and application to barium-hydrogen ionic collisions

    NASA Astrophysics Data System (ADS)

    Belyaev, Andrey K.; Yakovleva, Svetlana A.

    2017-12-01

    Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33

  1. Estimating inelastic heavy-particle-hydrogen collision data. I. Simplified model and application to potassium-hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Belyaev, Andrey K.; Yakovleva, Svetlana A.

    2017-10-01

    Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy-particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147

  2. Tight Bounds for Minimax Grid Matching, with Applications to the Average Case Analysis of Algorithms.

    DTIC Science & Technology

    1986-05-01

    MIT Laboratory for Computer Science technical memorandum MIT/LCS/TM-298 (interim research report, May 1986); only report-form front matter repeating the title is available in place of an abstract.

  3. Application of Bounded Linear Stability Analysis Method for Metrics-Driven Adaptive Control

    NASA Technical Reports Server (NTRS)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje

    2009-01-01

    This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method to metrics-driven adaptive control. The BLSA method is used to analyze the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. Through the application of BLSA, the adaptive gain is adjusted during adaptation in order to meet specified phase margin requirements. Metrics-driven adaptive control is evaluated for a second-order system that represents pitch attitude control of a generic transport aircraft. The analysis shows that the system with the metrics-conforming variable adaptive gain becomes more robust to unmodeled dynamics or time delay. The effect of the BLSA analysis time window on meeting the stability margin criteria is also evaluated.

  4. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use is provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
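
    A minimal sketch of the general idea of a regression prediction interval (not the project's actual procedure): fit a linear calibration to check-load data and bound a new load prediction at a chosen confidence level. The data values and the straight-line model below are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical check-load calibration data (applied load vs. balance reading); illustrative only.
    applied = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
    reading = np.array([0.2, 49.1, 99.5, 151.2, 198.8, 250.9])

    X = np.column_stack([np.ones_like(applied), applied])   # intercept + slope
    beta, *_ = np.linalg.lstsq(X, reading, rcond=None)
    resid = reading - X @ beta
    n, p = X.shape
    s2 = resid @ resid / (n - p)                             # residual variance

    def prediction_interval(x_new, level=0.95):
        """Two-sided prediction interval for a single new observation at x_new."""
        x0 = np.array([1.0, x_new])
        y_hat = x0 @ beta
        se_pred = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
        t_crit = stats.t.ppf(0.5 + level / 2.0, df=n - p)
        return y_hat - t_crit * se_pred, y_hat + t_crit * se_pred

    lo, hi = prediction_interval(125.0)
    print(f"95% prediction interval at 125.0: [{lo:.2f}, {hi:.2f}]")
    ```

    A prediction interval is wider than a confidence interval on the fitted line because it also carries the residual scatter of a single new observation, which is the relevant quantity when confirming an individual check-load.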

  5. Phosphorous digestibility and activity of intestinal phytase in hybrid tilapia, Oreochromis niloticus X O. aureus

    USGS Publications Warehouse

    La Vorgna, M.W.; Hafez, Y.; Hughes, S.G.; Handwerker, T.

    2003-01-01

    Experiments were conducted to determine the degree to which phytate-bound phosphorus from plant protein sources could be used by hybrid tilapia (Oreochromis niloticus X O. aureus). Utilizing an inert marker technique with chromic oxide, hybrid tilapia in our study were effective at utilizing both inorganic and phytate phosphorus as evidenced by average apparent digestibility values of 93.2% and 90.0% for total and phytate phosphorus, respectively. Analysis of the intestinal brush border membrane of the tilapia revealed enzyme activity that was capable of hydrolyzing phytic acid. The presence of phytic acid hydrolyzing enzyme activity in the intestinal brush border provides a probable mechanism by which these hybrid tilapia are able to utilize phytate phosphorus effectively. © 2003 by The Haworth Press, Inc. All rights reserved.

  6. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight, enable higher operating temperatures requiring less cooling and thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require a complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer s part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer must tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. 
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
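
    The probability-of-failure notion used above can be illustrated with a generic stress-strength (R-S) Monte Carlo estimate. This is a minimal sketch with assumed normal distributions and invented parameter values; it is not the FPI-based method or the CMC vane data discussed in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Assumed distributions for strength R and stress S (illustrative values only).
    R = rng.normal(loc=400.0, scale=35.0, size=n)   # material strength, MPa
    S = rng.normal(loc=300.0, scale=40.0, size=n)   # applied stress, MPa

    p_f = np.mean(S >= R)                            # estimated probability of failure

    # Normal-approximation confidence bounds on the Monte Carlo estimate itself.
    se = np.sqrt(p_f * (1.0 - p_f) / n)
    print(f"P(failure) ~ {p_f:.4f}  (95% bounds ~ {p_f - 1.96*se:.4f} .. {p_f + 1.96*se:.4f})")
    ```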

  7. Performance of cellular frequency-hopped spread-spectrum radio networks

    NASA Astrophysics Data System (ADS)

    Gluck, Jeffrey W.; Geraniotis, Evaggelos

    1989-10-01

    Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.

  8. Security bound of cheat sensitive quantum bit commitment.

    PubMed

    He, Guang Ping

    2015-03-23

    Cheat sensitive quantum bit commitment (CSQBC) loosens the security requirement of quantum bit commitment (QBC), so that the existing impossibility proofs of unconditionally secure QBC can be evaded. But here we analyze the common features in all existing CSQBC protocols, and show that in any CSQBC having these features, the receiver can always learn a non-trivial amount of information on the sender's committed bit before it is unveiled, while his cheating can pass the security check with a probability not less than 50%. The sender's cheating is also studied. The optimal CSQBC protocols that can minimize the sum of the cheating probabilities of both parties are found to be trivial, as they are practically useless. We also discuss the possibility of building a fair protocol in which both parties can cheat with equal probabilities.

  9. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.

  10. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  11. Adrenergic receptors in frontal cortex in human brain.

    PubMed

    Cash, R; Raisman, R; Ruberg, M; Agid, Y

    1985-02-05

    The binding of three adrenergic ligands ([3H]prazosin, [3H]clonidine, [3H]dihydroalprenolol) was studied in the frontal cortex of human brain. alpha 1-Receptors, labeled by [3H]prazosin, predominated. [3H]Clonidine bound to two classes of sites, one of high affinity and one of low affinity. Guanosine triphosphate appeared to lower the affinity of [3H]clonidine for its receptor. [3H]Dihydroalprenolol bound to three classes of sites: the beta 1-receptor, the beta 2-receptor and a receptor with low affinity which represented about 40% of the total binding, but which was probably a non-specific site; the beta 1/beta 2 ratio was 1/2.

  12. Probabilistic Sensitivity Analysis with Respect to Bounds of Truncated Distributions (PREPRINT)

    DTIC Science & Technology

    2010-04-01

    AFRL-RX-WP-TP-2010-4147 (preprint), by H. Millwater and Y. Feng; only report-form front matter repeating the title is available in place of an abstract.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    PIEPHO, M.G.

    Four bounding accidents postulated for the K West Basin integrated water treatment system are evaluated against applicable risk evaluation guidelines. The accidents are a spray leak during fuel retrieval, a spray leak during backflushing, a hydrogen explosion, and a fire breaching a filter vessel and enclosure. Event trees are developed and accident probabilities are estimated. In all cases, the unmitigated dose consequences are below the risk evaluation guidelines.

  14. The price of privately releasing contingency tables, and the spectra of random matrices with correlated rows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasiviswanathan, Shiva; Rudelson, Mark; Smith, Adam

    2009-01-01

    Contingency tables are the method of choice of government agencies for releasing statistical summaries of categorical data. In this paper, we consider lower bounds on how much distortion (noise) is necessary in these tables to provide privacy guarantees when the data being summarized is sensitive. We extend a line of recent work on lower bounds on noise for private data analysis [10, 13, 14, 15] to a natural and important class of functionalities. Our investigation also leads to new results on the spectra of random matrices with correlated rows. Consider a database D consisting of n rows (one per individual), each row comprising d binary attributes. For any subset T of attributes of size |T| = k, the marginal table for T has 2^k entries; each entry counts how many times in the database a particular setting of these attributes occurs. Imagine an agency that wishes to release all (d choose k) contingency tables for a given database. For constant k, previous work showed that distortion Õ(min{n, (n^2 d)^(1/3), √(d^k)}) is sufficient for satisfying differential privacy, a rigorous definition of privacy that has received extensive recent study. Our main contributions are: (1) For ε- and (ε, δ)-differential privacy (with ε constant and δ = 1/poly(n)), we give a lower bound of Ω̃(min{√n, √(d^k)}), which is tight for n = Ω̃(d^k). Moreover, for a natural and popular class of mechanisms based on additive noise, our bound can be strengthened to Ω(√(d^k)), which is tight for all n. Our bounds extend even to non-constant k, losing roughly a factor of √(2^k) compared to the best known upper bounds for large n. (2) We give efficient polynomial-time attacks which allow an adversary to reconstruct sensitive information given insufficiently perturbed contingency table releases. For constant k, we obtain a lower bound of Ω̃(min{√n, √(d^k)}) that applies to a large class of privacy notions, including k-anonymity (along with its variants) and differential privacy. In contrast to our bounds for differential privacy, this bound (a) is shown only for constant k, but (b) is tight for all values of n when k is constant. (3) Our reconstruction-based attacks require a new lower bound on the least singular values of random matrices with correlated rows. For a constant k, consider a matrix M with (d choose k) rows which are formed by taking all possible k-way entry-wise products of an underlying set of d random vectors. We show that even for nearly square matrices with d^k/log d columns, the least singular value is Ω(√(d^k)) with high probability - asymptotically, the same bound as one gets for a matrix with independent rows. The proof requires several new ideas for analyzing random matrices and could be of independent interest.
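
    For context, the Laplace mechanism is one standard way additive noise is applied to a single k-way marginal table (the lower bounds above concern how much any such noise must distort the counts). The sketch below is the generic mechanism, not the authors' construction, and the database is randomly generated for illustration.

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)

    # Hypothetical database: n rows, d binary attributes.
    n, d, k = 1000, 6, 2
    D = rng.integers(0, 2, size=(n, d))

    def marginal(data, attrs):
        """Exact k-way marginal: counts of each of the 2^k settings of the chosen attributes."""
        return {setting: int(np.all(data[:, attrs] == setting, axis=1).sum())
                for setting in product((0, 1), repeat=len(attrs))}

    def laplace_marginal(data, attrs, epsilon):
        """One marginal released with Laplace noise; adding/removing a row changes one cell by 1,
        so the L1 sensitivity of this single table is 1."""
        return {s: c + rng.laplace(scale=1.0 / epsilon)
                for s, c in marginal(data, attrs).items()}

    print(marginal(D, [0, 3]))
    print(laplace_marginal(D, [0, 3], epsilon=0.5))
    ```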

  15. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    DOE PAGES

    Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; ...

    2015-11-13

    Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, in particular for high-fidelity modeling. Computational costs and validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computational models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty; however, the actual effects on final peak clad temperature in a reactor transient may be small, and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk-informed framework. Low-fidelity modeling with large uncertainty may be considered adequate if the uncertainty is considered acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts' knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately, DPRA methods introduce issues associated with combinatorial explosion of states. This study presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high-probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to the safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. Finally, in order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.

  16. Synthesis, characterization, and binding assessment with human serum albumin of three bipyridine lanthanide(III) complexes.

    PubMed

    Aramesh-Boroujeni, Zahra; Bordbar, Abdol-Khalegh; Khorasani-Motlagh, Mozhgan; Sattarinezhad, Elham; Fani, Najme; Noroozifar, Meissam

    2018-05-18

    In this work, the terbium(III), dysprosium(III), and ytterbium(III) complexes containing the 2,2'-bipyridine (bpy) ligand have been synthesized and characterized using CHN elemental analysis, FT-IR, UV-Vis and 1H-NMR techniques, and their binding behavior with human serum albumin (HSA) was studied by UV-Vis, fluorescence and molecular docking examinations. The experimental data indicated that all three lanthanide complexes have high binding affinity to HSA, with effective quenching of HSA fluorescence via a static mechanism. The binding parameters, the type of interaction, the value of resonance energy transfer, and the binding distance between the complexes and HSA were estimated from the analysis of fluorescence measurements and Förster theory. The thermodynamic parameters suggested that van der Waals interactions and hydrogen bonds play an important role in the binding mechanism. While the energy transfer from HSA molecules to all these complexes occurs with high probability, the order of binding constants (BpyTb > BpyDy > BpyYb) indicates the importance of the radius of the Ln3+ ion in the complex-HSA interaction. The results of molecular docking calculations and competitive experiments identified site 3 of HSA, located in subdomain IB, as the most probable binding site for these ligands and also indicated the microenvironment residues around the bound complexes. The computational results are in good agreement with the experimental data.

  17. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
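
    A minimal sketch of the recommended approach, fitting a parametric distribution to the cumulative proportion of recovered propagules: here a log-normal is fitted by nonlinear least squares on the CDF, with made-up sampling times and counts (maximum-likelihood fitting of the interval-censored data is the alternative mentioned above).

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    # Hypothetical retention-time data: upper bounds of sampling intervals (h) and
    # cumulative number of propagules recovered by each time point (illustrative only).
    t_upper = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 24.0, 48.0])
    cum_recovered = np.array([12, 55, 160, 310, 390, 470, 498])
    total = 500
    cum_prob = cum_recovered / total

    def lognorm_cdf(t, sigma, scale):
        return stats.lognorm.cdf(t, sigma, loc=0.0, scale=scale)

    # Fit the log-normal CDF to the cumulative probabilities (nonlinear least squares).
    (sigma_hat, scale_hat), _ = curve_fit(lognorm_cdf, t_upper, cum_prob, p0=(1.0, 6.0))

    print(f"fitted log-normal: sigma = {sigma_hat:.3f}, median = {scale_hat:.2f} h")
    print("mean retention time ~", stats.lognorm.mean(sigma_hat, loc=0.0, scale=scale_hat))
    ```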

  18. Incompleteness and limit of security theory of quantum key distribution

    NASA Astrophysics Data System (ADS)

    Hirota, Osamu; Murakami, Dan; Kato, Kentaro; Futami, Fumio

    2012-10-01

    It is claimed in many papers that the trace distance d guarantees universal composition security in quantum key distribution (QKD) protocols such as BB84. In this introductory paper, we first explain explicitly the main misconception in the claim of unconditional security for QKD theory. In general terms, the cause of the misunderstanding about the security claim is the Lemma in the paper of Renner. It suggests that the generation of a perfect random key is assured with probability (1-d), and that its failure probability is d. Thus, it concludes that the generated key provides a perfect random key sequence when the protocol succeeds, so that QKD provides perfect secrecy for the one-time pad. This is the reason for the composition claim. However, the trace distance (or variational distance) is not the probability of such an event. If d is not small enough, the generated key sequence is never uniform. One therefore needs to reconstruct the evaluation of the trace distance if one wants to use it. One should first go back to the indistinguishability theory of computational-complexity-based cryptography and clarify the meaning of the value of the variational distance; the same analysis is also necessary for the information-theoretic case. The recent serial papers by H. P. Yuen have given answers to these questions. In this paper, we give a more concise description of Yuen's theory and clarify that the upper-bound theories for the trace distance by Tomamichel et al. and Hayashi et al. are built on Renner's incorrect reasoning and are unsuitable as security analyses. Finally, we introduce a new macroscopic quantum communication scheme to replace qubit QKD.

  19. Beating Landauer's Bound: Tradeoff between Accuracy and Heat Dissipation

    NASA Astrophysics Data System (ADS)

    Talukdar, Saurav; Bhaban, Shreyas; Salapaka, Murti

    Landauer's principle states that erasing one bit of stored information is necessarily accompanied by heat dissipation of at least kB T ln 2 per bit. However, this is true only if the erasure process is always successful. We demonstrate that if the erasure process has a success probability p, the minimum heat dissipation per bit is given by kB T (p ln p + (1 - p) ln(1 - p) + ln 2), referred to as the Generalized Landauer Bound, which equals kB T ln 2 if the erasure process is always successful and decreases to zero as p decreases to 0.5. We present a model for a one-bit memory based on a Brownian particle in a double-well potential, motivated by optical tweezers, and achieve erasure by manipulation of the optical fields. The method uniquely provides a handle on the success probability of the erasure. The thermodynamics framework for Langevin dynamics developed by Sekimoto is used to compute the heat dissipation in each realization of the erasure process. Using extensive Monte Carlo simulations, we demonstrate that the Landauer Bound of kB T ln 2 is violated by compromising on the success of the erasure process, while validating the existence of the Generalized Landauer Bound.
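
    The stated bound is easy to evaluate directly. A small sketch, in units of kB T, checking that it recovers ln 2 for a perfectly reliable erasure and falls to zero at p = 0.5:

    ```python
    import math

    def generalized_landauer_bound(p):
        """Minimum heat per erased bit, in units of kB*T, for success probability p (0.5 <= p <= 1)."""
        h = 0.0
        if 0.0 < p < 1.0:                      # 0*log(0) = 0 convention at the endpoints
            h = p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
        return h + math.log(2.0)

    for p in (0.5, 0.75, 0.9, 0.99, 1.0):
        print(f"p = {p:4.2f} -> Q_min/(kB T) = {generalized_landauer_bound(p):.4f}")
    ```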

  20. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that penalizes also the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via low rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.

  1. A Sequence-Dependent DNA Condensation Induced by Prion Protein

    PubMed Central

    2018-01-01

    Different studies have indicated that the prion protein induces hybridization of complementary DNA strands. Cell culture studies showed that the scrapie isoform of the prion protein remained bound to the chromosome. In the present work, we used an oxazole dye, YOYO, as a reporter for quantitative characterization of DNA condensation by the prion protein. We observe that the prion protein induces greater fluorescence quenching of YOYO intercalated in DNA containing only GC bases compared to DNA containing all four bases, whereas the effect on dye bound to DNA containing only AT bases is marginal. DNA-condensing biological polyamines are less effective than the prion protein in quenching DNA-bound YOYO fluorescence. The prion protein induces marginal quenching of fluorescence of the dye bound to oligonucleotides, which are resistant to condensation. Ultrastructural studies with the electron microscope also validate the biophysical data. The GC bases of the target DNA are probably responsible for the increased condensation in the presence of the prion protein. To our knowledge, this is the first report of a human cellular protein inducing a sequence-dependent DNA condensation. The increased condensation of GC-rich DNA by the prion protein may suggest a biological function of the prion protein and a role in its pathogenesis. PMID:29657864

  2. A Sequence-Dependent DNA Condensation Induced by Prion Protein.

    PubMed

    Bera, Alakesh; Biring, Sajal

    2018-01-01

    Different studies have indicated that the prion protein induces hybridization of complementary DNA strands. Cell culture studies showed that the scrapie isoform of the prion protein remained bound to the chromosome. In the present work, we used an oxazole dye, YOYO, as a reporter for quantitative characterization of DNA condensation by the prion protein. We observe that the prion protein induces greater fluorescence quenching of YOYO intercalated in DNA containing only GC bases compared to DNA containing all four bases, whereas the effect on dye bound to DNA containing only AT bases is marginal. DNA-condensing biological polyamines are less effective than the prion protein in quenching DNA-bound YOYO fluorescence. The prion protein induces marginal quenching of fluorescence of the dye bound to oligonucleotides, which are resistant to condensation. Ultrastructural studies with the electron microscope also validate the biophysical data. The GC bases of the target DNA are probably responsible for the increased condensation in the presence of the prion protein. To our knowledge, this is the first report of a human cellular protein inducing a sequence-dependent DNA condensation. The increased condensation of GC-rich DNA by the prion protein may suggest a biological function of the prion protein and a role in its pathogenesis.

  3. Fast Atom Ionization in Strong Electromagnetic Radiation

    NASA Astrophysics Data System (ADS)

    Apostol, M.

    2018-05-01

    The Goeppert-Mayer and Kramers-Henneberger transformations are examined for bound charges placed in electromagnetic radiation in the non-relativistic approximation. The consistent inclusion of the interaction with the radiation field provides the time evolution of the wavefunction with both structural interaction (which ensures the bound state) and electromagnetic interaction. It is shown that in a short time after switching on the high-intensity radiation the bound charges are set free. In these conditions, a statistical criterion is used to estimate the rate of atom ionization. The results correspond to a sudden application of the electromagnetic interaction, in contrast with the well-known ionization probability obtained by quasi-classical tunneling through classically unavailable non-stationary states, or other equivalent methods, where the interaction is introduced adiabatically. For low-intensity radiation the charges oscillate and emit higher-order harmonics, the charge configuration is re-arranged and the process is resumed. Tunneling ionization may appear in these circumstances. Extension of the approach to other applications involving radiation-induced charge emission from bound states is discussed, like ionization of molecules, atomic clusters or proton emission from atomic nuclei. Also, results for a static electric field are included.

  4. Initial Radionuclide Inventories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, H

    The purpose of this analysis is to provide an initial radionuclide inventory (in grams per waste package) and associated uncertainty distributions for use in the Total System Performance Assessment for the License Application (TSPA-LA) in support of the license application for the repository at Yucca Mountain, Nevada. This document is intended for use in postclosure analysis only. Bounding waste stream information and data were collected that capture probable limits. For commercially generated waste, this analysis considers alternative waste stream projections to bound the characteristics of wastes likely to be encountered using arrival scenarios that potentially impact the commercial spent nuclear fuel (CSNF) waste stream. For TSPA-LA, this radionuclide inventory analysis considers U.S. Department of Energy (DOE) high-level radioactive waste (DHLW) glass and two types of spent nuclear fuel (SNF): CSNF and DOE-owned (DSNF). These wastes are placed in two groups of waste packages: the CSNF waste package and the codisposal waste package (CDSP), which are designated to contain DHLW glass and DSNF, or DHLW glass only. The radionuclide inventory for naval SNF is provided separately in the classified "Naval Nuclear Propulsion Program Technical Support Document" for the License Application. As noted previously, the radionuclide inventory data presented here is intended only for TSPA-LA postclosure calculations. It is not applicable to preclosure safety calculations. Safe storage, transportation, and ultimate disposal of these wastes require safety analyses to support the design and licensing of repository equipment and facilities. These analyses will require radionuclide inventories to represent the radioactive source term that must be accommodated during handling, storage and disposition of these wastes. This analysis uses the best available information to identify the radionuclide inventory that is expected at the last year of last emplacement, currently identified as 2030 and 2033, depending on the type of waste. TSPA-LA uses the results of this analysis to decay the inventory to the year of repository closure, projected to be 2060.

  5. Effect of elevation on extreme precipitation of short durations: evidences of orographic signature on the parameters of Depth-Duration-Frequency curves

    NASA Astrophysics Data System (ADS)

    Avanzi, Francesco; De Michele, Carlo; Gabriele, Salvatore; Ghezzi, Antonio; Rosso, Renzo

    2015-04-01

    Here, we show how atmospheric circulation and topography govern the variability of the parameters of depth-duration-frequency (DDF) curves, and we discuss the physical implications of this variability for the formation of extreme precipitation at high elevations. A DDF curve gives the value of the maximum annual precipitation H as a function of the duration D and the probability level F. We consider around 1500 stations over the Italian territory, with at least 20 years of data of maximum annual precipitation depth at different durations. We estimated the DDF parameters at each location by using the asymptotic distribution of extreme values, i.e. the so-called Generalized Extreme Value (GEV) distribution, and assuming a simple scale-invariance hypothesis. Consequently, a DDF curve depends on five different parameters. A first set relates H to the duration (namely, the mean value of the annual maximum precipitation depth for unit duration and the scaling exponent), while a second set links H to F (namely, a scale, position and shape parameter). The value of the shape parameter determines the type of random variable (unbounded, upper bounded, or lower bounded). This extensive analysis shows that the variability of the mean value of annual maximum precipitation depth for unit duration obeys the coupled effect of topography and the modal direction of moisture flux during extreme events. Median values of this parameter decrease with elevation. We call this phenomenon the "reverse orographic effect" on extreme precipitation of short durations, since it is in contrast with general knowledge about the orographic effect on mean precipitation. Moreover, the scaling exponent is mainly driven by topography alone (with increasing values of this parameter at increasing elevations). Therefore, the quantiles of H(D,F) at durations greater than the unit duration turn out to be more variable at high elevations than at low elevations. Additionally, the analysis of the variability of the shape parameter with elevation shows that extreme events at high elevations appear to be distributed according to an upper-bounded probability distribution. This evidence could be a characteristic signature of the formation of extreme precipitation events at high elevations.
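
    As a hedged sketch of how such a five-parameter DDF curve can be evaluated, assume simple scaling H(D) = D^n * H(1) and a GEV distribution for the normalized unit-duration maxima; the parameter values below are invented, not estimates from the study, and note that SciPy's genextreme uses c = -shape relative to the usual GEV convention.

    ```python
    from scipy.stats import genextreme

    # Illustrative parameters (not estimates from the study).
    m1 = 20.0       # mean annual-maximum depth for unit duration, mm
    n_exp = 0.45    # scaling exponent
    loc, scale, xi = 0.85, 0.25, -0.05   # GEV position/scale/shape of H(1)/m1 (dimensionless)

    def ddf_depth(duration_h, return_period_yr):
        """Quantile H(D, F) under simple scaling: H(D, F) = m1 * D**n * (GEV quantile at F)."""
        F = 1.0 - 1.0 / return_period_yr               # non-exceedance probability
        growth = genextreme.ppf(F, c=-xi, loc=loc, scale=scale)
        return m1 * duration_h ** n_exp * growth

    for D in (1, 3, 6, 12, 24):
        print(f"D = {D:2d} h, T = 100 yr -> H ~ {ddf_depth(D, 100):.1f} mm")
    ```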

  6. The Radiation, Interplanetary Shocks, and Coronal Sources (RISCS) Toolset

    NASA Technical Reports Server (NTRS)

    Zank, G. P.; Spann, J.

    2014-01-01

    We outline a plan to develop a physics-based predictive toolset, RISCS, to describe the interplanetary energetic particle and radiation environment throughout the inner heliosphere, including at the Earth. Forecasting and "nowcasting" the radiation environment requires the fusing of three components: 1) the ability to provide probabilities for incipient solar activity; 2) the use of these probabilities and daily coronal and solar wind observations to model the 3D spatial and temporal heliosphere, including magnetic field structure and transients, within 10 AU; and 3) the ability to model the acceleration and transport of energetic particles based on current and anticipated coronal and heliospheric conditions. We describe how to address 1)-3) based on our existing, well-developed, and validated codes and models. The goal of the RISCS toolset is to provide an operational forecast and "nowcast" capability that will a) predict solar energetic particle (SEP) intensities; b) predict spectra for protons and heavy ions; c) predict maximum energies and their duration; d) predict SEP composition; e) predict cosmic ray intensities; and f) predict plasma parameters, including shock arrival times, strength, and obliquity at any given heliospheric location and time. The toolset would have a 72-hour predictive capability, with associated probabilistic bounds, that would be updated hourly thereafter to improve the predicted event(s) and reduce the associated probability bounds. The RISCS toolset would be highly adaptable and portable, capable of running on a variety of platforms to accommodate various operational needs and requirements.

  7. Systematic Onset of Periodic Patterns in Random Disk Packings

    NASA Astrophysics Data System (ADS)

    Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A. C.

    2018-04-01

    We report evidence of a surprising systematic onset of periodic patterns in very tall piles of disks deposited randomly between rigid walls. Independently of the pile width, periodic structures are always observed in monodisperse deposits containing up to 10^7 disks. The probability density function of the lengths of disordered transient phases that precede the onset of periodicity displays an approximately exponential tail. These disordered transients may become very large when the channel width grows without bound. For narrow channels, the probability density of finding periodic patterns of a given period displays a series of discrete peaks, which, however, are washed out completely when the channel width grows.

  8. Stabilization and Structure of wave packets in Rydberg atoms ionized by a strong light field.

    PubMed

    Fedorov, M; Fedorov, S

    1998-09-28

    New features of the phenomenon of interference stabilization of Rydberg atoms are found to exist. The main ones are: (i) dynamical stabilization, which means that in the case of pulses with a smooth envelope the time-dependent residual probability for an atom to survive in bound states remains almost constant in the middle part of a pulse (at the strongest fields); (ii) the existence of strong-field stabilization of the after-pulse residual probability in the case of pulses longer than the classical Kepler period; and (iii) pulsation of the time-dependent Rydberg wave packet formed in the process of photoionization.

  9. What if? Exploring the multiverse through Euclidean wormholes.

    PubMed

    Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador

    2017-01-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.

  10. What if? Exploring the multiverse through Euclidean wormholes

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Krämer, Manuel; Morais, João; Robles-Pérez, Salvador

    2017-10-01

    We present Euclidean wormhole solutions describing possible bridges within the multiverse. The study is carried out in the framework of third quantisation. The matter content is modelled through a scalar field which supports the existence of a whole collection of universes. The instanton solutions describe Euclidean solutions that connect baby universes with asymptotically de Sitter universes. We compute the tunnelling probability of these processes. Considering the current bounds on the energy scale of inflation and assuming that all the baby universes are nucleated with the same probability, we draw some conclusions about which universes are more likely to tunnel and therefore undergo a standard inflationary era.

  11. Radiative transition of hydrogen-like ions in quantum plasma

    NASA Astrophysics Data System (ADS)

    Hu, Hongwei; Chen, Zhanbin; Chen, Wencong

    2016-12-01

    At fusion plasma electron temperature and number density regimes of 1 × 10^3-1 × 10^7 K and 1 × 10^28-1 × 10^31 m^-3, respectively, the excited states and radiative transitions of hydrogen-like ions in fusion plasmas are studied. The results show that the quantum plasma model is more suitable than the Debye screening model for describing the fusion plasma. The relativistic correction to the bound-state energies of low-Z hydrogen-like ions is so small that it can be ignored. The transition probability decreases with plasma density, but the transition probabilities have the same order of magnitude within the same number density regime.

  12. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
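
    The patent's specific processing steps are not reproduced here, but a generic Wald sequential probability ratio test for Poisson counts (background rate versus elevated rate) conveys the basic mechanism; the rates, error levels, and simulated data below are assumptions for illustration.

    ```python
    import math
    import numpy as np

    def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
        """Wald SPRT: decide between background rate lam0 and elevated rate lam1 from count samples."""
        upper = math.log((1.0 - beta) / alpha)      # cross upward -> declare elevated radiation
        lower = math.log(beta / (1.0 - alpha))      # cross downward -> consistent with background
        llr = 0.0
        for i, k in enumerate(counts, start=1):
            # Poisson log-likelihood ratio for one time bin (the k! terms cancel).
            llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
            if llr >= upper:
                return "elevated", i
            if llr <= lower:
                return "background", i
        return "undecided", len(counts)

    rng = np.random.default_rng(7)
    print(sprt_poisson(rng.poisson(5.0, size=200), lam0=5.0, lam1=8.0))   # background data
    print(sprt_poisson(rng.poisson(8.0, size=200), lam0=5.0, lam1=8.0))   # source present
    ```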

  13. Performance of DPSK with convolutional encoding on time-varying fading channels

    NASA Technical Reports Server (NTRS)

    Mui, S. Y.; Modestino, J. W.

    1977-01-01

    The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.

  14. [Scientific basis in the setting of residue limits for veterinary drugs in food of animal origin taking into account the presence of their metabolites].

    PubMed

    Mitsumori, K

    1993-01-01

    The maximum residue level (MRL) for veterinary drugs in food of animal origin has been proposed by FAO/WHO as a new evaluation procedure that takes into account the presence of metabolites in the regulation of veterinary drug residues. The MRL is the maximum concentration of residue, resulting from the use of a veterinary drug, that is recommended to be legally permitted as acceptable in a food. It is established from the Acceptable Daily Intake (ADI) obtained from toxicological studies, the residue concentration of the drug when used according to good practice in the use of veterinary drugs, and the lowest level consistent with the practical analytical methods available for routine residue analysis. Among the veterinary drugs, some chemicals contain a large amount of bound residues that are not extractable from tissues by an analytical method identical to that used for the parent chemicals. In particular, the bioavailable residues, which are probably absorbed when the food is ingested, are of great toxicological concern. In this case, the FAO/WHO recommends that the MRL be established after calculating the daily intake of residues of toxicological concern as the sum of the extractable and the bioavailable bound residues.

  15. The First Detection of [O IV] from an Ultraluminous X-ray Source with Spitzer. II. Evidence for High Luminosity in Holmberg II ULX

    NASA Technical Reports Server (NTRS)

    Berghea, C. T.; Dudik, R. P.; Weaver, K. A.; Kallman, T. R.

    2009-01-01

    This is the second of two papers examining Spitzer Infrared Spectrograph (IRS) observations of the ultraluminous X-ray source (ULX) in Holmberg II. Here we perform detailed photoionization modeling of the infrared lines. Our analysis suggests that the luminosity and morphology of the [O IV] 25.89 micron emission line is consistent with photoionization by the soft X-ray and far ultraviolet (FUV) radiation from the accretion disk of the binary system and inconsistent with narrow beaming. We show that the emission nebula is matter-bounded both in the line of sight direction and to the east, and probably radiation-bounded to the west. A bolometric luminosity in excess of 10^40 erg per second would be needed to produce the measured [O IV] flux. We use modeling and previously published studies to conclude that shocks likely contribute very little, if at all, to the high-excitation line fluxes observed in the Holmberg II ULX. Additionally, we find that the spectral type of the companion star has a surprisingly strong effect on the predicted strength of the [O IV] emission. This finding could explain the origin of [O IV] in some starburst systems containing black hole binaries.

  16. The First Detection of [O IV] from an Ultraluminous X-ray Source with Spitzer. 2; Evidence for High Luminosity in Holmberg II ULX

    NASA Technical Reports Server (NTRS)

    Berghea, C. T.; Dudik, R. P.; Weaver, K. A.; Kallman, T. R.

    2009-01-01

    This is the second of two papers examining Spitzer Infrared Spectrograph (IRS) observations of the ultraluminous X-ray source (ULX) in Holmberg II. Here we perform detailed photoionization modeling of the infrared lines. Our analysis suggests that the luminosity and morphology of the [O IV] 25.89 micron emission line is consistent with photoionization by the soft X-ray and far ultraviolet (FUV) radiation from the accretion disk of the binary system and inconsistent with narrow beaming. We show that the emission nebula is matter-bounded both in the line of sight direction and to the east, and probably radiation-bounded to the west. A bolometric luminosity in excess of 10(exp 40) erg/s would be needed to produce the measured [O IV] flux. We use modeling and previously published studies to conclude that shocks likely contribute very little, if at all, to the high-excitation line fluxes observed in the Holmberg II ULX. Additionally, we find that the spectral type of the companion star has a surprisingly strong effect on the predicted strength of the [O IV] emission. This finding could explain the origin of [O IV] in some starburst systems containing black hole binaries.

  17. Ant-inspired density estimation via random walks.

    PubMed

    Musco, Cameron; Su, Hsin-Hao; Lynch, Nancy A

    2017-10-03

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks.
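
    As an illustration of the encounter-rate idea (a minimal sketch only, not the authors' algorithm or proof construction; the grid size, agent count and step count below are arbitrary assumptions), agents performing lazy random walks on a torus can estimate the global density from the average number of other agents they find in their own cell per step:

      import random
      from collections import Counter

      def estimate_density(L=50, n_agents=250, steps=400, seed=0):
          """Walkers on an L x L torus estimate density from encounter rates.

          True density is n_agents / L**2; each agent counts, at every step,
          how many other agents occupy its cell, and averages over time."""
          rng = random.Random(seed)
          pos = [(rng.randrange(L), rng.randrange(L)) for _ in range(n_agents)]
          encounters = 0
          for _ in range(steps):
              new_pos = []
              for (x, y) in pos:
                  dx, dy = rng.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
                  new_pos.append(((x + dx) % L, (y + dy) % L))
              pos = new_pos
              occupancy = Counter(pos)
              # an agent's encounters this step = other agents sharing its cell
              encounters += sum(occupancy[p] - 1 for p in pos)
          return encounters / (n_agents * steps), n_agents / L ** 2

      est, true = estimate_density()
      print(f"estimated density {est:.4f}   true density {true:.4f}")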

  18. Benefits and Costs of Pulp and Paper Effluent Controls Under the Clean Water Act

    NASA Astrophysics Data System (ADS)

    Luken, Ralph A.; Johnson, F. Reed; Kibler, Virginia

    1992-03-01

    This study quantifies local improvements in environmental quality from controlling effluents in the pulp and paper industry. Although it is confined to a single industry, this study is the first effort to assess the actual net benefits of the Clean Water Act pollution control program. An assessment of water quality benefits requires linking regulatory policy, technical effects, and behavioral responses. Regulatory policies mandate specific controls that influence the quantity and nature of effluent discharges. We identify a subset of stream segments suitable for analysis, describe water quality simulations and control cost calculations under alternative regulatory scenarios, assign feasible water uses to each segment based on water quality, and determine probable upper bounds for the willingness of beneficiaries to pay. Because the act imposes uniform regulations that do not account for differences in compliance costs, existing stream quality, contributions of other effluent sources, and recreation potential, the relation between water quality benefits and costs varies widely across sites. This variation suggests that significant positive net benefits have probably been achieved in some cases, but we conclude that the costs of the Clean Water Act as a whole exceed likely benefits by a significant margin.

  19. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  20. Making the Impossible Possible: Strategies for Fast POMDP Monitoring

    NASA Technical Reports Server (NTRS)

    Washington, Richard; Lau, Sonie (Technical Monitor)

    1998-01-01

    Systems modeled as partially observable Markov decision processes (POMDPs) can be tracked quickly with three restrictions: all actions are grouped together, the out-degree of each system state is bounded by a constant, and the number of non-zero elements in the belief state is bounded by a (different) constant. With these restrictions, the tracking algorithm operates in constant time and linear space. The first restriction assumes that the action itself is unobservable. The second restriction defines a subclass of POMDPs that nevertheless covers a wide range of problems. The third restriction is an approximation technique that can lead to a potentially vexing problem: an observation may be received that has zero probability according to the restricted belief state. This problem of impossibility will cause the belief state to collapse. In this paper we discuss the tradeoffs between the constant bound on the belief state and the quality of the solution. We concentrate on strategies for overcoming the impossibility problem and demonstrate initial experimental results that indicate promising directions.
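
    A minimal sketch of one such truncated tracking step (an illustration under assumptions of our own, not the paper's algorithm: the model matrices are hypothetical, and resetting to the observation likelihood when the truncated belief assigns the observation zero probability is just one possible recovery strategy):

      import numpy as np

      def track_belief(b, T, O, obs, k=4):
          """One tracking step with a belief state truncated to k non-zero entries.

          b   : current belief over states (length S, sums to 1)
          T   : S x S transition matrix, T[s, s'] = P(s' | s)
          O   : S x M observation matrix, O[s', o] = P(o | s')
          obs : index of the received observation
          k   : maximum number of non-zero belief entries kept (approximation)"""
          b_pred = b @ T                 # predict
          b_new = b_pred * O[:, obs]     # weight by observation likelihood
          norm = b_new.sum()
          if norm == 0.0:
              # "impossible" observation under the truncated belief: restart
              # from the observation likelihood alone (an assumed strategy)
              b_new = O[:, obs].astype(float).copy()
              norm = b_new.sum()
          b_new = b_new / norm
          if k < b_new.size:
              cutoff = np.partition(b_new, -k)[-k]   # k-th largest entry
              b_new[b_new < cutoff] = 0.0
              b_new /= b_new.sum()
          return b_new

      # toy usage with random (hypothetical) model matrices
      rng = np.random.default_rng(0)
      T = rng.random((6, 6)); T /= T.sum(axis=1, keepdims=True)
      O = rng.random((6, 3)); O /= O.sum(axis=1, keepdims=True)
      b = np.full(6, 1 / 6)
      print(track_belief(b, T, O, obs=2, k=3))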

  1. Nanoscale observation of local bound charges of patterned protein arrays by scanning force microscopy

    NASA Astrophysics Data System (ADS)

    Oh, Y. J.; Jo, W.; Kim, S.; Park, S.; Kim, Y. S.

    2008-09-01

    A protein patterned surface prepared by micro-contact printing has been investigated by scanning force microscopy. Electrostatic force microscopy (EFM) was utilized for imaging the topography and detecting electrical properties such as the local bound charge distribution of the patterned proteins. It was found that the patterned IgG proteins are arranged down to 1 µm, and the 90° rotation of patterned anti-IgG proteins was successfully achieved. Through the estimation of the effective areas, it was possible to determine the local bound charges of patterned proteins, which have opposite electrostatic force behaviors. Moreover, we studied the binding probability between IgG and anti-IgG in a 1 µm² MIMIC system using topographic and electrostatic signals for label-free detection. We showed that the patterned proteins can be used for immunoassay of proteins on the functional substrate, and that they can also be used for bioelectronics device applications, indicating distinct advantages with regard to accuracy and label-free detection.

  2. Selection of different reaction channels in 6Li induced fusion reaction by a powerful combination of a charged particle array and a high-resolution gamma spectrometer

    NASA Astrophysics Data System (ADS)

    Zhang, G. X.; Hu, S. P.; Zhang, G. L.; Zhang, H. Q.; Yao, Y. J.; Huang, Z.; Wang, M. L.; Sun, H. B.; Valiente-Dobòn, J. J.; Testov, D.; Goasduff, A.; John, P. R.; Siciliano, M.; Galtarosa, F.; Francesco, R.; Mengoni, D.; Bazzacco, D.; Li, E. T.; Hao, X.

    2018-05-01

    Investigation of the effects of breakup and transfer of weakly bound nuclei on the fusion process has been an active research topic in the past several years. In comparison with radioactive ion beams (RIBs), the beam intensities of stable weakly bound nuclei such as 6,7Li and 9Be, which have significant breakup probability, are orders of magnitude higher. Precise fusion measurements induced by these nuclei have already been performed. However, the conclusions regarding the reaction dynamics remain unclear and contradictory. In order to achieve a proper understanding of the influence of breakup and transfer of weakly bound projectiles on the fusion process, the 6Li+89Y experiment, with incident energies of 22 MeV and 34 MeV, was performed with the Galileo array in combination with the Si-ball EUCLIDES at Legnaro National Laboratory (LNL) in Italy. Using coincidences between the charged particles and γ-rays, the different reaction channels can be clearly identified.

  3. Adaptive Neural Tracking Control for Switched High-Order Stochastic Nonlinear Systems.

    PubMed

    Zhao, Xudong; Wang, Xinyong; Zong, Guangdeng; Zheng, Xiaolong

    2017-10-01

    This paper deals with adaptive neural tracking control design for a class of switched high-order stochastic nonlinear systems with unknown uncertainties and arbitrary deterministic switching. The considered issues are: 1) completely unknown uncertainties; 2) stochastic disturbances; and 3) a high-order nonstrict-feedback system structure. The considered mathematical models can represent many practical systems in engineering. By exploiting the approximation ability of neural networks, the common stochastic Lyapunov function method, and an improved adding-a-power-integrator technique, an adaptive state feedback controller with multiple adaptive laws is systematically designed for the systems. Subsequently, a controller with only two adaptive laws is proposed to solve the problem of overparameterization. Under the designed controllers, all the signals in the closed-loop system are bounded-input bounded-output stable in probability, and the system output can almost surely track the target trajectory within a specified bounded error. Finally, simulation results are presented to show the effectiveness of the proposed approaches.

  4. Scale-Invariant Transition Probabilities in Free Word Association Trajectories

    PubMed Central

    Costa, Martin Elias; Bonomo, Flavia; Sigman, Mariano

    2009-01-01

    Free-word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set out to determine a precise characterization of this space, generating a large number of word association trajectories in a web-implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model since the cycling probability was very high (16% of order-2 cycles), implying a majority of short-range associations, whereas the memory loss was very rapid (converging to the asymptotic value in ∼7 steps), which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted for by a model in which transitions are determined by a scale-invariant probability distribution. PMID:19826622
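
    The flavor of such a scale-invariant transition rule can be sketched as follows (a toy illustration under stated assumptions, not the authors' fitted model: a random ranking of associates stands in for the corpus co-occurrence graph, and gamma is a free parameter):

      import numpy as np

      def scale_invariant_walk(n_words=50, steps=20, gamma=1.0, seed=0):
          """Trajectory in which the rank-r associate of the current word is
          chosen with probability proportional to r**(-gamma)."""
          rng = np.random.default_rng(seed)
          # each word gets a ranked list of associates; here a random permutation
          # of the other words stands in for ranking by co-occurrence strength
          ranked = {w: rng.permutation([v for v in range(n_words) if v != w])
                    for w in range(n_words)}
          r = np.arange(1, n_words, dtype=float)        # ranks 1 .. n_words-1
          p = r ** (-gamma)
          p /= p.sum()
          walk = [0]
          for _ in range(steps):
              walk.append(int(ranked[walk[-1]][rng.choice(n_words - 1, p=p)]))
          return walk

      print(scale_invariant_walk())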

  5. Study design in high-dimensional classification analysis.

    PubMed

    Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen

    2016-10-01

    Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Markovian Anderson Model: Bounds for the Rate of Propagation

    NASA Astrophysics Data System (ADS)

    Tcheremchantsev, Serguei

    We consider the Anderson model in with potentials whose values at any site of the lattice are Markovian independent random functions of time. For solutions to the time-dependent Schrödinger equation we show under some conditions that with probability 1 where for d=1,2 and for .

  7. The Biharmonic Oscillator and Asymmetric Linear Potentials: From Classical Trajectories to Momentum-Space Probability Densities in the Extreme Quantum Limit

    ERIC Educational Resources Information Center

    Ruckle, L. J.; Belloni, M.; Robinett, R. W.

    2012-01-01

    The biharmonic oscillator and the asymmetric linear well are two confining power-law-type potentials for which complete bound-state solutions are possible in both classical and quantum mechanics. We examine these problems in detail, beginning with studies of their trajectories in position and momentum space, evaluation of the classical probability…

  8. Molecular indicators for palaeoenvironmental change in a Messinian evaporitic sequence (Vena del Gesso, Italy). II: High-resolution variations in abundances and 13C contents of free and sulphur-bound carbon skeletons in a single marl bed

    NASA Technical Reports Server (NTRS)

    Kenig, F.; Damste, J. S.; Frewin, N. L.; Hayes, J. M.; De Leeuw, J. W.

    1995-01-01

    The extractable organic matter of 10 immature samples from a marl bed of one evaporitic cycle of the Vena del Gesso sediments (Gessoso-solfifera Fm., Messinian, Italy) was analyzed quantitatively for free hydrocarbons and organic sulphur compounds. Nickel boride was used as a desulphurizing agent to recover sulphur-bound lipids from the polar and asphaltene fractions. Carbon isotopic compositions (delta vs PDB) of free hydrocarbons and of S-bound hydrocarbons were also measured. Relationships between these carbon skeletons, precursor biolipids, and the organisms producing them could then be examined. Concentrations of S-bound lipids and free hydrocarbons and their delta values were plotted vs depth in the marl bed and the profiles were interpreted in terms of variations in source organisms, 13C contents of the carbon source, and environmentally induced changes in isotopic fractionation. The overall range of delta values measured was 24.7‰, from -11.6‰ for a component derived from green sulphur bacteria (Chlorobiaceae) to -36.3‰ for a lipid derived from purple sulphur bacteria (Chromatiaceae). Deconvolution of mixtures of components deriving from multiple sources (green and purple sulphur bacteria, coccolithophorids, microalgae and higher plants) was sometimes possible because both quantitative and isotopic data were available and because either the free or S-bound pool sometimes appeared to contain material from a single source. Several free n-alkanes and S-bound lipids appeared to be specific products of upper-water-column primary producers (i.e. algae and cyanobacteria). Others derived from anaerobic photoautotrophs and from heterotrophic protozoa (ciliates), which apparently fed partly on Chlorobiaceae. Four groups of n-alkanes produced by algae or cyanobacteria were also recognized based on systematic variations of abundance and isotopic composition with depth. For hydrocarbons probably derived from microalgae, isotopic variations are well correlated with those of total organic carbon. A resistant aliphatic biomacromolecule produced by microalgae is, therefore, probably an important component of the kerogen. These variations reflect changes in the depositional environment and early diagenetic transformations. Changes in the concentrations of S-bound lipids induced by variations in conditions favourable for sulphurization were discriminated from those related to variations in primary producer assemblages. The water column of the lagoonal basin was stratified and photic zone anoxia occurred during the early and middle stages of marl deposition. During the last stage of the marl deposition the stratification collapsed due to a significant shallowing of the water column. Contributions from anaerobic photoautotrophs were apparently associated with variations in depth of the chemocline.

  9. Molecular indicators for palaeoenvironmental change in a Messinian evaporitic sequence (Vena del Gesso, Italy). II: High-resolution variations in abundances and 13C contents of free and sulphur-bound carbon skeletons in a single marl bed.

    PubMed

    Kenig, F; Damsté, J S; Frewin, N L; Hayes, J M; De Leeuw, J W

    1995-06-01

    The extractable organic matter of 10 immature samples from a marl bed of one evaporitic cycle of the Vena del Gesso sediments (Gessoso-solfifera Fm., Messinian, Italy) was analyzed quantitatively for free hydrocarbons and organic sulphur compounds. Nickel boride was used as a desulphurizing agent to recover sulphur-bound lipids from the polar and asphaltene fractions. Carbon isotopic compositions (delta vs PDB) of free hydrocarbons and of S-bound hydrocarbons were also measured. Relationships between these carbon skeletons, precursor biolipids, and the organisms producing them could then be examined. Concentrations of S-bound lipids and free hydrocarbons and their delta values were plotted vs depth in the marl bed and the profiles were interpreted in terms of variations in source organisms, 13C contents of the carbon source, and environmentally induced changes in isotopic fractionation. The overall range of delta values measured was 24.7‰, from -11.6‰ for a component derived from green sulphur bacteria (Chlorobiaceae) to -36.3‰ for a lipid derived from purple sulphur bacteria (Chromatiaceae). Deconvolution of mixtures of components deriving from multiple sources (green and purple sulphur bacteria, coccolithophorids, microalgae and higher plants) was sometimes possible because both quantitative and isotopic data were available and because either the free or S-bound pool sometimes appeared to contain material from a single source. Several free n-alkanes and S-bound lipids appeared to be specific products of upper-water-column primary producers (i.e. algae and cyanobacteria). Others derived from anaerobic photoautotrophs and from heterotrophic protozoa (ciliates), which apparently fed partly on Chlorobiaceae. Four groups of n-alkanes produced by algae or cyanobacteria were also recognized based on systematic variations of abundance and isotopic composition with depth. For hydrocarbons probably derived from microalgae, isotopic variations are well correlated with those of total organic carbon. A resistant aliphatic biomacromolecule produced by microalgae is, therefore, probably an important component of the kerogen. These variations reflect changes in the depositional environment and early diagenetic transformations. Changes in the concentrations of S-bound lipids induced by variations in conditions favourable for sulphurization were discriminated from those related to variations in primary producer assemblages. The water column of the lagoonal basin was stratified and photic zone anoxia occurred during the early and middle stages of marl deposition. During the last stage of the marl deposition the stratification collapsed due to a significant shallowing of the water column. Contributions from anaerobic photoautotrophs were apparently associated with variations in depth of the chemocline.

  10. Wavefunctions, quantum diffusion, and scaling exponents in golden-mean quasiperiodic tilings.

    PubMed

    Thiem, Stefanie; Schreiber, Michael

    2013-02-20

    We study the properties of wavefunctions and the wavepacket dynamics in quasiperiodic tight-binding models in one, two, and three dimensions. The atoms in the one-dimensional quasiperiodic chains are coupled by weak and strong bonds aligned according to the Fibonacci sequence. The associated d-dimensional quasiperiodic tilings are constructed from the direct product of d such chains, which yields either the hypercubic tiling or the labyrinth tiling. This approach allows us to consider fairly large systems numerically. We show that the wavefunctions of the system are multifractal and that their properties can be related to the structure of the system in the regime of strong quasiperiodic modulation by a renormalization group (RG) approach. We also study the dynamics of wavepackets to get information about the electronic transport properties. In particular, we investigate the scaling behaviour of the return probability of the wavepacket with time. Applying again the RG approach we show that in the regime of strong quasiperiodic modulation the return probability is governed by the underlying quasiperiodic structure. Further, we also discuss lower bounds for the scaling exponent of the width of the wavepacket and propose a modified lower bound for the absolute continuous regime.

  11. Multi-stage decoding for multi-level block modulation codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. Error performance of codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10(exp -6). Multi-stage decoding of multi-level modulation codes thus offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.

  12. Synthetic membrane-targeted antibiotics.

    PubMed

    Vooturi, S K; Firestine, S M

    2010-01-01

    Antimicrobial resistance continues to evolve and presents serious challenges in the therapy of both nosocomial and community-acquired infections. The rise of resistant strains like methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Staphylococcus aureus (VRSA) and vancomycin-resistant enterococci (VRE) suggests that antimicrobial resistance is an inevitable evolutionary response to antimicrobial use. This highlights the tremendous need for antibiotics against new bacterial targets. Agents that target the integrity of the bacterial membrane are relatively novel in the clinical armamentarium. Daptomycin, a lipopeptide, is a classical example of a membrane-targeted antibiotic. Nature has also utilized this tactic. Antimicrobial peptides (AMPs), which are found in all kingdoms, function primarily by permeabilizing the bacterial membrane. AMPs have several advantages over existing antibiotics including a broad spectrum of activity, rapid bactericidal activity, no cross-resistance with the existing antibiotics and a low probability for developing resistance. Currently, a small number of peptides have been developed for clinical use but therapeutic applications are limited because of poor bioavailability and high manufacturing cost. However, their broad specificity, potent activity and lower probability for resistance have spurred the search for synthetic mimetics of antimicrobial peptides as membrane-active antibiotics. In this review, we will discuss the different classes of synthetic membrane-targeted antibiotics published since 2004.

  13. Low-dimensional Representation of Error Covariance

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan

    2000-01-01

    Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
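
    The forecast/analysis iteration described above can be reproduced in a few lines (a hedged sketch with toy dynamics, not the advection or baroclinic wave models of the paper): iterate the Kalman filter covariance recursion to steady state and inspect how much analysis error variance the leading eigenmodes carry.

      import numpy as np

      def steady_state_analysis_covariance(A, H, Q, R, tol=1e-10, max_iter=10000):
          """Iterate the forecast/analysis (Kalman filter) covariance cycle of a
          linear time-invariant system until the analysis error covariance P_a
          stops changing."""
          n = A.shape[0]
          P_a = np.eye(n)
          for _ in range(max_iter):
              P_f = A @ P_a @ A.T + Q                    # forecast error covariance
              K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
              P_a_new = (np.eye(n) - K @ H) @ P_f        # analysis error covariance
              if np.linalg.norm(P_a_new - P_a) < tol:
                  return P_a_new
              P_a = P_a_new
          return P_a

      # toy example: damped cyclic-shift dynamics, only the first component observed
      n = 20
      A = 0.95 * np.roll(np.eye(n), 1, axis=1)
      H = np.zeros((1, n)); H[0, 0] = 1.0
      Q = 0.01 * np.eye(n)
      R = np.array([[0.1]])
      P_a = steady_state_analysis_covariance(A, H, Q, R)
      evals = np.linalg.eigvalsh(P_a)[::-1]
      print("fraction of analysis error variance in the 3 leading modes:",
            round(evals[:3].sum() / evals.sum(), 3))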

  14. Coherent exciton transport in dendrimers and continuous-time quantum walks

    NASA Astrophysics Data System (ADS)

    Mülken, Oliver; Bierbaum, Veronika; Blumen, Alexander

    2006-03-01

    We model coherent exciton transport in dendrimers by continuous-time quantum walks. For dendrimers up to the second generation the coherent transport shows perfect recurrences when the initial excitation starts at the central node. For larger dendrimers, the recurrence ceases to be perfect, a fact which resembles results for discrete quantum carpets. Moreover, depending on the initial excitation site, we find that the coherent transport to certain nodes of the dendrimer has a very low probability. When the initial excitation starts from the central node, the problem can be mapped onto a line which simplifies the computational effort. Furthermore, the long time average of the quantum mechanical transition probabilities between pairs of nodes shows characteristic patterns and allows us to classify the nodes into clusters with identical limiting probabilities. For the (space) average of the quantum mechanical probability to be still or to be again at the initial site, we obtain, based on the Cauchy-Schwarz inequality, a simple lower bound which depends only on the eigenvalue spectrum of the Hamiltonian.
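
    A hedged sketch of the underlying computation (eigendecomposition of the Hamiltonian, here the Laplacian of a short line graph standing in for the central-excitation dendrimer problem that, as noted above, maps onto a line; it is not the authors' code):

      import numpy as np

      def ctqw_probabilities(H, j, t):
          """pi_kj(t) = |<k| exp(-iHt) |j>|^2 for a continuous-time quantum walk."""
          evals, V = np.linalg.eigh(H)
          U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T
          return np.abs(U[:, j]) ** 2

      def long_time_average(H, j):
          """Long-time averaged transition probabilities chi_kj; assumes a
          non-degenerate spectrum, so chi_kj = sum_n |<k|n>|^2 |<n|j>|^2."""
          _, V = np.linalg.eigh(H)
          W = np.abs(V) ** 2
          return W @ W[j, :]

      # toy example: 4-site line graph, excitation starting at site 0
      A = np.zeros((4, 4))
      for i in range(3):
          A[i, i + 1] = A[i + 1, i] = 1.0
      H = np.diag(A.sum(axis=1)) - A          # graph Laplacian as Hamiltonian
      print("pi_k0(t=1):", np.round(ctqw_probabilities(H, 0, 1.0), 3))
      print("chi_k0    :", np.round(long_time_average(H, 0), 3))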

  15. A modified hybrid uncertain analysis method for dynamic response field of the LSOAAC with random and interval parameters

    NASA Astrophysics Data System (ADS)

    Zi, Bin; Zhou, Bin

    2016-07-01

    For the prediction of dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas, the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q . In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L .
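
    The contrast with the hybrid Monte Carlo method can be sketched with a crude double-loop scheme (an illustration only, with a made-up response function; it is not the MHUAM and does not use the Taylor or Neumann expansions described above): random parameters are sampled, interval parameters are scanned over their bounds, and the extremes over the scan bound the mean response.

      import numpy as np

      def hybrid_bounds(f, random_sampler, y_lo, y_hi, n_random=2000, n_grid=21, seed=0):
          """Double-loop bounds on the mean response of f(x, y): x is a random
          parameter (sampled), y an interval parameter (scanned on a grid)."""
          rng = np.random.default_rng(seed)
          xs = random_sampler(rng, n_random)
          means = [np.mean(f(xs, y)) for y in np.linspace(y_lo, y_hi, n_grid)]
          return min(means), max(means)

      # toy usage: Gaussian random parameter x, interval parameter y in [0.8, 1.2]
      lo, hi = hybrid_bounds(lambda x, y: y * np.sin(x) + x ** 2,
                             lambda rng, n: rng.normal(1.0, 0.1, n), 0.8, 1.2)
      print(f"bounds on the mean response: [{lo:.3f}, {hi:.3f}]")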

  16. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e. an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a non trivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  17. Preliminary geological assessment of the Northern edge of ultimi lobe, Mars South Polar layered deposits

    USGS Publications Warehouse

    Murray, B.; Koutnik, M.; Byrne, S.; Soderblom, L.; Herkenhoff, K.; Tanaka, K.L.

    2001-01-01

    We have examined the local base of the south polar layered deposits (SPLD) exposed in the bounding scarp near 72°-74°S, 215°-230°W where there is a clear unconformable contact with older units. Sections of layering up to a kilometer thick were examined along the bounding scarp, permitting an estimate of the thinnest individual layers yet reported in the SPLD. Rhythmic layering is also present locally, suggesting a similarly rhythmic variation in environmental conditions and a recorded climate signal at least in some SPLD strata. Locally, angular unconformities may be present, as has been reported for the north polar layered deposits (NPLD) and may likewise imply intervals of subaerial erosion in the SPLD. The outcropping layers display a broad range of weathering styles and may reflect more diverse conditions of deposition, erosion, and diagenesis than might have been expected from simple aeolian deposition modulated only by astronomically driven climatic fluctuations. An unexpected finding of our study is the presence of locally abundant small pits close to the bounding scarp. These quasi-circular, negative, rimless features probably originated as impact craters and were modified to varying degrees by local endogenic processes, as well as locally variable blanketing. A nominal exposure age for the most heavily cratered region in our study area is about 2 million years, and the crater statistics appear consistent with those for the overall SPLD, although there are large uncertainties in the absolute ages implied by the crater size-frequency statistics, as in all martian crater ages. Another new finding is the presence of mass wasting features along the steepest portion of the retreating bounding scarp as well as a number of examples of brittle fracture, consistent with large-scale slumping along the bounding scarp and probably also ancient basal sliding. Both subhorizontal and high angle faults appear to be exposed in the bounding scarp, but the dips of the faults are poorly constrained. These fractures, along with the relatively undeformed layers between them, suggest to us that whatever horizontal motion may have taken place outward from the central cap region was accomplished by ancient basal sliding rather than large-scale glacial-like flow or ice migration by differential ablation, as proposed recently for the northern permanent cap and underlying NPLD. We have also obtained the first direct estimate of the regional dip of the SPLD, around 2-3° outward (northward) in one area. © 2001 Elsevier Science.

  18. Mission hazard assessment for STARS Mission 1 (M1) in the Marshall Islands area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outka, D.E.; LaFarge, R.A.

    1993-07-01

    A mission hazard assessment has been performed for the Strategic Target System Mission 1 (known as STARS M1) for hazards due to potential debris impact in the Marshall Islands area. The work was performed at Sandia National Laboratories as a result of discussions with Kwajalein Missile Range (KMR) safety officers. The STARS M1 rocket will be launched from the Kauai Test Facility (KTF), Hawaii, and deliver two payloads to within the viewing range of sensors located on the Kwajalein Atoll. The purpose of this work has been to estimate upper bounds for expected casualty rates and impact probability for the Marshall Islands areas which adjoin the STARS M1 instantaneous impact point (IIP) trace. This report documents the methodology and results of the analysis.

  19. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.

  20. Consistent Tolerance Bounds for Statistical Distributions

    NASA Technical Reports Server (NTRS)

    Mezzacappa, M. A.

    1983-01-01

    Assumption that sample comes from population with particular distribution is made with confidence C if data lie between certain bounds. These "confidence bounds" depend on C and assumption about distribution of sampling errors around regression line. Graphical test criteria using tolerance bounds are applied in industry where statistical analysis influences product development and use. Applied to evaluate equipment life.

  1. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense.

    PubMed

    Nishikawa, C Y; Araújo, L M; Kadowaki, M A S; Monteiro, R A; Steffens, M B R; Pedrosa, F O; Souza, E M; Chubatsu, L S

    2012-02-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ(54) co-factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH(4)Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription.

  2. Expression and characterization of an N-truncated form of the NifA protein of Azospirillum brasilense

    PubMed Central

    Nishikawa, C.Y.; Araújo, L.M.; Kadowaki, M.A.S.; Monteiro, R.A.; Steffens, M.B.R.; Pedrosa, F.O.; Souza, E.M.; Chubatsu, L.S.

    2012-01-01

    Azospirillum brasilense is a nitrogen-fixing bacterium associated with important agricultural crops such as rice, wheat and maize. The expression of genes responsible for nitrogen fixation (nif genes) in this bacterium is dependent on the transcriptional activator NifA. This protein contains three structural domains: the N-terminal domain is responsible for the negative control by fixed nitrogen; the central domain interacts with the RNA polymerase σ54 factor and the C-terminal domain is involved in DNA binding. The central and C-terminal domains are linked by the interdomain linker (IDL). A conserved four-cysteine motif encompassing the end of the central domain and the IDL is probably involved in the oxygen-sensitivity of NifA. In the present study, we have expressed, purified and characterized an N-truncated form of A. brasilense NifA. The protein expression was carried out in Escherichia coli and the N-truncated NifA protein was purified by chromatography using an affinity metal-chelating resin followed by a heparin-bound resin. Protein homogeneity was determined by densitometric analysis. The N-truncated protein activated in vivo nifH::lacZ transcription regardless of fixed nitrogen concentration (absence or presence of 20 mM NH4Cl) but only under low oxygen levels. On the other hand, the aerobically purified N-truncated NifA protein bound to the nifB promoter, as demonstrated by an electrophoretic mobility shift assay, implying that DNA-binding activity is not strictly controlled by oxygen levels. Our data show that, while the N-truncated NifA is inactive in vivo under aerobic conditions, it still retains DNA-binding activity, suggesting that the oxidized form of NifA bound to DNA is not competent to activate transcription. PMID:22267004

  3. Quantum Bayesian networks with application to games displaying Parrondo's paradox

    NASA Astrophysics Data System (ADS)

    Pejic, Michael

    Bayesian networks and their accompanying graphical models are widely used for prediction and analysis across many disciplines. We will reformulate these in terms of linear maps. This reformulation will suggest a natural extension, which we will show is equivalent to standard textbook quantum mechanics. Therefore, this extension will be termed quantum. However, the term quantum should not be taken to imply this extension is necessarily only of utility in situations traditionally thought of as in the domain of quantum mechanics. In principle, it may be employed in any modelling situation, say forecasting the weather or the stock market; it is up to experiment to determine if this extension is useful in practice. Even restricting to the domain of quantum mechanics, with this new formulation the advantages of Bayesian networks can be maintained for models incorporating quantum and mixed classical-quantum behavior. The use of these will be illustrated by various basic examples. Parrondo's paradox refers to the situation where two multi-round games with a fixed winning criterion, both with probability greater than one-half for one player to win, are combined. Using a possibly biased coin to determine the rule to employ for each round, paradoxically, the previously losing player now wins the combined game with probability greater than one-half. Using the extended Bayesian networks, we will formulate and analyze classical observed, classical hidden, and quantum versions of a game that displays this paradox, finding bounds for the discrepancy from naive expectations for the occurrence of the paradox. A quantum paradox inspired by Parrondo's paradox will also be analyzed. We will prove a bound for the discrepancy from naive expectations for this paradox as well. Games involving quantum walks that achieve this bound will be presented.
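
    For readers unfamiliar with the paradox, the classical textbook version is easy to simulate (a sketch with the standard game parameters, not the Bayesian-network or quantum games analyzed in the thesis):

      import random

      def play(strategy, rounds=200000, eps=0.005, seed=1):
          """Average capital change per round for Parrondo's classical games.

          Game A: win with probability 1/2 - eps.
          Game B: if capital is a multiple of 3, win with prob 1/10 - eps,
                  otherwise win with prob 3/4 - eps.
          strategy: 'A', 'B', or 'random' (pick A or B by a fair coin each round)."""
          rng = random.Random(seed)
          capital = 0
          for _ in range(rounds):
              game = strategy if strategy in ("A", "B") else rng.choice("AB")
              if game == "A":
                  p_win = 0.5 - eps
              else:
                  p_win = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
              capital += 1 if rng.random() < p_win else -1
          return capital / rounds

      for s in ("A", "B", "random"):
          # typically A and B alone drift down while the random mix drifts up
          print(s, play(s))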

  4. NK1 receptor fused to beta-arrestin displays a single-component, high-affinity molecular phenotype.

    PubMed

    Martini, Lene; Hastrup, Hanne; Holst, Birgitte; Fraile-Ramos, Alberto; Marsh, Mark; Schwartz, Thue W

    2002-07-01

    Arrestins are cytosolic proteins that, upon stimulation of seven transmembrane (7TM) receptors, terminate signaling by binding to the receptor, displacing the G protein and targeting the receptor to clathrin-coated pits. Fusion of beta-arrestin1 to the C-terminal end of the neurokinin NK1 receptor resulted in a chimeric protein that was expressed to some extent on the cell surface but also accumulated in transferrin-labeled recycling endosomes independently of agonist stimulation. As expected, the fusion protein was almost totally silenced with respect to agonist-induced signaling through the normal Gq/G11 and Gs pathways. The NK1-beta-arrestin1 fusion construct bound nonpeptide antagonists with increased affinity but surprisingly also bound two types of agonists, substance P and neurokinin A, with high, normal affinity. In the wild-type NK1 receptor, neurokinin A (NKA) competes for binding against substance P and especially against antagonists with up to 1000-fold lower apparent affinity than determined in functional assays and in homologous binding assays. When the NK1 receptor was closely fused to G proteins, this phenomenon was eliminated among agonists, but the agonists still competed with low affinity against antagonists. In contrast, in the NK1-beta-arrestin1 fusion protein, all ligands bound with similar affinity independent of the choice of radioligand and with Hill coefficients near unity. We conclude that the NK1 receptor in complex with arrestin is in a high-affinity, stable, agonist-binding form probably best suited to structural analysis and that the receptor can display binding properties that are nearly theoretically ideal when it is forced to complex with only a single intracellular protein partner.

  5. Structural geology of the proposed site area for a high-level radioactive waste repository, Yucca Mountain, Nevada

    USGS Publications Warehouse

    Potter, C.J.; Day, W.C.; Sweetkind, D.S.; Dickerson, R.P.

    2004-01-01

    Geologic mapping and fracture studies have documented the fundamental patterns of joints and faults in the thick sequence of rhyolite tuffs at Yucca Mountain, Nevada, the proposed site of an underground repository for high-level radioactive waste. The largest structures are north-striking, block-bounding normal faults (with a subordinate left-lateral component) that divide the mountain into numerous 1-4-km-wide panels of gently east-dipping strata. Block-bounding faults, which underwent Quaternary movement as well as earlier Neogene movement, are linked by dominantly northwest-striking relay faults, especially in the more extended southern part of Yucca Mountain. Intrablock faults are commonly short and discontinuous, except those on the more intensely deformed margins of the blocks. Lithologic properties of the local tuff stratigraphy strongly control the mesoscale fracture network, and locally the fracture network has a strong influence on the nature of intrablock faulting. The least faulted part of Yucca Mountain is the north-central part, the site of the proposed repository. Although bounded by complex normal-fault systems, the 4-km-wide central block contains only sparse intrablock faults. Locally intense jointing appears to be strata-bound. The complexity of deformation and the magnitude of extension increase in all directions away from the proposed repository volume, especially in the southern part of the mountain where the intensity of deformation and the amount of vertical-axis rotation increase markedly. Block-bounding faults were active at Yucca Mountain during and after eruption of the 12.8-12.7 Ma Paintbrush Group, and significant motion on these faults postdated the 11.6 Ma Rainier Mesa Tuff. Diminished fault activity continued into Quaternary time. Roughly half of the stratal tilting in the site area occurred after 11.6 Ma, probably synchronous with the main pulse of vertical-axis rotation, which occurred between 11.6 and 11.45 Ma. Studies of sequential formation of tectonic joints, in the context of regional paleostress studies, indicate that north- and northwest-striking joint sets formed coevally with the main faulting episode during regional east-northeast-west-southwest extension and that a prominent northeast-striking joint set formed later, probably after 9 Ma. These structural analyses contribute to the understanding of several important issues at Yucca Mountain, including potential hydrologic pathways, seismic hazards, and fault-displacement hazards. © 2004 Geological Society of America.

  6. Inequalities between Kappa and Kappa-Like Statistics for "k x k" Tables

    ERIC Educational Resources Information Center

    Warrens, Matthijs J.

    2010-01-01

    The paper presents inequalities between four descriptive statistics that can be expressed in the form [P-E(P)]/[1-E(P)], where P is the observed proportion of agreement of a "k x k" table with identical categories, and E(P) is a function of the marginal probabilities. Scott's "pi" is an upper bound of Goodman and Kruskal's "lambda" and a…

  7. Sine-gordon type field in spacetime of arbitrary dimension. II: Stochastic quantization

    NASA Astrophysics Data System (ADS)

    Kirillov, A. I.

    1995-11-01

    Using the theory of Dirichlet forms, we prove the existence of a distribution-valued diffusion process such that the Nelson measure of a field with a bounded interaction density is its invariant probability measure. A Langevin equation in mathematically correct form is formulated which is satisfied by the process. The drift term of the equation is interpreted as a renormalized Euclidean current operator.

  8. Using Patterns of Summed Scores in Paper-and-Pencil Tests and Computer-Adaptive Tests to Detect Misfitting Item Score Patterns

    ERIC Educational Resources Information Center

    Meijer, Rob R.

    2004-01-01

    Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…

  9. Low Complexity Track Initialization and Fusion for Multi-Modal Sensor Networks

    DTIC Science & Technology

    2012-11-08

    feature was demonstrated via the simulations. Aerospace 2011 work further documents our investigation of multiple target tracking filters in... bounds that determine how well a sensor network can resolve and localize multiple targets as a function of the operating parameters such as sensor... probability hypothesis density (PHD) filter for binary measurements using proximity sensors. Subject terms: proximity sensors, PHD filter, multiple...

  10. QMRA for Drinking Water: 2. The Effect of Pathogen Clustering in Single-Hit Dose-Response Models.

    PubMed

    Nilsen, Vegard; Wyller, John

    2016-01-01

    Spatial and/or temporal clustering of pathogens will invalidate the commonly used assumption of Poisson-distributed pathogen counts (doses) in quantitative microbial risk assessment. In this work, the theoretically predicted effect of spatial clustering in conventional "single-hit" dose-response models is investigated by employing the stuttering Poisson distribution, a very general family of count distributions that naturally models pathogen clustering and contains the Poisson and negative binomial distributions as special cases. The analysis is facilitated by formulating the dose-response models in terms of probability generating functions. It is shown formally that the theoretical single-hit risk obtained with a stuttering Poisson distribution is lower than that obtained with a Poisson distribution, assuming identical mean doses. A similar result holds for mixed Poisson distributions. Numerical examples indicate that the theoretical single-hit risk is fairly insensitive to moderate clustering, though the effect tends to be more pronounced for low mean doses. Furthermore, using Jensen's inequality, an upper bound on risk is derived that tends to better approximate the exact theoretical single-hit risk for highly overdispersed dose distributions. The bound holds with any dose distribution (characterized by its mean and zero inflation index) and any conditional dose-response model that is concave in the dose variable. Its application is exemplified with published data from Norovirus feeding trials, for which some of the administered doses were prepared from an inoculum of aggregated viruses. The potential implications of clustering for dose-response assessment as well as practical risk characterization are discussed. © 2016 Society for Risk Analysis.
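
    The PGF formulation makes the single-hit calculation one line per dose distribution: P(infection) = 1 - G(1 - r), with G the dose PGF and r the per-organism infection probability. The sketch below (illustrative parameter values only) compares a Poisson dose with a negative binomial dose of equal mean, reproducing the qualitative result that clustering lowers the predicted single-hit risk.

      import numpy as np

      def single_hit_risk_poisson(mu, r):
          """P(infection) = 1 - G(1 - r), Poisson PGF G(s) = exp(mu * (s - 1))."""
          return 1.0 - np.exp(-r * mu)

      def single_hit_risk_negbin(mu, k, r):
          """Negative binomial dose with mean mu and dispersion k (clustering
          grows as k shrinks); PGF G(s) = (1 + mu * (1 - s) / k) ** (-k)."""
          return 1.0 - (1.0 + r * mu / k) ** (-k)

      mu, r = 10.0, 0.05          # mean dose and per-organism infection probability
      print("Poisson dose        :", single_hit_risk_poisson(mu, r))
      for k in (10.0, 1.0, 0.1):  # stronger clustering as k -> 0
          print(f"neg. binomial k={k:4}:", single_hit_risk_negbin(mu, k, r))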

  11. Monoamines and assessment of risks.

    PubMed

    Takahashi, Hidehiko

    2012-12-01

    Over the past decade, neuroeconomics studies utilizing neurophysiology methods (fMRI or EEG) have flourished, revealing the neural basis of 'boundedly rational' or 'irrational' decision-making that violates normative theory. The next question is how modulatory neurotransmission is involved in these central processes. Here I focused on recent efforts to understand how central monoamine transmission is related to nonlinear probability weighting and loss aversion, central features of prospect theory, which is a leading alternative to normative theory for decision-making under risk. Circumstantial evidence suggests that dopamine tone might be related to distortion of subjective reward probability and noradrenaline and serotonin tone might influence aversive emotional reaction to potential loss. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Production of Aflatoxin on Soybeans

    PubMed Central

    Gupta, S. K.; Venkitasubramanian, T. A.

    1975-01-01

    Probable factors influencing resistance to aflatoxin synthesis in soybeans have been investigated by using cultures of Aspergillus parasiticus NRRL 3240. Soybeans contain a small amount of zinc (0.01 μg/g) bound to phytic acid. Autoclaving soybeans at 15 pounds per square inch for 15 min increases aflatoxin production, probably by making zinc available. Addition of zinc to both autoclaved and nonautoclaved soybeans promotes aflatoxin production. However, addition of varying levels of phytic acid at a constant concentration of zinc depresses aflatoxin synthesis with an increase in the added phytic acid. In a synthetic medium known to give good yields of aflatoxin, the addition of phytic acid (10 mM) decreases aflatoxin synthesis. PMID:1171654

  13. Paucity of attractors in nonlinear systems driven with complex signals.

    PubMed

    Pethel, Shawn D; Blakely, Jonathan N

    2011-04-01

    We study the probability of multistability in a quadratic map driven repeatedly by a random signal of length N, where N is taken as a measure of the signal complexity. We first establish analytically that the number of coexisting attractors is bounded above by N. We then numerically estimate the probability p of a randomly chosen signal resulting in a multistable response as a function of N. Interestingly, with increasing drive signal complexity the system exhibits a paucity of attractors. That is, almost any drive signal beyond a certain complexity level will result in a single attractor response (p=0). This mechanism may play a role in allowing sensitive multistable systems to respond consistently to external influences.
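
    A crude way to probe this numerically (a sketch under strong assumptions, not the authors' procedure: the drive values are kept in a regime where responses tend to settle onto periodic orbits, so that comparing the per-period state sets of many initial conditions counts responses; chaotic responses would need a more careful test):

      import numpy as np

      def count_responses(N, trials=100, transient=3000, sample=64, seed=0):
          """Drive the quadratic (logistic) map x -> r_n x (1 - x) cyclically with a
          random parameter signal r_0..r_{N-1}; count distinct long-run responses
          by comparing the rounded per-period state sets of many initial conditions."""
          rng = np.random.default_rng(seed)
          r = rng.uniform(2.9, 3.55, size=N)          # random drive signal of length N
          x = rng.uniform(0.01, 0.99, size=trials)    # many initial conditions

          def one_period(x):
              for rn in r:
                  x = rn * x * (1.0 - x)
              return x

          for _ in range(transient):                  # let transients die out
              x = one_period(x)
          samples = np.empty((sample, trials))
          for i in range(sample):
              x = one_period(x)
              samples[i] = x
          # phase-independent signature of each trial: its sorted, rounded set of
          # states observed once per drive period
          signatures = {tuple(np.round(np.sort(samples[:, t]), 6)) for t in range(trials)}
          return len(signatures)

      for N in (1, 2, 4, 8, 16):
          print(f"N = {N:2d}: coexisting responses ~ {count_responses(N, seed=N)}")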

  14. Generalized Probabilistic Description of Noninteracting Identical Particles

    NASA Astrophysics Data System (ADS)

    Karczewski, Marcin; Markiewicz, Marcin; Kaszlikowski, Dagomir; Kurzyński, Paweł

    2018-02-01

    We investigate an operational description of identical noninteracting particles in multiports. In particular, we look for physically motivated restrictions that explain their bunching probabilities. We focus on a symmetric 3-port in which a triple of superquantum particles admitted by our generalized probabilistic framework would bunch with a probability of 3/4. The bosonic bound of 2/3 can then be restored by imposing the additional requirement of product evolution of certain input states. These states are characterized by the fact that, much like product states, their entropy equals the sum of entropies of their one-particle substates. This principle is, however, not enough to exclude the possibility of superquantum particles in higher-order multiports.
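
    The quoted bosonic value of 2/3 for a balanced, symmetric 3-port can be checked directly from the permanent rule for bosonic scattering (a sketch; the generalized probabilistic framework of the paper is not reproduced here):

      import math
      import numpy as np
      from itertools import permutations

      def permanent(M):
          """Permanent of a small square matrix, brute force over permutations."""
          n = M.shape[0]
          return sum(np.prod([M[i, p[i]] for i in range(n)])
                     for p in permutations(range(n)))

      # balanced (Fourier) 3-port: U[j, k] = omega**(j*k) / sqrt(3)
      omega = np.exp(2j * np.pi / 3)
      U = np.array([[omega ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

      # one photon per input port; probability that all three leave through output
      # port k is |perm(M_k)|^2 / 3!, where M_k repeats column k of U three times
      p_bunch = sum(abs(permanent(np.repeat(U[:, [k]], 3, axis=1))) ** 2 / math.factorial(3)
                    for k in range(3))
      print(f"full-bunching probability: {p_bunch:.4f}")   # -> 0.6667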

  15. A New Empirical Constraint on the Prevalence of Technological Species in the Universe

    NASA Astrophysics Data System (ADS)

    Frank, A.; Sullivan, W. T., III

    2016-05-01

    In this article, we address the cosmic frequency of technological species. Recent advances in exoplanet studies provide strong constraints on all astrophysical terms in the Drake equation. Using these and modifying the form and intent of the Drake equation, we set a firm lower bound on the probability that one or more technological species have evolved anywhere and at any time in the history of the observable Universe. We find that as long as the probability that a habitable zone planet develops a technological species is larger than ˜10-24, humanity is not the only time technological intelligence has evolved. This constraint has important scientific and philosophical consequences.

  16. A Looping-Based Model for Quenching Repression

    PubMed Central

    Pollak, Yaroslav; Goldberg, Sarah; Amit, Roee

    2017-01-01

    We model the regulatory role of proteins bound to looped DNA using a simulation in which dsDNA is represented as a self-avoiding chain, and proteins as spherical protrusions. We simulate long self-avoiding chains using a sequential importance sampling Monte-Carlo algorithm, and compute the probabilities for chain looping with and without a protrusion. We find that a protrusion near one of the chain’s termini reduces the probability of looping, even for chains much longer than the protrusion–chain-terminus distance. This effect increases with protrusion size, and decreases with protrusion-terminus distance. The reduced probability of looping can be explained via an eclipse-like model, which provides a novel inhibitory mechanism. We test the eclipse model on two possible transcription-factor occupancy states of the D. melanogaster eve 3/7 enhancer, and show that it provides a possible explanation for the experimentally-observed eve stripe 3 and 7 expression patterns. PMID:28085884

  17. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
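
    For orientation, the Gm = d setup with bound constraints can be written down in a few lines; the sketch below uses an off-the-shelf bounded least-squares solver as a stand-in and is not the MRE algorithm itself (MRE instead derives a full multivariate density by minimizing relative entropy subject to the bounds, the prior expectation and the data). The kernel, noise level and bounds are invented for illustration.

        import numpy as np
        from scipy.optimize import lsq_linear

        # Forward model G m = d: a simple Gaussian smoothing kernel (assumed example).
        n = 50
        x = np.linspace(0.0, 1.0, n)
        G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
        G /= G.sum(axis=1, keepdims=True)

        m_true = np.exp(-((x - 0.4) ** 2) / 0.005)                       # unknown model
        d = G @ m_true + np.random.default_rng(0).normal(0.0, 0.01, n)   # noisy data

        # Prior information analogous to MRE inputs: lower and upper bounds on m.
        lower, upper = np.zeros(n), np.full(n, 2.0)

        result = lsq_linear(G, d, bounds=(lower, upper))                 # stand-in solver
        print("max |m_est - m_true| =", float(np.abs(result.x - m_true).max()))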

  18. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.

  19. The Sparta Fault, Southern Greece: From segmentation and tectonic geomorphology to seismic hazard mapping and time dependent probabilities

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Ioannis D.; Roberts, Gerald P.; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel

    2013-06-01

    The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and post-glacial throw, emphasising how these parameters vary along strike. Qualitative analysis of catchment long profiles shows a significant difference in longitudinal convexity between the central part and both the southern and northern parts of the fault system, indicating that uplift rate varies along strike. Catchments are sensitive to differential uplift, as shown by the calculated differences in the steepness index ksn between the outer (ksn < 83) and central (121 < ksn < 138) parts of the fault system. Based on fault throw-rates and the bedrock geology, a seismic hazard map has been constructed that yields a locality-specific long-term earthquake recurrence record. Based on this map, the town of Sparta would experience a destructive event similar to that of 464 B.C. approximately every 1792 ± 458 years. Since no other major earthquake of M ~ 7.0 has been generated by this system since 464 B.C., a future event could be imminent. As a result, not only time-independent but also time-dependent probabilities, which incorporate the concept of the seismic cycle, have been calculated for the town of Sparta, showing a considerably higher time-dependent probability of 3.0 ± 1.5% over the next 30 years compared with the time-independent probability of 1.66%. Half of the hanging-wall area of the Sparta Fault can experience intensities ≥ IX, yet it belongs to the lowest category of seismic risk in the national seismic building code. In view of these relatively high calculated probabilities, a reassessment of the building code might be necessary.
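
    A quick consistency check on the quoted 1.66% figure (a standard Poisson-recurrence calculation, not taken from the paper): with a mean recurrence interval T ≈ 1792 yr, the time-independent probability of at least one event in the next Δt = 30 yr is

        P = 1 - e^{-\Delta t / T} = 1 - e^{-30/1792} \approx 0.0166 \approx 1.66\%.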

  20. The Sparta Fault, Southern Greece: From Segmentation and Tectonic Geomorphology to Seismic Hazard Mapping and Time Dependent Probabilities

    NASA Astrophysics Data System (ADS)

    Papanikolaou, Ioannis; Roberts, Gerald; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel

    2013-04-01

    The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and postglacial throw, emphasizing how these parameters vary along strike. Qualitative analysis of catchment long profiles shows a significant difference in longitudinal convexity between the central part and both the southern and northern parts of the fault system, indicating that uplift rate varies along strike. Catchments are sensitive to differential uplift, as shown by the calculated differences in the steepness index ksn between the outer (ksn < 83) and central (121 < ksn < 138) parts of the fault system.

  1. Cost analysis of measles in refugees arriving at Los Angeles International Airport from Malaysia

    PubMed Central

    Coleman, Margaret S.; Burke, Heather M.; Welstead, Bethany L.; Mitchell, Tarissa; Taylor, Eboni M.; Shapovalov, Dmitry; Maskery, Brian A.; Joo, Heesoo; Weinberg, Michelle

    2017-01-01

    Background On August 24, 2011, 31 US-bound refugees from Kuala Lumpur, Malaysia (KL) arrived in Los Angeles. One of them was diagnosed with measles post-arrival. He exposed others during the flight, and persons in the community while disembarking and seeking medical care. As a result, 9 cases of measles were identified. Methods We estimated costs of the response to this outbreak and conducted a comparative cost analysis examining what might have happened had all US-bound refugees been vaccinated before leaving Malaysia. Results State-by-state costs differed and variously included vaccination, hospitalization, medical visits, and contact tracing, with costs ranging from $621 to $35,115. The total of domestic and IOM Malaysia-reported costs for US-bound refugees was $137,505 [range: $134,531 - $142,777 from a sensitivity analysis]. Had all US-bound refugees been vaccinated while in Malaysia, it would have cost approximately $19,646 and could have prevented 8 measles cases. Conclusion A vaccination program ensuring complete vaccination of US-bound refugees could improve refugees' health, reduce importations of vaccine-preventable diseases into the United States, and avert measles response activities and costs. PMID:28068211

  2. Cost analysis of measles in refugees arriving at Los Angeles International Airport from Malaysia.

    PubMed

    Coleman, Margaret S; Burke, Heather M; Welstead, Bethany L; Mitchell, Tarissa; Taylor, Eboni M; Shapovalov, Dmitry; Maskery, Brian A; Joo, Heesoo; Weinberg, Michelle

    2017-05-04

    Background On August 24, 2011, 31 US-bound refugees from Kuala Lumpur, Malaysia (KL) arrived in Los Angeles. One of them was diagnosed with measles post-arrival. He exposed others during the flight, and persons in the community while disembarking and seeking medical care. As a result, 9 cases of measles were identified. Methods We estimated costs of the response to this outbreak and conducted a comparative cost analysis examining what might have happened had all US-bound refugees been vaccinated before leaving Malaysia. Results State-by-state costs differed and variously included vaccination, hospitalization, medical visits, and contact tracing, with costs ranging from $621 to $35,115. The total of domestic and IOM Malaysia-reported costs for US-bound refugees was $137,505 [range: $134,531 - $142,777 from a sensitivity analysis]. Had all US-bound refugees been vaccinated while in Malaysia, it would have cost approximately $19,646 and could have prevented 8 measles cases. Conclusion A vaccination program ensuring complete vaccination of US-bound refugees could improve refugees' health, reduce importations of vaccine-preventable diseases into the United States, and avert measles response activities and costs.

  3. The interplay of intrinsic and extrinsic bounded noises in biomolecular networks.

    PubMed

    Caravagna, Giulio; Mauri, Giancarlo; d'Onofrio, Alberto

    2013-01-01

    After long being treated as a nuisance to be filtered out, it has recently become clear that biochemical noise plays a complex, often fully functional, role in biomolecular networks. The influence of intrinsic and extrinsic noises on biomolecular networks has been investigated intensively in the last ten years, though contributions on the co-presence of both are sparse. Extrinsic noise is usually modeled as an unbounded white or colored Gaussian stochastic process, even though realistic stochastic perturbations are clearly bounded. In this paper we consider Gillespie-like stochastic models of nonlinear networks, i.e. the intrinsic noise, where the model jump rates are affected by colored bounded extrinsic noises synthesized by a suitable biochemical state-dependent Langevin system. These systems are described by a master equation, and a simulation algorithm to analyze them is derived. This new modeling paradigm should enlarge the class of systems amenable to modeling. We investigated the influence of both the amplitude and the autocorrelation time of an extrinsic Sine-Wiener noise on: (i) the Michaelis-Menten approximation of noisy enzymatic reactions, which we show to be applicable also in the co-presence of both intrinsic and extrinsic noise, (ii) a model of an enzymatic futile cycle and (iii) a genetic toggle switch. In (ii) and (iii) we show that the presence of a bounded extrinsic noise induces qualitative modifications in the probability densities of the involved chemicals, where new modes emerge, suggesting a possible functional role of bounded noises.

  4. Probability density functions characterizing PSC particle size distribution parameters for NAT and STS derived from in situ measurements between 1989 and 2010 above McMurdo Station, Antarctica, and between 1991-2004 above Kiruna, Sweden

    NASA Astrophysics Data System (ADS)

    Deshler, Terry

    2016-04-01

    Balloon-borne optical particle counters were used to make in situ size-resolved particle concentration measurements within polar stratospheric clouds (PSCs) over 20 years in the Antarctic and over 10 years in the Arctic. The measurements were made primarily during the late winter in the Antarctic and in the early and mid-winter in the Arctic. Measurements in early and mid-winter were also made during 5 years in the Antarctic. For the analysis, bimodal lognormal size distributions are fitted to 250-m averages of the particle concentration data. The characteristics of these fits, along with temperature, water and nitric acid vapor mixing ratios, are used to classify the PSC observations as either NAT, STS, ice, or some mixture of these. The vapor mixing ratios are obtained from satellite when possible; otherwise assumptions are made. This classification of the data is used to construct probability density functions for NAT, STS, and ice number concentration, median radius and distribution width for mid- and late-winter clouds in the Antarctic and for early and mid-winter clouds in the Arctic. Additional analysis is focused on characterizing the temperature histories associated with the particle classes and the different time periods. The results from these analyses will be presented, and should be useful for setting bounds on retrievals of PSC properties from remote measurements and for constraining model representations of PSCs.

  5. Discovery of wide low and very low-mass binary systems using Virtual Observatory tools

    NASA Astrophysics Data System (ADS)

    Gálvez-Ortiz, M. C.; Solano, E.; Lodieu, N.; Aberasturi, M.

    2017-04-01

    The frequency of multiple systems and their properties are key constraints on stellar formation and evolution. Formation mechanisms of very low-mass (VLM) objects are still under considerable debate, and an accurate assessment of their multiplicity and orbital properties is essential for constraining current theoretical models. Taking advantage of Virtual Observatory capabilities, we looked for comoving low-mass and VLM binary (or multiple) systems using the UKIDSS Large Area Survey (LAS) DR10, SDSS DR9 and 2MASS catalogues. Other catalogues (WISE, GLIMPSE, SuperCosmos, etc.) were used to derive the physical parameters of the systems. We report the identification of 36 low-mass and VLM (~M0-L0 spectral types) candidates for binary/multiple systems (separations between 200 and 92 000 au), whose physical association is confirmed through common proper motion, distance and low probability of chance alignment. This new system list notably increases the previous sampling of their mass-separation parameter space (~100). We have also found 50 low-mass objects that we can classify as ~L0-T2 according to their photometric information. Only one of these objects presents a common-proper-motion high-mass companion. Although we could not constrain the age of the majority of the candidates, most of them are probably still bound, except four that may be undergoing disruption processes. We suggest that our sample could be divided into two populations: one of tightly bound wide VLM systems that are expected to last more than 10 Gyr, and another of weakly bound wide VLM systems that will dissipate within a few Gyr.

  6. Anti-amyloid precursor protein antibodies inhibit amyloid-β production by steric hindrance

    PubMed Central

    Thomas, Rhian S.; Liddell, J. Eryl; Kidd, Emma J.

    2015-01-01

    Cleavage of amyloid precursor protein (APP) by β- and γ-secretases results in the production of amyloid-β (Aβ) in Alzheimer's disease (AD). We raised two monoclonal antibodies, 2B3 and 2B12, that recognise the β-secretase cleavage site on APP but not Aβ. We hypothesised that these antibodies would reduce Aβ levels via steric hindrance of β-secretase. Both antibodies decreased extracellular Aβ levels from astrocytoma cells, but 2B3 was more potent than 2B12. Levels of soluble sAPPα from the non-amyloidogenic α-secretase pathway and intracellular APP were not affected by either antibody nor were there any effects on cell viability. 2B3 exhibited a higher affinity for APP than 2B12 and its epitope appeared to span the cleavage site while 2B12 bound slightly upstream. Both of these factors probably contribute to its greater effect on Aβ levels. After 60 minutes incubation at pH 4.0, most 2B3 and 2B12 remained bound to their antigen, suggesting that the antibodies will remain bound to APP in the acidic endosomes where β-secretase cleavage probably occurs. Only 2B3 and 2B12, but not control antibodies, inhibited the cleavage of sAPPα by β-secretase in a cell-free assay where effects of antibody internalisation and intracellular degradation were excluded. 2B3 virtually abolished this cleavage. In addition, levels of C-terminal APP fragments, βCTF, generated following β-secretase cleavage, were significantly reduced in cells after incubation with 2B3. These results strongly suggest that anti-cleavage site antibodies can generically reduce Aβ levels via inhibition of β-secretase by steric hindrance and may provide a novel alternative therapy for AD. PMID:21122073

  7. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold; they simply set the same retransmission threshold for all sensor nodes in advance. That approach does not take link quality and delay requirements into account, which decreases the probability of a packet traversing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The objective of optimizing retransmission thresholds is to maximize the summation of the probabilities of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound on the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound on the transmission delay of the delivery path. If Δ is greater than polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of the retransmission thresholds are large enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
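
    A toy version of the optimization being described can be brute-forced in a few lines (the per-slot attempt model, independent links and the fixed attempt budget below are assumptions made for this sketch; the paper's dynamic-programming and approximation algorithms are not reproduced here):

        from itertools import product

        def on_time_prob(thresholds, p):
            """Probability the packet clears every hop within its per-hop attempt cap,
            assuming independent links (a simple lower-bound model)."""
            prob = 1.0
            for r, p_i in zip(thresholds, p):
                prob *= 1.0 - (1.0 - p_i) ** r
            return prob

        def best_thresholds(p, deadline, r_max=6):
            """Exhaustive search over per-hop retransmission thresholds whose total
            attempt budget fits within the deadline (measured in slots)."""
            best_prob, best_combo = 0.0, None
            for combo in product(range(1, r_max + 1), repeat=len(p)):
                if sum(combo) <= deadline:
                    prob = on_time_prob(combo, p)
                    if prob > best_prob:
                        best_prob, best_combo = prob, combo
            return best_combo, best_prob

        link_quality = [0.9, 0.6, 0.75]   # hypothetical per-attempt success probability per hop
        print(best_thresholds(link_quality, deadline=8))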

  8. Necessary and sufficient criterion for extremal quantum correlations in the simplest Bell scenario

    NASA Astrophysics Data System (ADS)

    Ishizaka, Satoshi

    2018-05-01

    In the study of quantum nonlocality, one obstacle is that an analytical criterion for identifying the boundary between quantum and postquantum correlations has not yet been given, even in the simplest Bell scenario. We propose a plausible, analytical, necessary and sufficient condition ensuring that a nonlocal quantum correlation in the simplest scenario is an extremal boundary point. Our extremality condition amounts to certifying an information-theoretic quantity: the probability of guessing a measurement outcome of a distant party, optimized over quantum instruments. We show that this quantity can be upper and lower bounded from any correlation in a device-independent way, and we use numerical calculations to confirm that coincidence of the upper and lower bounds appears to be necessary and sufficient for extremality.

  9. Quantum speedup of Monte Carlo methods.

    PubMed

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  10. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  11. The 1/N Expansion of Tensor Models Beyond Perturbation Theory

    NASA Astrophysics Data System (ADS)

    Gurau, Razvan

    2014-09-01

    We analyze in full mathematical rigor the most general quartically perturbed invariant probability measure for a random tensor. Using a version of the Loop Vertex Expansion (which we call the mixed expansion) we show that the cumulants can be written as explicit series in 1/N plus bounded remainder terms. The mixed expansion recasts the problem of determining the subleading corrections in 1/N into a simple combinatorial problem of counting trees decorated by a finite number of loop edges. As an aside, we use the mixed expansion to show that the (divergent) perturbative expansion of the tensor models is Borel summable and to prove that the cumulants respect a uniform scaling bound. In particular the quartically perturbed measures fall, in the N → ∞ limit, into the universality class of Gaussian tensor models.

  12. Mathematics of gravitational lensing: multiple imaging and magnification

    NASA Astrophysics Data System (ADS)

    Petters, A. O.; Werner, M. C.

    2010-09-01

    The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.

  13. First flavor-tagged determination of bounds on mixing-induced CP violation in Bs0 --> J/psiphi decays.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'Orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; 
Kwang, S; Laasanen, A T; Labarga, L; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, 
M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-04-25

    This Letter describes the first determination of bounds on the CP-violation parameter 2beta(s) using B(s)(0) decays in which the flavor of the bottom meson at production is identified. The result is based on approximately 2000 B(s)(0)-->J/psiphi decays reconstructed in a 1.35 fb(-1) data sample collected with the CDF II detector using pp̄ collisions produced at the Fermilab Tevatron. We report confidence regions in the two-dimensional space of 2beta(s) and the decay-width difference DeltaGamma. Assuming the standard model predictions of 2beta(s) and DeltaGamma, the probability of a deviation as large as the level of the observed data is 15%, corresponding to 1.5 Gaussian standard deviations.

  14. Matter scattering in quadratic gravity and unitarity

    NASA Astrophysics Data System (ADS)

    Abe, Yugo; Inami, Takeo; Izumi, Keisuke; Kitamura, Tomotaka

    2018-03-01

    We investigate the ultraviolet (UV) behavior of two-scalar elastic scattering with graviton exchanges in higher-curvature gravity theory. In Einstein gravity, matter scattering is shown not to satisfy the unitarity bound at tree level at high energy. Among some of the possible directions for the UV completion of Einstein gravity, such as string theory, modified gravity, and inclusion of high-mass/high-spin states, we take R_{μν}^2 gravity coupled to matter. We show that matter scattering with graviton interactions satisfies the unitarity bound at high energy, even with negative norm states due to the higher-order derivatives of metric components. The difference in the unitarity property of these two gravity theories is probably connected to that in another UV property, namely, the renormalizability property of the two.

  15. Quantum speedup of Monte Carlo methods

    PubMed Central

    Montanaro, Ashley

    2015-01-01

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently. PMID:26528079

  16. Magnesium and Calcium in Isolated Cell Nuclei

    PubMed Central

    Naora, H.; Naora, H.; Mirsky, A. E.; Allfrey, V. G.

    1961-01-01

    The calcium and magnesium contents of thymus nuclei have been determined and the nuclear sites of attachment of these two elements have been studied. The nuclei used for these purposes were isolated in non-aqueous media and in sucrose solutions. Non-aqueous nuclei contain 0.024 per cent calcium and 0.115 per cent magnesium. Calcium and magnesium are held at different sites. The greater part of the magnesium is bound to DNA, probably to its phosphate groups. Evidence is presented that the magnesium atoms combined with the phosphate groups of DNA are also attached to mononucleotides. There is reason to believe that those DNA-phosphate groups to which magnesium is bound, less than 1/10th of the total, are metabolically active, while those to which histones are attached seem to be inactive. PMID:13727745

  17. The dynamics of superclusters - Initial determination of the mass density of the universe at large scales

    NASA Technical Reports Server (NTRS)

    Ford, H. C.; Ciardullo, R.; Harms, R. J.; Bartko, F.

    1981-01-01

    The radial velocities of cluster members of two rich, large superclusters have been measured in order to probe the supercluster mass densities, and simple evolutionary models have been computed to place limits upon the mass density within each supercluster. These superclusters represent true physical associations of size of about 100 Mpc seen presently at an early stage of evolution. One supercluster is weakly bound, the other probably barely bound, but possibly marginally unbound. Gravity has noticeably slowed the Hubble expansion of both superclusters. Galaxy surface-density counts and the density enhancement of Abell clusters within each supercluster were used to derive the ratio of mass densities of the superclusters to the mean field mass density. The results strongly exclude a closed universe.

  18. A monogamy-of-entanglement game with applications to device-independent quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie

    2013-10-01

    We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.

  19. Probabilistic Reasoning for Robustness in Automated Planning

    NASA Technical Reports Server (NTRS)

    Schaffer, Steven; Clement, Bradley; Chien, Steve

    2007-01-01

    A general-purpose computer program for planning the actions of a spacecraft or other complex system has been augmented by incorporating a subprogram that reasons about uncertainties in such continuous variables as times taken to perform tasks and amounts of resources to be consumed. This subprogram computes parametric probability distributions for time and resource variables on the basis of user-supplied models of actions and resources that they consume. The current system accepts bounded Gaussian distributions over action duration and resource use. The distributions are then combined during planning to determine the net probability distribution of each resource at any time point. In addition to a full combinatoric approach, several approximations for arriving at these combined distributions are available, including maximum-likelihood and pessimistic algorithms. Each such probability distribution can then be integrated to obtain a probability that execution of the plan under consideration would violate any constraints on the resource. The key idea is to use these probabilities of conflict to score potential plans and drive a search toward planning low-risk actions. An output plan provides a balance between the user's specified aversion to risk and other measures of optimality.
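
    A minimal sketch of the idea, assuming truncated (bounded) Gaussians for per-action resource use and a simple Monte Carlo combination in place of the parametric machinery described above (the action models and capacity are hypothetical):

        import numpy as np

        rng = np.random.default_rng(0)

        def bounded_gaussian(mean, std, low, high, size):
            """Sample a Gaussian truncated to [low, high] by rejection."""
            out = np.empty(0)
            while out.size < size:
                draw = rng.normal(mean, std, size)
                out = np.concatenate([out, draw[(draw >= low) & (draw <= high)]])
            return out[:size]

        # Hypothetical plan: three actions drawing on one shared resource (arbitrary units).
        actions = [dict(mean=4.0, std=1.0, low=2.0, high=7.0),
                   dict(mean=3.0, std=0.5, low=2.0, high=4.5),
                   dict(mean=5.0, std=1.5, low=1.0, high=9.0)]
        capacity = 14.0
        n = 100_000

        total = sum(bounded_gaussian(a["mean"], a["std"], a["low"], a["high"], n) for a in actions)
        print("estimated probability of a resource conflict:", float(np.mean(total > capacity)))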

  20. Systems biology and the origins of life? part II. Are biochemical networks possible ancestors of living systems? networks of catalysed chemical reactions: non-equilibrium, self-organization and evolution.

    PubMed

    Ricard, Jacques

    2010-01-01

    The present article discusses the possibility that catalysed chemical networks can evolve. Even simple enzyme-catalysed chemical reactions can display this property. The example studied is that of a two-substrate proteinoid, or enzyme, reaction displaying random binding of its substrates A and B. The fundamental property of such a system is to display either emergence or integration depending on the respective values of the probabilities that the enzyme has bound one of its substrates regardless of whether it has bound the other substrate, or, specifically, after it has bound the other substrate. There is emergence of information if p(A) > p(AB) and p(B) > p(BA). Conversely, if p(A) < p(AB) and p(B) < p(BA), the system displays integration of information.

  1. Interactions between macromolecule-bound antioxidants and Trolox during liposome autoxidation: A multivariate approach.

    PubMed

    Çelik, Ecem Evrim; Rubio, Jose Manuel Amigo; Andersen, Mogens L; Gökmen, Vural

    2017-12-15

    The interactions between free and macromolecule-bound antioxidants were investigated in order to evaluate their combined effects on the antioxidant environment. Dietary fiber (DF)-, protein- and lipid-bound antioxidants, obtained from whole wheat, soybean and olive oil products, respectively, and Trolox were used for this purpose. Experimental studies were carried out in an autoxidizing liposome medium by monitoring the development of fluorescent products formed by lipid oxidation. Chemometric methods were used both at the experimental design and multivariate data analysis stages. Comparison of the simple additive effects of Trolox and bound antioxidants with the measured lipid oxidation revealed synergistic interactions with Trolox for DF- and refined olive oil-bound antioxidants, and antagonistic interactions for protein- and extra virgin olive oil-bound antioxidants. A generalized version of the logistic function was successfully used for modelling the oxidation curve of liposomes. Principal component analysis revealed two separate phases of liposome autoxidation. Copyright © 2017 Elsevier Ltd. All rights reserved.
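
    A sketch of the curve-fitting step, assuming one common parameterization of the generalized (Richards) logistic function and synthetic data; the paper does not specify its exact functional form or software, so everything named here is illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        def generalized_logistic(t, lower, upper, rate, t_mid, nu):
            """Richards-type generalized logistic curve (assumed parameterization)."""
            return lower + (upper - lower) / (1.0 + np.exp(-rate * (t - t_mid))) ** (1.0 / nu)

        # Synthetic fluorescent-oxidation-product time course (arbitrary units).
        t = np.linspace(0, 48, 25)
        rng = np.random.default_rng(1)
        y = generalized_logistic(t, 5, 100, 0.4, 20, 1.5) + rng.normal(0, 2, t.size)

        popt, _ = curve_fit(generalized_logistic, t, y, p0=[0, 90, 0.3, 18, 1.0], maxfev=10_000)
        print("fitted parameters:", np.round(popt, 3))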

  2. Ensemble-based characterization of unbound and bound states on protein energy landscape

    PubMed Central

    Ruvinsky, Anatoly M; Kirys, Tatsiana; Tuzikov, Alexander V; Vakser, Ilya A

    2013-01-01

    Physicochemical description of numerous cell processes is fundamentally based on the energy landscapes of the protein molecules involved. Although the whole energy landscape is difficult to reconstruct, increased attention to particular targets has provided enough structures for mapping functionally important subspaces associated with the unbound and bound protein structures. The subspace mapping produces a discrete representation of the landscape, hereafter called the energy spectrum. We compiled and characterized ensembles of bound and unbound conformations of six small proteins and explored their spectra in implicit solvent. First, the analysis of the unbound-to-bound changes points to conformational selection as the binding mechanism for four proteins. Second, results show that bound and unbound spectra often significantly overlap. Moreover, the larger the overlap, the smaller the root mean square deviation (RMSD) between the bound and unbound conformational ensembles. Third, the center of the unbound spectrum has a higher energy than the center of the corresponding bound spectrum of the dimeric and multimeric states for most of the proteins. This suggests that the unbound states often have larger entropy than the bound states. Fourth, the exhaustively long minimization, making small intrarotamer adjustments (all-atom RMSD ≤ 0.7 Å), dramatically reduces the distance between the centers of the bound and unbound spectra as well as the spectra extent. It condenses unbound and bound energy levels into a thin layer at the bottom of the energy landscape with an energy spacing that varies between 0.8–4.6 and 3.5–10.5 kcal/mol for the unbound and bound states, respectively. Finally, the analysis of protein energy fluctuations showed that protein vibrations themselves can excite the interstate transitions, including the unbound-to-bound ones. PMID:23526684

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Toomey, Bridget

    Evolving power systems with increasing levels of stochasticity call for the ability to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
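
    The union-bound reasoning the abstract refers to can be stated in one line (standard probability, not specific to this paper): for m constraints g_i(x, ξ) ≤ 0,

        \Pr\Big(\bigcup_{i=1}^{m}\{g_i(x,\xi) > 0\}\Big) \;\le\; \sum_{i=1}^{m}\Pr\big(g_i(x,\xi) > 0\big),

    so enforcing each single chance constraint at level ε_i with Σ_i ε_i = ε guarantees the joint constraint Pr(all g_i ≤ 0) ≥ 1 - ε. Boole's inequality is loose when violations are correlated, which is what motivates the tighter bound derived in the paper.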

  4. Su-Schrieffer-Heeger chain with one pair of PT-symmetric defects.

    PubMed

    Jin, L; Wang, P; Song, Z

    2017-07-19

    The topologically nontrivial edge states induce a PT transition in a Su-Schrieffer-Heeger (SSH) chain with one pair of gain and loss at the boundaries. In this study, we investigated a pair of PT-symmetric defects located inside the SSH chain, in particular with the defect locations at the chain centre. The PT-symmetry breaking of the bound states leads to the PT transition, and the PT-symmetric phases and the localized states were studied. In the broken PT-symmetric phase, all energy levels break simultaneously in the topologically trivial phase; however, two edge states in the topologically nontrivial phase are free from the influence of the PT-symmetric defects. We discovered PT-symmetric bound states induced by the PT-symmetric local defects at the SSH chain centre. The PT-symmetric bound states significantly increase the PT transition threshold and coalesce to the topologically protected zero mode with vanishing probabilities on every other site of the left-half chain and the right-half chain, respectively.

  5. Alder Establishment and Channel Dynamics in a Tributary of the South Fork Eel River, Mendocino County, California

    Treesearch

    William J. Trush; Edward C. Connor; Alan W. Knight

    1989-01-01

    Riparian communities established along Elder Creek, a tributary of the upper South Fork Eel River, are bounded by two frequencies of periodic flooding. The upper limit for the riparian zone occurs at bankfull stage. The lower riparian limit is associated with a more frequent stage height, called the active channel, having an exceedance probability of 11 percent on a...

  6. Bambus[6]uril as a novel macrocyclic receptor for the nitrate anion.

    PubMed

    Toman, Petr; Makrlík, Emanuel; Vanura, Petr

    2013-01-01

    By using quantum mechanical DFT calculations, the most probable structure of the bambus[6]uril x NO3(-) anionic complex species was derived. In this complex having C3 symmetry, the nitrate anion NO3(-), included in the macrocyclic cavity, is bound by twelve weak hydrogen bonds between methine hydrogen atoms on the convex face of glycoluril units and the considered NO3(-) ion.

  7. How to Detect the Location and Time of a Covert Chemical Attack: A Bayesian Approach

    DTIC Science & Technology

    2009-12-01

    Inverse Problems, Design and Optimization Symposium 2004, Rio de Janeiro, Brazil. Chan, R., and Yee, E. (1997). A simple model for the probability... sensor interpretation applications and has been successfully applied, for example, to estimate the source strength of pollutant releases in multi... coagulation, and second-order pollutant diffusion in sorption-desorption, are not linear. Furthermore, wide uncertainty bounds exist for several of

  8. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. The performance prediction tools, e.g., the channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.

  9. Network Design for Reliability and Resilience to Attack

    DTIC Science & Technology

    2014-03-01

    attacker can destroy n arcs in the network; SPNI: Shortest-Path Network-Interdiction problem; TSP: Traveling Salesman Problem; UB: upper bound; UKR: Ukraine. ...elimination from the traveling salesman problem (TSP). The literature calls a walk that does not contain a cycle a path [19]. The objective function in... arc lengths as random variables with known probability distributions. The m-median problem seeks to design a network with minimum average travel cost

  10. Probabilistically Bounded Staleness for Practical Partial Quorums

    DTIC Science & Technology

    2012-01-03

    probability of non-intersection between any two quorums decreases. To the best of our knowledge, probabilistic quorums have only been used to study the... Practice In practice, many distributed data management systems use quorums as a replication mechanism. Amazon's Dynamo [21] is the progenitor of a... Abbadi. Resilient logical structures for efficient management of replicated data. In VLDB 1992. [9] D. Agrawal and A. E. Abbadi. The tree quorum

  11. Tight bounds for the Pearle-Braunstein-Caves chained inequality without the fair-coincidence assumption

    NASA Astrophysics Data System (ADS)

    Jogenfors, Jonathan; Larsson, Jan-Åke

    2017-08-01

    In any Bell test, loopholes can cause issues in the interpretation of the results, since an apparent violation of the inequality may not correspond to a violation of local realism. An important example is the coincidence-time loophole that arises when detector settings might influence the time when detection will occur. This effect can be observed in many experiments where measurement outcomes are to be compared between remote stations because the interpretation of an ostensible Bell violation strongly depends on the method used to decide coincidence. The coincidence-time loophole has previously been studied for the Clauser-Horne-Shimony-Holt and Clauser-Horne inequalities, but recent experiments have shown the need for a generalization. Here, we study the generalized "chained" inequality by Pearle, Braunstein, and Caves (PBC) with N ≥2 settings per observer. This inequality has applications in, for instance, quantum key distribution where it has been used to reestablish security. In this paper we give the minimum coincidence probability for the PBC inequality for all N ≥2 and show that this bound is tight for a violation free of the fair-coincidence assumption. Thus, if an experiment has a coincidence probability exceeding the critical value derived here, the coincidence-time loophole is eliminated.
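
    For reference, the chained inequality under study takes the standard form for N settings per observer (textbook material, not reproduced from the paper; the paper's contribution concerns how coincidences are counted when estimating the correlators):

        S_N = E(A_1 B_1) + E(B_1 A_2) + E(A_2 B_2) + \cdots + E(A_N B_N) - E(B_N A_1) \le 2N - 2,

    while quantum mechanics allows S_N = 2N\cos(\pi/2N), which exceeds the local bound for every N ≥ 2 (for N = 2 this reduces to the familiar CHSH values 2 and 2\sqrt{2}).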

  12. A comparison of Probability Of Detection (POD) data determined using different statistical methods

    NASA Astrophysics Data System (ADS)

    Fahr, A.; Forsyth, D.; Bullock, M.

    1993-12-01

    Different statistical methods have been suggested for determining probability of detection (POD) data for nondestructive inspection (NDI) techniques. A comparative assessment of various methods of determining POD was conducted using results of three NDI methods obtained by inspecting actual aircraft engine compressor disks which contained service induced cracks. The study found that the POD and 95 percent confidence curves as a function of crack size as well as the 90/95 percent crack length vary depending on the statistical method used and the type of data. The distribution function as well as the parameter estimation procedure used for determining POD and the confidence bound must be included when referencing information such as the 90/95 percent crack length. The POD curves and confidence bounds determined using the range interval method are very dependent on information that is not from the inspection data. The maximum likelihood estimators (MLE) method does not require such information and the POD results are more reasonable. The log-logistic function appears to model POD of hit/miss data relatively well and is easy to implement. The log-normal distribution using MLE provides more realistic POD results and is the preferred method. Although it is more complicated and slower to calculate, it can be implemented on a common spreadsheet program.
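
    A minimal sketch of the hit/miss modelling approach the study favours, with the log-odds of detection linear in log crack size and parameters obtained by maximum likelihood; the inspection data below are invented for illustration, and the 90/95 bound itself (which additionally needs a confidence calculation) is not computed:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical hit/miss inspection data: crack lengths (mm) and detections (1 = hit).
        a = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 4.0, 5.0, 6.0])
        hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1])

        def neg_log_lik(theta):
            """Negative Bernoulli log-likelihood for POD(a) = logistic(b0 + b1*ln a)."""
            b0, b1 = theta
            p = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(a))))
            p = np.clip(p, 1e-12, 1 - 1e-12)
            return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0]))
        b0, b1 = fit.x
        a90 = np.exp((np.log(9.0) - b0) / b1)   # crack length with 90% POD (point estimate only)
        print("MLE parameters:", fit.x, "  a90 ~", round(float(a90), 2), "mm")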

  13. Singularity spectrum of intermittent seismic tremor at Kilauea Volcano, Hawaii

    USGS Publications Warehouse

    Shaw, H.R.; Chouet, B.

    1989-01-01

    Fractal singularity analysis (FSA) is used to study a 22-yr record of deep seismic tremor (30-60 km depth) for regions below Kilauea Volcano on the assumption that magma transport and fracture can be treated as a system of coupled nonlinear oscillators. Tremor episodes range from 1 to 100 min (cumulative duration = 1.60 × 10^4 min; yearly average = 727 min yr^-1; mean gradient = 24.2 min yr^-1 km^-1). The partitioning of probabilities, P_i, in the phase space of normalized durations, x_i, is expressed in terms of a function f(α), where α is a variable exponent of a length scale, l. Plots of f(α) vs. α are called multifractal singularity spectra. The spectrum for deep tremor durations is bounded by α values of about 0.4 and 1.9 at f = 0; f_max ≈ 1.0 for α ≈ 1. Results for tremor are similar to those found for systems transitional between complete mode locking and chaos. -Authors

  14. A large-aperture low-cost hydrophone array for tracking whales from small boats.

    PubMed

    Miller, B; Dawson, S

    2009-11-01

    A passive sonar array designed for tracking diving sperm whales in three dimensions from a single small vessel is presented, and the advantages and limitations of operating this array from a 6 m boat are described. The system consists of four free floating buoys, each with a hydrophone, built-in recorder, and global positioning system receiver (GPS), and one vertical stereo hydrophone array deployed from the boat. Array recordings are post-processed onshore to obtain diving profiles of vocalizing sperm whales. Recordings are synchronized using a GPS timing pulse recorded onto each track. Sensitivity analysis based on hyperbolic localization methods is used to obtain probability distributions for the whale's three-dimensional location for vocalizations received by at least four hydrophones. These localizations are compared to those obtained via isodiachronic sequential bound estimation. Results from deployment of the system around a sperm whale in the Kaikoura Canyon in New Zealand are shown.

  15. Modeling of the reactant conversion rate in a turbulent shear flow

    NASA Technical Reports Server (NTRS)

    Frankel, S. H.; Madnia, C. K.; Givi, P.

    1992-01-01

    Results are presented of direct numerical simulations (DNS) of spatially developing shear flows under the influence of infinitely fast chemical reactions of the type A + B yields Products. The simulation results are used to construct the compositional structure of the scalar field in a statistical manner. The results of this statistical analysis indicate that the use of a Beta density for the probability density function (PDF) of an appropriate Shvab-Zeldovich mixture fraction provides a very good estimate of the limiting bounds of the reactant conversion rate within the shear layer. This provides a strong justification for the implementation of this density in practical modeling of non-homogeneous turbulent reacting flows. However, the validity of the model cannot be generalized for predictions of higher order statistical quantities. A closed form analytical expression is presented for predicting the maximum rate of reactant conversion in non-homogeneous reacting turbulence.

  16. Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Bronski, Jared C.; DeVille, Lee; Park, Moon Jip

    2012-09-01

    We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
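
    A small numerical sketch of the finite-N system under discussion (the Euler integration and the locking criterion are choices made for this illustration): draw a Gaussian frequency vector, integrate dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j - θ_i), and check whether the instantaneous frequencies entrain; repeating over many frequency draws estimates the probability of phase locking that the paper bounds analytically.

        import numpy as np

        rng = np.random.default_rng(3)

        def phase_locks(omega, K, dt=0.01, steps=10000, tol=1e-3):
            """Integrate the Kuramoto model and report whether all oscillators entrain."""
            N = omega.size
            theta = rng.uniform(0.0, 2.0 * np.pi, N)
            for _ in range(steps):
                coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
                theta = theta + dt * (omega + coupling)
            coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
            return np.ptp(omega + coupling) < tol   # spread of instantaneous frequencies

        N, K, trials = 10, 2.0, 50
        locked = sum(phase_locks(rng.normal(0.0, 1.0, N), K) for _ in range(trials))
        print(f"estimated probability of full phase locking at K={K}: {locked / trials:.2f}")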

  17. Continuous punishment and the potential of gentle rule enforcement.

    PubMed

    Erev, Ido; Ingram, Paul; Raz, Ornit; Shany, Dror

    2010-05-01

    The paper explores the conditions that determine the effect of rule enforcement policies that imply an attempt to punish all the visible violations of the rule. We start with a simple game-theoretic analysis that highlights the value of gentle COntinuous Punishment (gentle COP) policies. If the subjects of the rule are rational, gentle COP can eliminate violations even when the rule enforcer has limited resources. The second part of the paper uses simulations to examine the robustness of gentle COP policies to likely deviations from rationality. The results suggest that when the probability of detecting violations is sufficiently high, gentle COP policies can be effective even when the subjects of the rule are boundedly rational adaptive learners. The paper concludes with experimental studies that clarify the value of gentle COP policies in the lab, and in an attempt to eliminate cheating in exams. Copyright (c) 2009 Elsevier B.V. All rights reserved.

  18. A modern approach to superradiance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endlich, Solomon; Penco, Riccardo

    In this paper, we provide a simple and modern discussion of rotational super-radiance based on quantum field theory. We work with an effective theory valid at scales much larger than the size of the spinning object responsible for superradiance. Within this framework, the probability of absorption by an object at rest completely determines the superradiant amplification rate when that same object is spinning. We first discuss in detail superradiant scattering of spin 0 particles with orbital angular momentum ℓ = 1, and then extend our analysis to higher values of orbital angular momentum and spin. Along the way, we provide a simple derivation of vacuum friction — a "quantum torque" acting on spinning objects in empty space. Our results apply not only to black holes but to arbitrary spinning objects. We also discuss superradiant instability due to formation of bound states and, as an illustration, we calculate the instability rate Γ for bound states with massive spin 1 particles. For a black hole with mass M and angular velocity Ω, we find Γ ~ (GMμ)⁷Ω when the particle's Compton wavelength 1/μ is much greater than the size GM of the spinning object. This rate is parametrically much larger than the instability rate for spin 0 particles, which scales like (GMμ)⁹Ω. This enhanced instability rate can be used to constrain the existence of ultralight particles beyond the Standard Model.

  19. Ant-inspired density estimation via random walks

    PubMed Central

    Musco, Cameron; Su, Hsin-Hao

    2017-01-01

    Many ant species use distributed population density estimation in applications ranging from quorum sensing, to task allocation, to appraisal of enemy colony strength. It has been shown that ants estimate local population density by tracking encounter rates: The higher the density, the more often the ants bump into each other. We study distributed density estimation from a theoretical perspective. We prove that a group of anonymous agents randomly walking on a grid are able to estimate their density within a small multiplicative error in few steps by measuring their rates of encounter with other agents. Despite dependencies inherent in the fact that nearby agents may collide repeatedly (and, worse, cannot recognize when this happens), our bound nearly matches what would be required to estimate density by independently sampling grid locations. From a biological perspective, our work helps shed light on how ants and other social insects can obtain relatively accurate density estimates via encounter rates. From a technical perspective, our analysis provides tools for understanding complex dependencies in the collision probabilities of multiple random walks. We bound the strength of these dependencies using local mixing properties of the underlying graph. Our results extend beyond the grid to more general graphs, and we discuss applications to size estimation for social networks, density estimation for robot swarms, and random walk-based sampling for sensor networks. PMID:28928146
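
    The encounter-rate idea can be reproduced in a few lines of simulation. In the hypothetical sketch below, agents random-walk on a torus grid and each agent estimates the global density from its own collision count; the grid size, walk length, and agent count are arbitrary choices for illustration and do not reflect the parameters or proofs of the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      side, n_agents, n_steps = 50, 125, 2000
      density = n_agents / side**2

      pos = rng.integers(0, side, size=(n_agents, 2))
      encounters = np.zeros(n_agents)
      moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

      for _ in range(n_steps):
          pos = (pos + moves[rng.integers(0, 4, n_agents)]) % side
          # Count, for each agent, how many other agents share its cell this step.
          cells, counts = np.unique(pos[:, 0] * side + pos[:, 1], return_counts=True)
          occupancy = dict(zip(cells, counts))
          encounters += np.array([occupancy[x * side + y] - 1 for x, y in pos])

      estimates = encounters / n_steps   # per-step encounter rate approximates local density
      print(f"true density {density:.4f}, mean estimate {estimates.mean():.4f}")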

  20. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.

  1. Geology of Pluto and Charon Overview

    NASA Astrophysics Data System (ADS)

    Moore, J. M.; Stern, A.; Weaver, H. A., Jr.; Young, L. A.; Ennico Smith, K.; Olkin, C.

    2015-12-01

    Pluto's surface was found to be remarkably diverse in terms of its range of landforms, terrain ages, and inferred geological processes. There is a latitudinal zonation of albedo. The conspicuous bright albedo heart-shaped feature informally named Tombaugh Regio is comprised of several terrain types. Most striking is Texas-sized Sputnik Planum, which is apparently level, has no observable craters, and is divided by polygons and ovoids bounded by shallow troughs. Small smooth hills are seen in some of the polygon-bounding troughs. These hills could either be extruded or exposed by erosion. Sputnik Planum polygon/ovoid formation hypotheses range from convection to contraction, but convection is currently favored. There is evidence of flow of plains material around obstacles. Mountains, especially those seen south of Sputnik Planum, exhibit too much relief to be made of CH4, CO, or N2, and thus are probably composed of H2O-ice basement material. The north contact of Sputnik Planum abuts a scarp, above which is heavily modified cratered terrain. Pluto's large moon Charon is generally heavily to moderately cratered. There is a mysterious structure in the arctic. Charon's surface is crossed by an extensive system of rift faults and graben. Some regions are smoother and less cratered, reminiscent of lunar maria. On such a plain are large isolated block mountains surrounded by moats. At this conference we will present highlights of the latest observations and analysis.

  2. A modern approach to superradiance

    DOE PAGES

    Endlich, Solomon; Penco, Riccardo

    2017-05-10

    In this paper, we provide a simple and modern discussion of rotational super-radiance based on quantum field theory. We work with an effective theory valid at scales much larger than the size of the spinning object responsible for superradiance. Within this framework, the probability of absorption by an object at rest completely determines the superradiant amplification rate when that same object is spinning. We first discuss in detail superradiant scattering of spin 0 particles with orbital angular momentum ℓ = 1, and then extend our analysis to higher values of orbital angular momentum and spin. Along the way, we provide a simple derivation of vacuum friction — a "quantum torque" acting on spinning objects in empty space. Our results apply not only to black holes but to arbitrary spinning objects. We also discuss superradiant instability due to formation of bound states and, as an illustration, we calculate the instability rate Γ for bound states with massive spin 1 particles. For a black hole with mass M and angular velocity Ω, we find Γ ~ (GMμ)⁷Ω when the particle's Compton wavelength 1/μ is much greater than the size GM of the spinning object. This rate is parametrically much larger than the instability rate for spin 0 particles, which scales like (GMμ)⁹Ω. This enhanced instability rate can be used to constrain the existence of ultralight particles beyond the Standard Model.

  3. Rotational dynamics of spin-labeled F-actin during activation of myosin S1 ATPase using caged ATP.

    PubMed Central

    Ostap, E. M.; Thomas, D. D.

    1991-01-01

    The most probable source of force generation in muscle fibers is the rotation of the myosin head when bound to actin. This laboratory has demonstrated that ATP induces microsecond rotational motions of spin-labeled myosin heads bound to actin (Berger, C. L., E. C. Svensson, and D. D. Thomas. 1989. Proc. Natl. Acad. Sci. USA. 86:8753-8757). Our goal is to determine whether the observed ATP-induced rotational motions of actin-bound heads are accompanied by changes in actin rotational motions. We have used saturation transfer electron paramagnetic resonance (ST-EPR) and laser-induced photolysis of caged ATP to monitor changes in the microsecond rotational dynamics of spin-labeled F-actin in the presence of myosin subfragment-1 (S1). A maleimide spin label was attached selectively to cys-374 on actin. In the absence of ATP (with or without caged ATP), the ST-EPR spectrum (corresponding to an effective rotational time of approximately 150 microseconds) was essentially the same as observed for the same spin label bound to cys-707 (SH1) on S1, indicating that S1 is rigidly bound to actin in rigor. At normal ionic strength (μ = 186 mM), a decrease in ST-EPR intensity (increase in microsecond F-actin mobility) was clearly indicated upon photolysis of 1 mM caged ATP with a 50-ms, 351-nm laser pulse. This increase in mobility is due to the complete dissociation of S1 from the actin filament. At low ionic strength (μ = 36 mM), when about half the S1 heads remain bound during ATP hydrolysis, no change in the actin mobility was detected, despite much faster motions of labeled S1 bound to actin. Therefore, we conclude that the active interaction of S1, actin, and ATP induces rotation of myosin heads relative to actin, but does not affect the microsecond rotational motion of actin itself, as detected at cys-374 of actin. PMID:1651780

  4. Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, J.; Bowman, J.M.

    1996-05-01

    The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.

  5. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azunre, P.

    Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.

  6. Assessment of safety distance between components of nuclear plant and study of the vulnerability of the damage caused by an explosion

    NASA Astrophysics Data System (ADS)

    Ismaila, Aminu; Md Kasmani, Rafiziana; Meng-Hock, Koh; Termizi Ramli, Ahmad

    2017-10-01

    This paper deals with the assessment of an external explosion, resulting from an accidental release of jet fuel from a large commercial airliner, at a nuclear power plant (NPP). The study used three widely used prediction methods, trinitrotoluene equivalency (TNT), multi-energy (TNO), and Baker-Strehlow (BST), to determine the unconfined vapour cloud explosion (UVCE) overpressure within distances of 100-1400 m from the first impact location. The containment building was taken as the reference position. Fatalities and structural damage were estimated using the probit methodology. Analysis of the results shows that both the reactor building and the control room would be highly damaged, with risk consequences and probability depending on the assumed position of the crash. Structures at a radial distance of 600 m may suffer major structural damage with probability ranging from 25 to 100%. Minor structural damage was observed throughout the bounds of the plant complex. People working within a 250 m radius may be affected, with fatality probabilities ranging from 28 to 100%. The findings of this study are valuable for evaluating the safety improvements needed on the NPP site and the risks and consequences associated with hydrocarbon fuel releases/fires due to external hazards.
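
    The probit step mentioned above maps an overpressure (or dose) into a damage or fatality probability. The sketch below shows the generic recipe with placeholder probit coefficients; the coefficients actually used in the study are not given in the abstract, so the values here are purely illustrative.

      from math import log
      from statistics import NormalDist

      def probit_probability(P_overpressure_pa, a, b):
          """Generic probit model: Y = a + b*ln(P); probability = Phi(Y - 5)."""
          Y = a + b * log(P_overpressure_pa)
          return NormalDist().cdf(Y - 5.0)

      # Hypothetical coefficients for "major structural damage" (illustrative only).
      a, b = -14.0, 1.9
      for P in (5_000.0, 20_000.0, 50_000.0):  # peak overpressure in Pa
          print(f"P = {P:8.0f} Pa -> damage probability {probit_probability(P, a, b):.2f}")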

  7. The Dolinar Receiver in an Information Theoretic Framework

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Birnbaum, Kevin M.; Moision, Bruce E.; Dolinar, Samuel J.

    2011-01-01

    Optical communication at the quantum limit requires that measurements on the optical field be maximally informative, but devising physical measurements that accomplish this objective has proven challenging. The Dolinar receiver exemplifies a rare instance of success in distinguishing between two coherent states: an adaptive local oscillator is mixed with the signal prior to photodetection, which yields an error probability that meets the Helstrom lower bound with equality. Here we apply the same local-oscillator-based architecture with an information-theoretic optimization criterion. We begin with an analysis of this receiver in a general framework for an arbitrary coherent-state modulation alphabet, and then we concentrate on two relevant examples. First, we study a binary antipodal alphabet and show that the Dolinar receiver's feedback function not only minimizes the probability of error, but also maximizes the mutual information. Next, we study ternary modulation consisting of antipodal coherent states and the vacuum state. We derive an analytic expression for a near-optimal local oscillator feedback function, and, via simulation, we determine its photon information efficiency (PIE). We provide the PIE versus dimensional information efficiency (DIE) trade-off curve and show that this modulation and receiver combination performs universally better than (generalized) on-off keying plus photon counting, although the advantage asymptotically vanishes as the bits-per-photon diverges towards infinity.
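
    For the binary antipodal case, the Helstrom bound that the Dolinar receiver attains has a closed form. The snippet below evaluates it for equiprobable coherent states |α⟩ and |−α⟩ and compares it with the error probability of an ideal Kennedy-style (displace-and-count) benchmark; this is a textbook comparison, sketched here for illustration rather than taken from the paper.

      import numpy as np

      def helstrom_binary_coherent(nbar):
          """Helstrom bound for equiprobable |alpha> vs |-alpha>, nbar = |alpha|^2.
          Overlap |<alpha|-alpha>|^2 = exp(-4*nbar)."""
          return 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * nbar)))

      def kennedy_benchmark(nbar):
          """Displace |-alpha> to vacuum; an ideal photon counter errs only when
          the displaced 'on' state |2 alpha> gives zero counts."""
          return 0.5 * np.exp(-4.0 * nbar)

      for nbar in (0.05, 0.2, 0.5, 1.0):
          print(f"nbar={nbar:4.2f}  Helstrom={helstrom_binary_coherent(nbar):.3e}  "
                f"Kennedy-style={kennedy_benchmark(nbar):.3e}")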

  8. Higher order terms in the inflation potential and the lower bound on the tensor to scalar ratio r

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Destri, C., E-mail: Claudio.Destri@mib.infn.it; Vega, H.J. de, E-mail: devega@lpthe.jussieu.fr; Observatoire de Paris, LERMA, Laboratoire Associe au CNRS UMR 8112, 61, Avenue de l'Observatoire, 75014 Paris

    Research Highlights: > In the Ginsburg-Landau (G-L) approach, the data favor new inflation over chaotic inflation. > n_s and r fall inside a universal banana-shaped region in G-L new inflation. > The banana region for the observed value n_s = 0.964 implies 0.021 < r < 0.053. > The fermion condensate inflaton potential is a double well in the G-L class. - Abstract: The MCMC analysis of the CMB + LSS data in the context of the Ginsburg-Landau approach to inflation indicated that the fourth degree double-well inflaton potential in new inflation gives an excellent fit of the present CMB and LSS data. This provided a lower bound for the ratio r of the tensor to scalar fluctuations and as most probable value r ≈ 0.05, within reach of the forthcoming CMB observations. In this paper we systematically analyze the effects of arbitrarily higher order terms in the inflaton potential on the CMB observables: spectral index n_s and ratio r. Furthermore, we compute in closed form the inflaton potential dynamically generated when the inflaton field is a fermion condensate in the inflationary universe. This inflaton potential turns out to belong to the Ginsburg-Landau class too. The theoretical values in the (n_s, r) plane for all double-well inflaton potentials in the Ginsburg-Landau approach (including the potential generated by fermions) fall inside a universal banana-shaped region B. The upper border of the banana-shaped region B is given by the fourth order double-well potential and provides an upper bound for the ratio r. The lower border of B is defined by the quadratic plus an infinite barrier inflaton potential and provides a lower bound for the ratio r. For example, the current best value of the spectral index n_s = 0.964 implies that r is in the interval 0.021 < r < 0.053. Interestingly enough, this range is within reach of forthcoming CMB observations.

  9. Economic Analysis of the Impact of Overseas and Domestic Treatment and Screening Options for Intestinal Helminth Infection among US-Bound Refugees from Asia.

    PubMed

    Maskery, Brian; Coleman, Margaret S; Weinberg, Michelle; Zhou, Weigong; Rotz, Lisa; Klosovsky, Alexander; Cantey, Paul T; Fox, LeAnne M; Cetron, Martin S; Stauffer, William M

    2016-08-01

    Many U.S.-bound refugees travel from countries where intestinal parasites (hookworm, Trichuris trichuria, Ascaris lumbricoides, and Strongyloides stercoralis) are endemic. These infections are rare in the United States and may be underdiagnosed or misdiagnosed, leading to potentially serious consequences. This evaluation examined the costs and benefits of combinations of overseas presumptive treatment of parasitic diseases vs. domestic screening/treating vs. no program. An economic decision tree model terminating in Markov processes was developed to estimate the cost and health impacts of four interventions on an annual cohort of 27,700 U.S.-bound Asian refugees: 1) "No Program," 2) U.S. "Domestic Screening and Treatment," 3) "Overseas Albendazole and Ivermectin" presumptive treatment, and 4) "Overseas Albendazole and Domestic Screening for Strongyloides". Markov transition state models were used to estimate long-term effects of parasitic infections. Health outcome measures (four parasites) included outpatient cases, hospitalizations, deaths, life years, and quality-adjusted life years (QALYs). The "No Program" option is the least expensive ($165,923 per cohort) and least effective option (145 outpatient cases, 4.0 hospitalizations, and 0.67 deaths discounted over a 60-year period for a one-year cohort). The "Overseas Albendazole and Ivermectin" option ($418,824) is less expensive than "Domestic Screening and Treatment" ($3,832,572) or "Overseas Albendazole and Domestic Screening for Strongyloides" ($2,182,483). According to the model outcomes, the most effective treatment option is "Overseas Albendazole and Ivermectin," which reduces outpatient cases, deaths and hospitalization by around 80% at an estimated net cost of $458,718 per death averted, or $2,219/$24,036 per QALY/life year gained relative to "No Program". Overseas presumptive treatment for U.S.-bound refugees is a cost-effective intervention that is less expensive and at least as effective as domestic screening and treatment programs. The addition of ivermectin to albendazole reduces the prevalence of chronic strongyloidiasis and the probability of rare, but potentially fatal, disseminated strongyloidiasis.
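
    A back-of-the-envelope check of the headline cost-effectiveness figure can be done directly from the numbers quoted in the abstract. The deaths-averted figure below is derived from the stated roughly 80% reduction applied to the 0.67 baseline deaths and is therefore an approximation; the published $458,718-per-death-averted value also folds in downstream treatment-cost offsets not reproduced here.

      # Figures quoted in the abstract (per annual cohort of 27,700 refugees).
      cost_no_program = 165_923.0
      cost_overseas_alb_ivm = 418_824.0
      baseline_deaths = 0.67
      reduction = 0.80            # "reduces ... deaths ... by around 80%" (approximate)

      deaths_averted = baseline_deaths * reduction
      incremental_cost = cost_overseas_alb_ivm - cost_no_program
      print(f"incremental program cost: ${incremental_cost:,.0f}")
      print(f"approx. deaths averted:  {deaths_averted:.2f}")
      print(f"rough cost per death averted: ${incremental_cost / deaths_averted:,.0f}")
      # The rough value (~$470k) is of the same order as the published $458,718,
      # which additionally accounts for averted outpatient and hospitalization costs.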

  10. Economic Analysis of the Impact of Overseas and Domestic Treatment and Screening Options for Intestinal Helminth Infection among US-Bound Refugees from Asia

    PubMed Central

    Maskery, Brian; Coleman, Margaret S.; Weinberg, Michelle; Zhou, Weigong; Rotz, Lisa; Klosovsky, Alexander; Cantey, Paul T.; Fox, LeAnne M.; Cetron, Martin S.; Stauffer, William M.

    2016-01-01

    Background Many U.S.-bound refugees travel from countries where intestinal parasites (hookworm, Trichuris trichuria, Ascaris lumbricoides, and Strongyloides stercoralis) are endemic. These infections are rare in the United States and may be underdiagnosed or misdiagnosed, leading to potentially serious consequences. This evaluation examined the costs and benefits of combinations of overseas presumptive treatment of parasitic diseases vs. domestic screening/treating vs. no program. Methods An economic decision tree model terminating in Markov processes was developed to estimate the cost and health impacts of four interventions on an annual cohort of 27,700 U.S.-bound Asian refugees: 1) “No Program,” 2) U.S. “Domestic Screening and Treatment,” 3) “Overseas Albendazole and Ivermectin” presumptive treatment, and 4) “Overseas Albendazole and Domestic Screening for Strongyloides”. Markov transition state models were used to estimate long-term effects of parasitic infections. Health outcome measures (four parasites) included outpatient cases, hospitalizations, deaths, life years, and quality-adjusted life years (QALYs). Results The “No Program” option is the least expensive ($165,923 per cohort) and least effective option (145 outpatient cases, 4.0 hospitalizations, and 0.67 deaths discounted over a 60-year period for a one-year cohort). The “Overseas Albendazole and Ivermectin” option ($418,824) is less expensive than “Domestic Screening and Treatment” ($3,832,572) or “Overseas Albendazole and Domestic Screening for Strongyloides” ($2,182,483). According to the model outcomes, the most effective treatment option is “Overseas Albendazole and Ivermectin,” which reduces outpatient cases, deaths and hospitalization by around 80% at an estimated net cost of $458,718 per death averted, or $2,219/$24,036 per QALY/life year gained relative to “No Program”. Discussion Overseas presumptive treatment for U.S.-bound refugees is a cost-effective intervention that is less expensive and at least as effective as domestic screening and treatment programs. The addition of ivermectin to albendazole reduces the prevalence of chronic strongyloidiasis and the probability of rare, but potentially fatal, disseminated strongyloidiasis. PMID:27509077

  11. Bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations

    DOE PAGES

    Azunre, P.

    2016-09-21

    Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical example of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.
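
    As a toy analogue of the bounding idea (much simpler than the coupled parabolic PDEs treated in the paper), the sketch below bounds the solution of a scalar ODE dx/dt = -p*x whose parameter p is only known to lie in an interval. Because the solution is monotone in p for positive x, the two envelope trajectories obtained from the endpoint parameter values bound every solution in the family; the parameter interval and step size are illustrative assumptions.

      import numpy as np

      def euler(p, x0=1.0, T=2.0, dt=1e-3):
          """Forward-Euler solution of dx/dt = -p*x."""
          x = np.empty(int(T / dt) + 1)
          x[0] = x0
          for k in range(len(x) - 1):
              x[k + 1] = x[k] + dt * (-p * x[k])
          return x

      p_lo, p_hi = 0.5, 1.5                 # parameter interval (illustrative)
      upper = euler(p_lo)                   # slowest decay -> upper envelope
      lower = euler(p_hi)                   # fastest decay -> lower envelope
      sample = euler(1.1)                   # any p in [p_lo, p_hi]
      inside = np.all((sample >= lower - 1e-12) & (sample <= upper + 1e-12))
      print("sample trajectory bounded by envelopes:", inside)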

  12. A New Empirical Constraint on the Prevalence of Technological Species in the Universe.

    PubMed

    Frank, A; Sullivan, W T

    2016-05-01

    In this article, we address the cosmic frequency of technological species. Recent advances in exoplanet studies provide strong constraints on all astrophysical terms in the Drake equation. Using these and modifying the form and intent of the Drake equation, we set a firm lower bound on the probability that one or more technological species have evolved anywhere and at any time in the history of the observable Universe. We find that as long as the probability that a habitable zone planet develops a technological species is larger than ∼10⁻²⁴, humanity is not the only time technological intelligence has evolved. This constraint has important scientific and philosophical consequences.
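
    The arithmetic behind the quoted threshold can be sketched as follows. The count of habitable-zone planets used here is an illustrative round number, not the value adopted in the paper; the point is only that the probability of at least one other technological species is 1 - (1 - p)^N, which is approximately N*p for small p.

      import math

      N_habitable = 1e22          # illustrative count of habitable-zone planets (order of magnitude)
      for p in (1e-26, 1e-24, 1e-22):
          # Probability that technology arose on at least one planet other than Earth.
          p_at_least_one = 1.0 - math.exp(N_habitable * math.log1p(-p))
          print(f"p = {p:.0e}:  P(at least one other) = {p_at_least_one:.3f}")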

  13. Target annihilation by diffusing particles in inhomogeneous geometries

    NASA Astrophysics Data System (ADS)

    Cassi, Davide

    2009-09-01

    The survival probability of immobile targets annihilated by a population of random walkers on inhomogeneous discrete structures, such as disordered solids, glasses, fractals, polymer networks, and gels, is analytically investigated. It is shown that, while it cannot in general be related to the number of distinct visited points as in the case of homogeneous lattices, in the case of bounded coordination numbers its asymptotic behavior at large times can still be expressed in terms of the spectral dimension d˜, and its exact analytical expression is given. The results show that the asymptotic survival probability is site-independent on recurrent structures (d˜≤2), while on transient structures (d˜>2) it can strongly depend on the target position, and such dependence is explicitly calculated.

  14. Kinetics of removal of intravenous testosterone pulses in normal men.

    PubMed

    Veldhuis, Johannes D; Keenan, Daniel M; Liu, Peter Y; Takahashi, Paul Y

    2010-04-01

    Testosterone is secreted into the bloodstream episodically, putatively distributing into total, bioavailable (bio) nonsex hormone-binding globulin (nonSHBG-bound), and free testosterone moieties. The kinetics of total, bio, and free testosterone pulses are unknown. Design: Adrenal and gonadal steroidogenesis was blocked pharmacologically, glucocorticoid was replaced, and testosterone was infused in pulses at four distinct doses in 14 healthy men under two different paradigms (a total of 220 testosterone pulses). Testosterone kinetics were assessed by deconvolution analysis of total, free, bioavailable, SHBG-bound, and albumin-bound testosterone concentration-time profiles. Independently of testosterone dose or paradigm, rapid-phase half-lives (min) of total, free, bioavailable, SHBG-bound, and albumin-bound testosterone were comparable at 1.4+/-0.22 min (grand mean+/-S.E.M. of geometric means). Slow-phase testosterone half-lives were highest for SHBG-bound testosterone (32 min) and total testosterone (27 min), with the former exceeding that of free testosterone (18 min), bioavailable testosterone (14 min), and albumin-bound testosterone (18 min; P<0.001). Collective outcomes indicate that i) the rapid phase of testosterone disappearance from point sampling in the circulation is not explained by testosterone dose; ii) SHBG-bound testosterone and total testosterone kinetics are prolonged; and iii) the half-lives of bioavailable, albumin-bound, and free testosterone are short. A frequent-sampling strategy comprising an experimental hormone clamp, estimation of hormone concentrations as bound and free moieties, mimicry of physiological pulses, and deconvolution analysis may have utility in estimating the in vivo kinetics of other hormones, substrates, and metabolites.

  15. Offsite radiological consequence analysis for the bounding flammable gas accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CARRO, C.A.

    2003-03-19

    The purpose of this analysis is to calculate the offsite radiological consequence of the bounding flammable gas accident. DOE-STD-3009-94, ''Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', requires the formal quantification of a limited subset of accidents representing a complete set of bounding conditions. The results of these analyses are then evaluated to determine if they challenge the DOE-STD-3009-94, Appendix A, ''Evaluation Guideline,'' of 25 rem total effective dose equivalent in order to identify and evaluate safety class structures, systems, and components. The bounding flammable gas accident is a detonation in a single-shell tank (SST). A detonation versus a deflagration was selected for analysis because the faster flame speed of a detonation can potentially result in a larger release of respirable material. As will be shown, the consequences of a detonation in either an SST or a double-shell tank (DST) are approximately equal. A detonation in an SST was selected as the bounding condition because the estimated respirable release masses are the same and because the doses per unit quantity of waste inhaled are generally greater for SSTs than for DSTs. Appendix A contains a DST analysis for comparison purposes.

  16. Theoretical derivation of laser-dressed atomic states by using a fractal space

    NASA Astrophysics Data System (ADS)

    Duchateau, Guillaume

    2018-05-01

    The derivation of approximate wave functions for an electron submitted to both a Coulomb field and a time-dependent laser electric field, the so-called Coulomb-Volkov (CV) state, is addressed. Although its derivation for continuum states does not pose any particular problem within the framework of the standard theory of quantum mechanics (QM), difficulties arise when considering an initially bound atomic state. Indeed, the natural way of translating the unperturbed momentum by the laser vector potential is no longer possible, since a bound state does not exhibit a plane wave form explicitly including a momentum. The use of a fractal space makes it possible to define a momentum for a bound wave function in a natural way. Within this framework, it is shown how the derivation of laser-dressed bound states can be performed. Based on a generalized eikonal approach, a new expression for the laser-dressed states is also derived, fully symmetric with respect to the continuum or bound nature of the initial unperturbed wave function. It includes an additional crossed term in the Volkov phase which was not obtained within the standard theory of quantum mechanics. The derivations within this fractal framework have highlighted other possible ways to derive approximate laser-dressed states in QM. After comparing the various wave functions obtained, an application to the prediction of the ionization probability of hydrogen targets by attosecond XUV pulses within the sudden approximation is provided. This approach allows predictions to be made in various regimes depending on the laser intensity, from non-resonant multiphoton absorption to tunneling and barrier-suppression ionization.

  17. Assessing the Effect of Stellar Companions from High-resolution Imaging of Kepler Objects of Interest

    NASA Astrophysics Data System (ADS)

    Hirsch, Lea A.; Ciardi, David R.; Howard, Andrew W.; Everett, Mark E.; Furlan, Elise; Saylors, Mindy; Horch, Elliott P.; Howell, Steve B.; Teske, Johanna; Marcy, Geoffrey W.

    2017-03-01

    We report on 176 close (<2″) stellar companions detected with high-resolution imaging near 170 hosts of Kepler Objects of Interest (KOIs). These Kepler targets were prioritized for imaging follow-up based on the presence of small planets, so most of the KOIs in these systems (176 out of 204) have nominal radii <6 R⊕. Each KOI in our sample was observed in at least two filters with adaptive optics, speckle imaging, lucky imaging, or the Hubble Space Telescope. Multi-filter photometry provides color information on the companions, allowing us to constrain their stellar properties and assess the probability that the companions are physically bound. We find that 60%-80% of companions within 1″ are bound, and the bound fraction is >90% for companions within 0.″5; the bound fraction decreases with increasing angular separation. This picture is consistent with simulations of the binary and background stellar populations in the Kepler field. We also reassess the planet radii in these systems, converting the observed differential magnitudes to a contamination in the Kepler bandpass and calculating the planet radius correction factor, X_R = R_p(true)/R_p(single). Under the assumption that planets in bound binaries are equally likely to orbit the primary or secondary, we find a mean radius correction factor for planets in stellar multiples of X_R = 1.65. If stellar multiplicity in the Kepler field is similar to the solar neighborhood, then nearly half of all Kepler planets may have radii underestimated by an average of 65%, unless vetted using high-resolution imaging or spectroscopy.

  18. A generalized partially linear mean-covariance regression model for longitudinal proportional data, with applications to the analysis of quality of life data from cancer clinical trials.

    PubMed

    Zheng, Xueying; Qin, Guoyou; Tu, Dongsheng

    2017-05-30

    Motivated by the analysis of quality of life data from a clinical trial on early breast cancer, we propose in this paper a generalized partially linear mean-covariance regression model for longitudinal proportional data, which are bounded in a closed interval. Cholesky decomposition of the covariance matrix for within-subject responses and generalized estimating equations are used to estimate unknown parameters and the nonlinear function in the model. Simulation studies are performed to evaluate the performance of the proposed estimation procedures. Our new model is also applied to analyze the data from the cancer clinical trial that motivated this research. In comparison with available models in the literature, the proposed model does not require specific parametric assumptions on the density function of the longitudinal responses and the probability function of the boundary values, and can capture dynamic changes of time or other variables of interest on both the mean and covariance of the correlated proportional responses. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Analysis of asteroid (216) Kleopatra using dynamical and structural constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirabayashi, Masatoshi; Scheeres, Daniel J., E-mail: masatoshi.hirabayashi@colorado.edu

    This paper evaluates a dynamically and structurally stable size for Asteroid (216) Kleopatra. In particular, we investigate two different failure modes: material shedding from the surface and structural failure of the internal body. We construct zero-velocity curves in the vicinity of this asteroid to determine surface shedding, while we utilize a limit analysis to calculate the lower and upper bounds of structural failure under the zero-cohesion assumption. Surface shedding does not occur at the current spin period (5.385 hr) and cannot directly initiate the formation of the satellites. On the other hand, this body may be close to structural failure; in particular, the neck may be situated near a plastic state. In addition, the neck's sensitivity to structural failure changes as the body size varies. We conclude that plastic deformation has probably occurred around the neck part in the past. If the true size of this body is established through additional measurements, this method will provide strong constraints on the current friction angle for the body.

  20. The voluntary-threat approach to control nonpoint source pollution under uncertainty.

    PubMed

    Li, Youping

    2013-11-15

    This paper extends the voluntary-threat approach of Segerson and Wu (2006) to the case where the ambient level of nonpoint source pollution is stochastic. It is shown that when the random component is bounded from above, fine-tuning the cutoff value of the tax payments avoids the actual imposition of the tax, while the threat of such payments retains the necessary incentive for the polluters to engage in abatement at the optimal level. If the random component is not bounded, the imposition of the tax cannot be completely avoided, but the probability can be reduced by setting a higher cutoff value. It is also noted that the regulator has additional flexibility in randomizing the tax imposition, but the randomization process has to be credible. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Signal processing of white-light interferometric low-finesse fiber-optic Fabry-Perot sensors.

    PubMed

    Ma, Cheng; Wang, Anbo

    2013-01-10

    Signal processing for low-finesse fiber-optic Fabry-Perot sensors based on white-light interferometry is investigated. The problem is shown to be analogous to parameter estimation of a noisy, real, discrete harmonic of finite length. The Cramér-Rao bounds for the estimators are given, and three algorithms are evaluated and proven to approach the bounds. A long-standing problem with these types of sensors is the unpredictable jumps in the phase estimation. Emphasis is placed on the property and mechanism of the "total phase" estimator in reducing the estimation error, and a varying phase term in the total phase is identified as being responsible for the unwanted demodulation jumps. The theories are verified by simulation and experiment. A solution to reducing the probability of jump is demonstrated. © 2013 Optical Society of America
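
    For context, the Cramér-Rao bounds for a single real sinusoid in white Gaussian noise have well-known closed forms (the large-N textbook expressions); the snippet below evaluates them for illustrative values of record length and signal-to-noise ratio. It is a generic sketch, not the specific low-finesse interferometry analysis of the paper.

      import math

      def crb_sinusoid(N, A, sigma2):
          """Large-N Cramér-Rao bounds for x[n] = A*cos(w0*n + phi) + w[n], w ~ N(0, sigma2)."""
          eta = A**2 / (2.0 * sigma2)                        # signal-to-noise ratio
          var_w0 = 12.0 / (eta * N * (N**2 - 1))             # bound on frequency, (rad/sample)^2
          var_phi = 2.0 * (2 * N - 1) / (eta * N * (N + 1))  # bound on phase, rad^2
          return var_w0, var_phi

      N, A, sigma2 = 1024, 1.0, 0.01                         # illustrative record length and SNR
      var_w0, var_phi = crb_sinusoid(N, A, sigma2)
      print(f"std(w0) >= {math.sqrt(var_w0):.2e} rad/sample, std(phi) >= {math.sqrt(var_phi):.2e} rad")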

  2. Adaptive Neural Output Feedback Control for Nonstrict-Feedback Stochastic Nonlinear Systems With Unknown Backlash-Like Hysteresis and Unknown Control Directions.

    PubMed

    Yu, Zhaoxu; Li, Shugang; Yu, Zhaosheng; Li, Fangfei

    2018-04-01

    This paper investigates the problem of output feedback adaptive stabilization for a class of nonstrict-feedback stochastic nonlinear systems with both unknown backlash-like hysteresis and unknown control directions. A new linear state transformation is applied to the original system, and then control design for the new system becomes feasible. By combining the neural network's (NN's) parameterization, the variable separation technique, and the Nussbaum gain function method, an input-driven observer-based adaptive NN control scheme, which involves only one parameter to be updated, is developed for such systems. All closed-loop signals are bounded in probability, and the error signals remain semiglobally bounded in the fourth moment (or mean square). Finally, the effectiveness and applicability of the proposed control design are verified by two simulation examples.

  3. Two-dimensional description of surface-bounded exospheres with application to the migration of water molecules on the Moon

    NASA Astrophysics Data System (ADS)

    Schorghofer, Norbert

    2015-05-01

    On the Moon, water molecules and other volatiles are thought to migrate along ballistic trajectories. Here, this migration process is described in terms of a two-dimensional partial differential equation for the surface concentration, based on the probability distribution of thermal ballistic hops. A random-walk model, a corresponding diffusion coefficient, and a continuum description are provided. In other words, a surface-bounded exosphere is described purely in terms of quantities on the surface, which can provide computational and conceptual advantages. The derived continuum equation can be used to calculate the steady-state distribution of the surface concentration of volatile water molecules. An analytic steady-state solution is obtained for an equatorial ring; it reveals the width and mass of the pileup of molecules at the morning terminator.

  4. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE PAGES

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    2017-08-19

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.

  5. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.

  6. Lipase of Aspergillus niger NCIM 1207: A Potential Biocatalyst for Synthesis of Isoamyl Acetate.

    PubMed

    Mhetras, Nutan; Patil, Sonal; Gokhale, Digambar

    2010-10-01

    Commercial lipase preparations and mycelium-bound lipase from Aspergillus niger NCIM 1207 were used for esterification of acetic acid with isoamyl alcohol to obtain isoamyl acetate. The esterification reaction was carried out at 30°C in n-hexane with shaking at 120 rpm. Initial reaction rates, conversion efficiency, and isoamyl acetate concentration obtained using Novozyme 435 were the highest. Mycelium-bound lipase of A. niger NCIM 1207 produced maximal isoamyl acetate formation at an alcohol/acid ratio of 1.6. Acetic acid concentrations above those required for the critical ratio (i.e., alcohol/acid ratios lower than 1.3), as well as ratios higher than 1.6, resulted in decreased yields of isoamyl acetate, probably owing to lowering of the micro-aqueous environmental pH around the enzyme, leading to inhibition of enzyme activity. Mycelium-bound A. niger lipase produced 80 g/l of isoamyl acetate within 96 h even though a much smaller amount of enzyme activity was used for esterification. The presence of sodium sulphate during the esterification reaction at higher substrate concentration resulted in increased conversion efficiency when mycelium-bound enzyme preparations of A. niger NCIM 1207 were used. This could be due to removal of excess water released during the esterification reaction by sodium sulphate. A high ester concentration (286.5 g/l) and conversion (73.5%) were obtained within 24 h using Novozyme 435 under these conditions.

  7. Determination of particle-bound polycyclic aromatic hydrocarbons emitted from co-pelletization combustion of lignite and rubber wood sawdust

    NASA Astrophysics Data System (ADS)

    Kan, R.; Kaosol, T.; Tekasakul, P.; Tekasakul, S.

    2017-09-01

    Determination of particle-bound Polycyclic Aromatic Hydrocarbons (PAHs) emitted from co-pelletization combustion of lignite and rubber wood sawdust in a horizontal tube furnace is investigated using High Performance Liquid Chromatography with coupled Diode Array and Fluorescence Detection (HPLC-DAD/FLD). The particle-bound PAHs, in terms of both mass concentration and toxicity degree, are discussed for particulate matter in the size range 0.07-11 μm. In the present study, the particle-bound PAHs are most abundant in the fine particles. More than 70% of the toxicity degree of the PAHs falls into PM1.1, while more than 80% of the mass concentration of the PAHs falls into PM2.5. Increasing the lignite fraction in the co-pelletization increases the concentration of 4-6 aromatic ring (high molecular weight) PAHs. The high contribution of these 4-6 aromatic ring, high molecular weight PAHs in the fine particles deserves particular attention because of their high carcinogenic potential for humans. Furthermore, the rubber wood sawdust pellets emit a high mass concentration of PAHs, whereas the lignite pellets emit a high toxicity degree of PAHs. Co-pelletizing rubber wood sawdust with lignite (50% lignite pellets) significantly reduces the toxicity degree of the PAHs, by about 70%.

  8. Unambiguous quantum-state filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Sasaki, Masahide; CREST, Japan Science and Technology Corporation, Tokyo,

    2003-07-01

    In this paper, we consider a generalized measurement where one particular quantum signal is unambiguously extracted from a set of noncommutative quantum signals and the other signals are filtered out. Simple expressions for the maximum detection probability and its positive operator valued measure are derived. We apply such unambiguous quantum state filtering to evaluation of the sensing of decoherence channels. The bounds of the precision limit for a given quantum state of probes and possible device implementations are discussed.

  9. Anomalous Diffusion Approximation of Risk Processes in Operational Risk of Non-Financial Corporations

    NASA Astrophysics Data System (ADS)

    Magdziarz, M.; Mista, P.; Weron, A.

    2007-05-01

    We introduce an approximation of risk processes by anomalous diffusion. In the paper we consider the case where the waiting times between successive occurrences of the claims belong to the domain of attraction of an α-stable distribution. The relationship between the obtained approximation and the celebrated fractional diffusion equation is emphasised. We also establish upper bounds for the ruin probability in the considered model and give some numerical examples.

  10. Estimating the number of terrestrial organisms on the moon.

    NASA Technical Reports Server (NTRS)

    Dillon, R. T.; Gavin, W. R.; Roark, A. L.; Trauth, C. A., Jr.

    1973-01-01

    Methods used to obtain estimates of the biological loadings on moon-bound spacecraft prior to launch are reviewed, along with the mathematical models used to calculate the microorganism density on the lunar surface (such as it results from contamination deposited by manned and unmanned flights) and the probability of lunar soil sample contamination. Some of the results obtained by the use of a lunar inventory system based on these models are presented.

  11. Exploring the sensitivity of current and future experiments to θ⊙

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati

    2003-06-01

    The first results from the KamLAND experiment in conjunction with the global solar neutrino data have demonstrated the striking ability to constrain Δm²⊙ (Δm²₂₁) very precisely. However, the allowed range of θ⊙ (θ₁₂) did not change much with the inclusion of the KamLAND results. In this paper we probe whether future data from KamLAND can increase the accuracy of the allowed range in θ⊙ and conclude that even after 3 kton yr of statistics and with the most optimistic error estimates, KamLAND may find it hard to significantly improve the bounds on the mixing angle obtained from the solar neutrino data. We discuss the θ₁₂ sensitivity of the survival probabilities in matter (vacuum) as relevant for the solar (KamLAND) experiments. We find that the presence of matter effects in the survival probabilities for ⁸B neutrinos gives the solar neutrino experiments SK and SNO an edge over KamLAND, as far as θ₁₂ sensitivity is concerned, particularly near maximal mixing. Among solar neutrino experiments we identify SNO as the most promising candidate for constraining θ₁₂ and make a projected sensitivity test for the mixing angle by reducing the error in the neutral current measurement at SNO. Finally, we argue that the most accurate bounds on θ₁₂ can be achieved in a reactor experiment, if the corresponding baseline and energy can be tuned to a minimum in the survival probability. We propose a new reactor experiment that can determine the value of tan²θ₁₂ to within 14%. We also discuss the future Borexino and LowNu experiments.
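
    The point about tuning a reactor baseline to a survival-probability minimum can be seen from the standard two-flavor vacuum formula. The sketch below uses representative LMA-like parameter values purely for illustration; it is not a reproduction of the paper's sensitivity analysis, which also includes matter effects for solar neutrinos.

      import numpy as np

      def p_ee(L_km, E_MeV, sin2_2theta, dm2_eV2):
          """Two-flavor vacuum survival probability for reactor anti-nu_e."""
          return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km * 1e3 / E_MeV) ** 2

      dm2 = 7e-5            # representative Δm²₂₁ in eV² (illustrative)
      E = 4.0               # anti-neutrino energy in MeV (illustrative)
      for tan2 in (0.35, 0.45, 0.55):           # spread of tan²θ12 values
          s22 = 4 * tan2 / (1 + tan2) ** 2      # sin²2θ from tan²θ
          # Baseline tuned near the first survival-probability minimum:
          L_min = np.pi * E / (2 * 1.27 * dm2 * 1e3)   # km
          print(f"tan2_theta12={tan2:.2f}: L_min = {L_min:.0f} km, "
                f"P_ee(L_min) = {p_ee(L_min, E, s22, dm2):.3f}")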

  12. Development and Transition of the Radiation, Interplanetary Shocks, and Coronal Sources (RISCS) Toolset

    NASA Technical Reports Server (NTRS)

    Spann, James F.; Zank, G.

    2014-01-01

    We outline a plan to develop and transition a physics-based predictive toolset called the Radiation, Interplanetary Shocks, and Coronal Sources (RISCS) toolset to describe the interplanetary energetic particle and radiation environment throughout the inner heliosphere, including at the Earth. To forecast and "nowcast" the radiation environment requires the fusing of three components: 1) the ability to provide probabilities for incipient solar activity; 2) the use of these probabilities and daily coronal and solar wind observations to model the 3D spatial and temporal heliosphere, including magnetic field structure and transients, within 10 Astronomical Units; and 3) the ability to model the acceleration and transport of energetic particles based on current and anticipated coronal and heliospheric conditions. We describe how to address 1)-3) based on our existing, well developed, and validated codes and models. The goal of the RISCS toolset is to provide an operational forecast and "nowcast" capability that will predict a) solar energetic particle (SEP) intensities; b) spectra for protons and heavy ions; c) maximum energies and their duration; d) SEP composition; e) cosmic ray intensities; and f) plasma parameters, including shock arrival times, strength, and obliquity at any given heliospheric location and time. The toolset would have a 72-hour predictive capability, with associated probabilistic bounds, that would be updated hourly thereafter to improve the predicted event(s) and reduce the associated probability bounds. The RISCS toolset would be highly adaptable and portable, capable of running on a variety of platforms to accommodate various operational needs and requirements. The described transition plan is based on a well-established approach developed in the Earth Science discipline that ensures that the customer has a tool that meets their needs.

  13. Alternate methods for FAAT S-curve generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, A.M.

    The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES and an unsatisfactory work around solution is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and have found that significant errors are introduced by this work around. These errors are at least several dB-W/cm² at all confidence levels, but they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculational methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curves offsets from the mode difference of stress and strength.
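
    The core stress-versus-strength computation behind an S-curve can be written compactly when both variables are lognormal, since the probability of effect reduces to a normal CDF of the difference of log-medians. The parameter values below are invented for illustration and are not FAAT or ARES data.

      import numpy as np
      from statistics import NormalDist

      def p_effect(field_dbw_cm2, strength_median_db, sigma_stress_db, sigma_strength_db):
          """P(stress > strength) when stress and strength expressed in dB are normal,
          i.e. the underlying quantities are lognormal; the stress median tracks the incident field."""
          sigma = np.hypot(sigma_stress_db, sigma_strength_db)
          return NormalDist().cdf((field_dbw_cm2 - strength_median_db) / sigma)

      # Hypothetical system: strength median 10 dB-W/cm^2, random spreads 3 and 4 dB.
      for field in np.arange(0.0, 21.0, 5.0):
          print(f"incident field {field:4.1f} dB-W/cm^2 -> P(effect) = "
                f"{p_effect(field, 10.0, 3.0, 4.0):.2f}")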

  14. Robust Bounded Influence Tests in Linear Models

    DTIC Science & Technology

    1988-11-01

    Markatou, Marianthi (The University of Iowa); Hettmansperger, Thomas P. (The Pennsylvania State University). November 1988. Report: "Robust Bounded Influence Tests in Linear Models." Cited reference: sensitivity analysis and bounded influence estimation, in: Evaluation of Econometric Models, J. Kmenta and J. B. Ramsey (eds.), Academic Press, New York.

  15. Curvature bound from gravitational catalysis

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Martini, Riccardo

    2018-04-01

    We determine bounds on the curvature of local patches of spacetime from the requirement of intact long-range chiral symmetry. The bounds arise from a scale-dependent analysis of gravitational catalysis and its influence on the effective potential for the chiral order parameter, as induced by fermionic fluctuations on a curved spacetime with local hyperbolic properties. The bound is expressed in terms of the local curvature scalar measured in units of a gauge-invariant coarse-graining scale. We argue that any effective field theory of quantum gravity obeying this curvature bound is safe from chiral symmetry breaking through gravitational catalysis and thus compatible with the simultaneous existence of chiral fermions in the low-energy spectrum. With increasing number of dimensions, the curvature bound in terms of the hyperbolic scale parameter becomes stronger. Applying the curvature bound to the asymptotic safety scenario for quantum gravity in four spacetime dimensions translates into bounds on the matter content of particle physics models.

  16. NFI Transcription Factors Interact with FOXA1 to Regulate Prostate-Specific Gene Expression

    PubMed Central

    Elliott, Amicia D.; DeGraff, David J.; Anderson, Philip D.; Anumanthan, Govindaraj; Yamashita, Hironobu; Sun, Qian; Friedman, David B.; Hachey, David L.; Yu, Xiuping; Sheehan, Jonathan H.; Ahn, Jung-Mo; Raj, Ganesh V.; Piston, David W.; Gronostajski, Richard M.; Matusik, Robert J.

    2014-01-01

    Androgen receptor (AR) action throughout prostate development and in maintenance of the prostatic epithelium is partly controlled by interactions between AR and forkhead box (FOX) transcription factors, particularly FOXA1. We sought to identity additional FOXA1 binding partners that may mediate prostate-specific gene expression. Here we identify the nuclear factor I (NFI) family of transcription factors as novel FOXA1 binding proteins. All four family members (NFIA, NFIB, NFIC, and NFIX) can interact with FOXA1, and knockdown studies in androgen-dependent LNCaP cells determined that modulating expression of NFI family members results in changes in AR target gene expression. This effect is probably mediated by binding of NFI family members to AR target gene promoters, because chromatin immunoprecipitation (ChIP) studies found that NFIB bound to the prostate-specific antigen enhancer. Förster resonance energy transfer studies revealed that FOXA1 is capable of bringing AR and NFIX into proximity, indicating that FOXA1 facilitates the AR and NFI interaction by bridging the complex. To determine the extent to which NFI family members regulate AR/FOXA1 target genes, motif analysis of publicly available data for ChIP followed by sequencing was undertaken. This analysis revealed that 34.4% of peaks bound by AR and FOXA1 contain NFI binding sites. Validation of 8 of these peaks by ChIP revealed that NFI family members can bind 6 of these predicted genomic elements, and 4 of the 8 associated genes undergo gene expression changes as a result of individual NFI knockdown. These observations suggest that NFI regulation of FOXA1/AR action is a frequent event, with individual family members playing distinct roles in AR target gene expression. PMID:24801505

  17. Out of Bounds: Innovation and Change in Law Enforcement Intelligence Analysis

    DTIC Science & Technology

    2006-03-01

    Community capabilities for policy-level and operational consumers. Out of Bounds: Innovation and Change in Law Enforcement Intelligence Analysis ... What Works: Relationships and Sharing ... will steer or fund these individuals into pursuing its goals. The growing need to understand the nature of low-level crime in relationship to terrorist

  18. Spinodal Decomposition for the Cahn-Hilliard Equation in Higher Dimensions.Part I: Probability and Wavelength Estimate

    NASA Astrophysics Data System (ADS)

    Maier-Paape, Stanislaus; Wanner, Thomas

    This paper is the first in a series of two papers addressing the phenomenon of spinodal decomposition for the Cahn-Hilliard equation ∂u/∂t = -Δ(ε²Δu + f(u)) on a bounded domain Ω with sufficiently smooth boundary, where f is cubic-like, for example f(u) = u - u³. We will present the main ideas of our approach and explain in what way our method differs from known results in one space dimension due to Grant [26]. Furthermore, we derive certain probability and wavelength estimates. The probability estimate is needed to understand why, in a neighborhood of a homogeneous equilibrium u0 ≡ μ of the Cahn-Hilliard equation with mass μ in the spinodal region, a strongly unstable manifold has dominating effects. This is demonstrated for the linearized equation, but will be essential for the nonlinear setting in the second paper [37] as well. Moreover, we introduce the notion of a characteristic wavelength for the strongly unstable directions.

  19. BOUNDS ON LEPTON FLAVOR CHANGING CURRENTS AND THE SOLAR NEUTRINO PUZZLE:. Bounds on Lepton Flavor Changing Currents

    NASA Astrophysics Data System (ADS)

    degl'Innocenti, Scilla; Ricci, Barbara

    We present a phenomenological analysis of a lepton flavor changing current, considering the case of interactions among leptons which change the neutrino flavor and are diagonal in the charged lepton sector. In the case of νe↔νµ transition, we derive a bound on the vector coupling constant GV≤0.16 GF from experimental data on νµ-e scattering. For a transition νe↔νx, from (anti) νe-e scattering experiments and from the analysis of advanced stellar evolutionary phases, we find GV≤0.55 GF. We discuss the compatibility of these data with a possible explanation of the solar neutrino puzzle. We also analyze how the present bounds can be improved in future long baseline neutrino experiments and atmospheric neutrino detectors.

  20. Aspects of neutrino oscillation in alternative gravity theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Sumanta, E-mail: sumantac.physics@gmail.com

    2015-10-01

    Neutrino spin and flavour oscillation in curved spacetime have been studied for the most general static spherically symmetric configuration. Having exploited the spherical symmetry, we have confined ourselves to the equatorial plane in order to determine the spin and flavour oscillation frequency in this general set-up. Using the symmetry properties we have derived the spin oscillation frequency for a neutrino moving along a geodesic or in a circular orbit. Starting from the expression of the neutrino spin oscillation frequency we have shown that even in this general context, in the high-energy limit the spin oscillation frequency for a neutrino moving along a circular orbit vanishes. We have verified previous results along this line by transforming to Schwarzschild coordinates in the appropriate limit. This finally lends itself to the probability of neutrino helicity flip, which turns out to be non-zero. For neutrino flavour oscillation we have derived general results for the oscillation phase, which subsequently have been applied to three different gravity theories. One of them appears as a low-energy approximation to string theory, where we have an additional field, namely a dilaton field coupled to the Maxwell field tensor. This yields a realization of the Reissner-Nordström solution in string theory at low energy. The next corresponds to a generalization of the Schwarzschild solution by introducing quadratic curvature terms of all possible forms to the Einstein-Hilbert action. Finally, we have also discussed regular black hole solutions. In all these cases the flavour oscillation probabilities can be determined for solar neutrinos and thus can be used to put bounds on the parameters of these gravity theories. For the spin oscillation probability, we have considered two cases: a Gauss-Bonnet term added to the Einstein-Hilbert action and the f(R) gravity theory. In both these cases we could impose bounds on the parameters which are consistent with previous considerations. In a nutshell, in this work we have presented both spin and flavour oscillation frequencies of a neutrino in the most general static spherically symmetric spacetime, encompassing a vast class of solutions, which when applied to three such instances in alternative theories for flavour oscillation and two alternative theories for spin oscillation put bounds on the parameters of these theories. Implications are also discussed.

  1. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information is available for only one of the two objects, either some default shape has been assumed or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The usual methods of finding a maximum P (sub c) do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, given an assumption of no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70 meter radius. The maximum (degenerate ellipse) P (sub c) is about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst.
Some improvement may be made with respect to this problem by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
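    The degenerate-ellipse figures quoted above follow from a one-dimensional Gaussian along the miss vector, maximized over its standard deviation, which gives P (sub c) max ≈ 2R / (d√(2πe)) for at-risk radius R and miss distance d. A minimal sketch reproducing those numbers (illustrative only, not the improved method proposed in the paper):

```python
# Sketch of the worst-case ("degenerate ellipse") collision probability quoted
# above: a 1-D Gaussian along the miss vector, maximized over its standard
# deviation, gives Pc_max ~= 2*R / (d * sqrt(2*pi*e)) for a small at-risk
# radius R and miss distance d. Illustrative only.
import math

def pc_max_degenerate(miss_distance_m, at_risk_radius_m):
    """Upper-bound Pc when nothing is assumed about the combined covariance."""
    return 2.0 * at_risk_radius_m / (miss_distance_m * math.sqrt(2.0 * math.pi * math.e))

print(pc_max_degenerate(25_000.0, 70.0))    # ~0.00136, as quoted for 25 km
print(pc_max_degenerate(40_000.0, 70.0))    # ~0.00085
print(pc_max_degenerate(340_000.0, 70.0))   # ~0.0001, the ISS maneuver threshold
```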

  2. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability given the number of active and inactive days distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
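    A minimal sketch of the posterior-predictive calculation described above, using SciPy's beta-binomial distribution. The population-level Beta parameters would be fit by maximum likelihood on the survey data; the values below are made up for illustration:

```python
# Minimal sketch of the posterior-predictive adherence estimate described
# above, using SciPy's beta-binomial distribution. The population-level
# Beta(alpha, beta) parameters would be fit by maximum likelihood on the
# survey data; the values here are made up.
from scipy.stats import betabinom

alpha, beta = 2.0, 1.5        # hypothetical population-level Beta parameters
active, inactive = 4, 2       # a child observed active on 4 of 6 assessed days

# Posterior-predictive number of active days in a 7-day week:
post = betabinom(n=7, a=alpha + active, b=beta + inactive)

p_meets_7_of_7 = post.pmf(7)  # probability of being active on all 7 days
print(round(p_meets_7_of_7, 3))
```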

  3. Homogenization-based interval analysis for structural-acoustic problem involving periodical composites and multi-scale uncertain-but-bounded parameters.

    PubMed

    Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong

    2017-04-01

    This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
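    The first-order Taylor interval step referenced above has a simple generic form: the response is expanded about the interval midpoints and the half-widths are propagated through the absolute sensitivities. A minimal sketch under that assumption (the toy response function below stands in for the homogenized finite element solve and is entirely hypothetical):

```python
# Generic sketch of first-order Taylor interval propagation, the ingredient
# the HIFEM combines with a homogenized finite element model. The function
# `response` stands in for the (expensive) structural-acoustic solve; here it
# is a hypothetical toy function of two "material" parameters.
import numpy as np

def taylor_interval(response, x_center, x_halfwidth, h=1e-6):
    """Bounds of response(x) for x_i in [center_i - halfwidth_i, center_i + halfwidth_i]."""
    x_center = np.asarray(x_center, dtype=float)
    y0 = response(x_center)
    spread = 0.0
    for i in range(x_center.size):
        dx = np.zeros_like(x_center)
        dx[i] = h
        sens = (response(x_center + dx) - response(x_center - dx)) / (2.0 * h)
        spread += abs(sens) * x_halfwidth[i]
    return y0 - spread, y0 + spread

def demo_response(x):
    return 100.0 / (x[0] * x[1]) + 3.0 * x[0]   # toy stand-in for a frequency response

print(taylor_interval(demo_response, x_center=[2.0, 5.0], x_halfwidth=[0.1, 0.2]))
```

    The subinterval technique mentioned in the abstract would simply split each input interval into smaller pieces, apply the same expansion on each piece, and take the union of the resulting bounds.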

  4. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    NASA Astrophysics Data System (ADS)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule for optimizing a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem to minimize the criterion of total weighted quadratic completion time. With a probability hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, where a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
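    A brief sketch of the objective in question and of one common reading of the WSPT rule (jobs ordered by total processing time divided by weight); this is illustrative only, with random job data, and is not the article's branch-and-bound or differential evolution code:

```python
# Sketch of the total weighted quadratic completion time of a permutation flow
# shop and a WSPT-style ordering (jobs sorted by total processing time over
# weight). Illustrative only; job data are random.
import numpy as np

def weighted_quadratic_completion(proc, order, weights):
    """proc[m][j] = processing time of job j on machine m; order = job permutation."""
    n_machines = proc.shape[0]
    C = np.zeros((n_machines, len(order)))
    for pos, job in enumerate(order):
        for m in range(n_machines):
            prev_machine = C[m - 1, pos] if m > 0 else 0.0
            prev_job = C[m, pos - 1] if pos > 0 else 0.0
            C[m, pos] = max(prev_machine, prev_job) + proc[m, job]
    return sum(weights[job] * C[-1, pos] ** 2 for pos, job in enumerate(order))

rng = np.random.default_rng(0)
proc = rng.integers(1, 10, size=(3, 5)).astype(float)   # 3 machines, 5 jobs
w = rng.integers(1, 5, size=5).astype(float)

wspt_order = sorted(range(5), key=lambda j: proc[:, j].sum() / w[j])
print(wspt_order, weighted_quadratic_completion(proc, wspt_order, w))
```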

  5. A coarse-to-fine approach for pericardial effusion localization and segmentation in chest CT scans

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Chellamuthu, Karthik; Lu, Le; Bagheri, Mohammadhadi; Summers, Ronald M.

    2018-02-01

    Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures. This inhibits traditional automated segmentation methods from achieving high accuracies. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. For the first step, we localize the pericardial area from the entire CT volume, providing a reliable bounding box for the more refined segmentation step. A coarse-scaled holistically-nested convolutional networks (HNN) model is trained on entire CT volume. The resulting HNN per-pixel probability maps are then threshold to produce a bounding box covering the pericardial area. For the second step, a fine-scaled HNN model is trained only on the bounding box region for effusion segmentation to reduce the background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans of patient (1206 images) with pericardial effusion. The segmentation accuracy of our two-stage method, measured by Dice Similarity Coefficient (DSC), is 75.59+/-12.04%, which is significantly better than the segmentation accuracy (62.74+/-15.20%) of only using the coarse-scaled HNN model.
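    A minimal sketch of the first (localization) stage described above: threshold the per-pixel probability map and take the bounding box of the surviving voxels, with a small margin, as the region passed to the fine-scaled model. The HNN itself is not reproduced; the probability map below is random placeholder data:

```python
# Minimal sketch of the coarse localization stage: threshold a per-voxel
# probability map and take the bounding box of the surviving voxels, plus a
# margin, as the region handed to the fine-scaled model. The probability map
# here is a random stand-in for the coarse HNN output.
import numpy as np

def prob_map_to_bbox(prob_map, threshold=0.5, margin=5):
    """Return (zmin, zmax, ymin, ymax, xmin, xmax) covering prob_map > threshold."""
    idx = np.argwhere(prob_map > threshold)
    if idx.size == 0:
        return None
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, prob_map.shape)
    return tuple(np.stack([lo, hi], axis=1).ravel())

prob_map = np.random.default_rng(1).random((40, 64, 64))   # stand-in for HNN output
print(prob_map_to_bbox(prob_map, threshold=0.995))
```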

  6. Trellis phase codes for power-bandwith efficient satellite communications

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.

    1981-01-01

    Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.

  7. DFT Study on the Complexation of Bambus[6]uril with the Perchlorate and Tetrafluoroborate Anions.

    PubMed

    Toman, Petr; Makrlík, Emanuel; Vaňura, Petr

    2011-12-01

    By using quantum mechanical DFT calculations, the most probable structures of the bambus[6]uril.ClO4- and bambus[6]uril.BF4- anionic complex species were derived. In these two complexes having C3 symmetry, each of the considered anions, included in the macrocyclic cavity, is bound by 12 weak hydrogen bonds between methine hydrogen atoms on the convex face of glycoluril units and the respective anion.

  8. A Lower Bound to the Probability of Choosing the Optimal Passing Score for a Mastery Test When There is an External Criterion [and] Estimating the Parameters of the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    A mastery test is frequently described as follows: an examinee responds to n dichotomously scored test items. Depending upon the examinee's observed (number correct) score, a mastery decision is made and the examinee is advanced to the next level of instruction. Otherwise, a nonmastery decision is made and the examinee is given remedial work. This…

  9. Gravitational Instabilities in Disks: From Polytropes to Protoplanets?

    NASA Astrophysics Data System (ADS)

    Durisen, R. H.

    2004-12-01

    Gravitational instabilities (GI's) probably occur in disks around young stellar objects during their early embedded phase. This paper reviews what is known about the nonlinear consequences of GI's for planet formation and disk evolution. All researchers agree that, for sufficiently fast cooling, disks fragment into dense clumps or arclike structures, but there is no universal agreement about whether fast enough cooling to cause fragmentation ever occurs and, if it does, whether any clumps that form will become bound protoplanets.

  10. Exact Solution of the Markov Propagator for the Voter Model on the Complete Graph

    DTIC Science & Technology

    2014-07-01

    distribution of the random walk. This process can also be applied to other models, incomplete graphs, or to multiple dimensions. An advantage of this...since any multiple of an eigenvector remains an eigenvector. Without any loss, let b_k = 1. Now we can ascertain the explicit solution for b_j when k < j...this bound is valid for all initial probability distributions. However, without detailed information about the eigenvectors, we cannot extract more

  11. Carbohydrate digestion in Lutzomyia longipalpis' larvae (Diptera - Psychodidae).

    PubMed

    Vale, Vladimir F; Moreira, Bruno H; Moraes, Caroline S; Pereira, Marcos H; Genta, Fernando A; Gontijo, Nelder F

    2012-10-01

    Lutzomyia longipalpis is the principal species of phlebotomine incriminated as a vector of Leishmania infantum, the etiological agent of visceral leishmaniasis in the Americas. Despite its importance as a vector, almost nothing related to its larval biology, especially its digestive system, has been published. The objective of the present study was to obtain an overview of carbohydrate digestion by the larvae. Taking into account that phlebotomine larvae live in soil rich in decaying materials and microorganisms, we searched principally for enzymes capable of hydrolyzing carbohydrates present in this kind of substrate. The principal carbohydrases encountered in the midgut were partially characterized. One of them is an α-amylase present in the anterior midgut. It is probably involved in the digestion of glycogen, the reserve carbohydrate of fungi. Two other especially active enzymes were present in the posterior midgut, a membrane-bound α-glucosidase and a membrane-bound trehalase. The first completes the digestion of glycogen and the other probably acts in the digestion of trehalose, a carbohydrate usually encountered in microorganisms undergoing hydric stress. In a screening done with p-nitrophenyl-derived substrates, other less active enzymes were also observed in the midgut. A general view of carbohydrate digestion in L. longipalpis was presented. Our results indicate that soil microorganisms appear to be the main source of nutrients for the larvae. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. VizieR Online Data Catalog: Close encounters to the Sun in Gaia DR1 (Bailer-Jones, 2018)

    NASA Astrophysics Data System (ADS)

    Bailer-Jones, C. A. L.

    2017-08-01

    The table gives the perihelion (closest approach) parameters of stars in the Gaia-DR1 TGAS catalogue which are found by numerical integration through a Galactic potential to approach within 10pc of the Sun. These parameters are the time (relative to the Gaia measurement epoch), heliocentric distance, and heliocentric speed of the star at perihelion. Uncertainties in these have been calculated by a Monte Carlo sampling of the data to give the posterior probability density function (PDF) over the parameters. For each parameter three summary values of this PDF are reported: the median, the 5% lower bound, the 95% upper bound. The latter two give a 90% confidence interval. The table also reports the probability that each star approaches the Sun within 0.5, 1.0, and 2.0pc, as well as the measured parallax, proper motion, and radial velocity (plus uncertainties) of the stars. Table 3 in the article lists the first 20 lines of this data table (stars with median perihelion distances below 2pc). Some stars are duplicated in this table, i.e. there are rows with the same ID, but different data. Stars with problematic data have not been removed, so some encounters are not reliable. Most IDs are Tycho, but in a few cases they are Hipparcos. (1 data file).
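    A minimal sketch of how the reported summaries (median, 5% and 95% bounds, and encounter probabilities) can be computed from Monte Carlo samples of a perihelion parameter. The samples below are synthetic; in the catalogue they come from resampling the TGAS astrometry and radial velocities:

```python
# Sketch of the summary statistics reported in the catalogue: median, 5% and
# 95% bounds of a perihelion parameter, plus encounter probabilities, computed
# from Monte Carlo samples. The samples below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)
d_perihelion_pc = rng.lognormal(mean=0.3, sigma=0.4, size=10_000)  # fake posterior samples

median, lo5, hi95 = np.percentile(d_perihelion_pc, [50, 5, 95])
p_within = {r: float(np.mean(d_perihelion_pc < r)) for r in (0.5, 1.0, 2.0)}

print(f"d_ph = {median:.2f} pc (90% interval {lo5:.2f}-{hi95:.2f} pc), "
      f"P(<0.5 pc)={p_within[0.5]:.3f}, P(<1 pc)={p_within[1.0]:.3f}, P(<2 pc)={p_within[2.0]:.3f}")
```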

  13. A search theory model of patch-to-patch forager movement with application to pollinator-mediated gene flow.

    PubMed

    Hoyle, Martin; Cresswell, James E

    2007-09-07

    We present a spatially implicit analytical model of forager movement, designed to address a simple scenario common in nature. We assume minimal depression of patch resources, and discrete foraging bouts, during which foragers fill to capacity. The model is particularly suitable for foragers that search systematically, foragers that deplete resources in a patch only incrementally, and for sit-and-wait foragers, where harvesting does not affect the rate of arrival of forage. Drawing on the theory of job search from microeconomics, we estimate the expected number of patches visited as a function of just two variables: the coefficient of variation of the rate of energy gain among patches, and the ratio of the expected time exploiting a randomly chosen patch and the expected time travelling between patches. We then consider the forager as a pollinator and apply our model to estimate gene flow. Under model assumptions, an upper bound for animal-mediated gene flow between natural plant populations is approximately proportional to the probability that the animal rejects a plant population. In addition, an upper bound for animal-mediated gene flow in any animal-pollinated agricultural crop from a genetically modified (GM) to a non-GM field is approximately proportional to the proportion of fields that are GM and the probability that the animal rejects a field.

  14. Identification of aryl hydrocarbon receptor binding targets in mouse hepatic tissue treated with 2,3,7,8-tetrachlorodibenzo-p-dioxin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Raymond; Celius, Trine; Forgacs, Agnes L.

    2011-11-15

    Genome-wide, promoter-focused ChIP-chip analysis of hepatic aryl hydrocarbon receptor (AHR) binding sites was conducted in 8-week-old female C57BL/6 treated with 30 μg/kg body weight 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) for 2 h and 24 h. These studies identified 1642 and 508 AHR-bound regions at 2 h and 24 h, respectively. A total of 430 AHR-bound regions were common between the two time points, corresponding to 403 unique genes. Comparison with previous AHR ChIP-chip studies in mouse hepatoma cells revealed that only 62 of the putative target genes overlapped with the 2 h AHR-bound regions in vivo. Transcription factor binding site analysis revealed an over-representation of aryl hydrocarbon response elements (AHREs) in AHR-bound regions, with 53% (2 h) and 68% (24 h) of them containing at least one AHRE. In addition to AHREs, E2f-Myc activator motifs previously implicated in AHR function, as well as a number of other motifs, including Sp1, nuclear receptor subfamily 2 factor, and early growth response factor motifs, were also identified. Expression microarray studies identified 133 unique genes differentially regulated after 4 h treatment with TCDD. Of these, 39 were identified as AHR-bound genes at 2 h. Ingenuity Pathway Analysis on the 39 AHR-bound TCDD-responsive genes identified potential perturbation in biological processes such as lipid metabolism, drug metabolism, and endocrine system development as a result of TCDD-mediated AHR activation. Our findings identify direct AHR target genes in vivo, highlight in vitro and in vivo differences in AHR signaling and show that AHR recruitment does not necessarily result in changes in target gene expression. -- Highlights: ▶ ChIP-chip analysis of hepatic AHR binding after 2 h and 24 h of TCDD. ▶ We identified 1642 and 508 AHR-bound regions at 2 h and 24 h. ▶ 430 regions were common to both time points and highly enriched with AHREs. ▶ Only 62 putative target regions overlapped AHR-bound regions in hepatoma cells. ▶ Microarrays identified 133 TCDD-regulated genes, of which 39 were also bound by AHR.

  15. Cell-bound lipases from Burkholderia sp. ZYB002: gene sequence analysis, expression, enzymatic characterization, and 3D structural model.

    PubMed

    Shu, Zhengyu; Lin, Hong; Shi, Shaolei; Mu, Xiangduo; Liu, Yanru; Huang, Jianzhong

    2016-05-03

    The whole-cell lipase from Burkholderia cepacia has been used as a biocatalyst in organic synthesis. However, there is no report in the literature on the components or the gene sequences of the cell-bound lipase from this species. Qualitative analysis of the cell-bound lipase would help to illuminate the regulation mechanism of gene expression and further improve the yield of the cell-bound lipase by gene engineering. Three predicted cell-bound lipases, lipA, lipC21 and lipC24, from Burkholderia sp. ZYB002 were cloned and expressed in E. coli. Both LipA and LipC24 displayed lipase activity. LipC24 was a novel mesophilic enzyme and displayed a preference for medium-chain-length acyl groups (C10-C14). The 3D structural model of LipC24 revealed the open Y-type active site. LipA displayed 96 % amino acid sequence identity with the known extracellular lipase. lipA-inactivation and lipC24-inactivation decreased the total cell-bound lipase activity of Burkholderia sp. ZYB002 by 42 % and 14 %, respectively. The cell-bound lipase activity from Burkholderia sp. ZYB002 originated from a multi-enzyme mixture with LipA as the main component. LipC24 was a novel lipase and displayed enzymatic characteristics and a structural model different from those of LipA. Besides LipA and LipC24, other types of cell-bound lipases (or esterases) should exist.

  16. Two-Way Communication with a Single Quantum Particle.

    PubMed

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-09

    In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.

  17. Two-Way Communication with a Single Quantum Particle

    NASA Astrophysics Data System (ADS)

    Del Santo, Flavio; Dakić, Borivoje

    2018-02-01

    In this Letter we show that communication when restricted to a single information carrier (i.e., single particle) and finite speed of propagation is fundamentally limited for classical systems. On the other hand, quantum systems can surpass this limitation. We show that communication bounded to the exchange of a single quantum particle (in superposition of different spatial locations) can result in "two-way signaling," which is impossible in classical physics. We quantify the discrepancy between classical and quantum scenarios by the probability of winning a game played by distant players. We generalize our result to an arbitrary number of parties and we show that the probability of success is asymptotically decreasing to zero as the number of parties grows, for all classical strategies. In contrast, quantum strategy allows players to win the game with certainty.

  18. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.; Lin, Shu

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum-likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds derived on the symbol error probability as well as the probability of false synchronization indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  19. On the synchronizability and detectability of random PPM sequences

    NASA Technical Reports Server (NTRS)

    Georghiades, Costas N.

    1987-01-01

    The problem of synchronization and detection of random pulse-position-modulation (PPM) sequences is investigated under the assumption of perfect slot synchronization. Maximum likelihood PPM symbol synchronization and receiver algorithms are derived that make decisions based both on soft as well as hard data; these algorithms are seen to be easily implementable. Bounds were derived on the symbol error probability as well as the probability of false synchronization that indicate the existence of a rather severe performance floor, which can easily be the limiting factor in the overall system performance. The performance floor is inherent in the PPM format and random data and becomes more serious as the PPM alphabet size Q is increased. A way to eliminate the performance floor is suggested by inserting special PPM symbols in the random data stream.

  20. Quantum interval-valued probability: Contextuality and the Born rule

    NASA Astrophysics Data System (ADS)

    Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr

    2018-05-01

    We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.

  1. Reprint of: Ionization probabilities of Ne, Ar, Kr, and Xe by proton impact for different initial states and impact energies

    NASA Astrophysics Data System (ADS)

    Montanari, C. C.; Miraglia, J. E.

    2018-01-01

    In this contribution we present ab initio results for ionization total cross sections, probabilities at zero impact parameter, and impact parameter moments of order +1 and -1 of Ne, Ar, Kr, and Xe by proton impact in an extended energy range from 100 keV up to 10 MeV. The calculations were performed by using the continuum distorted wave eikonal initial state approximation (CDW-EIS) for energies up to 1 MeV, and using the first Born approximation for larger energies. The convergence of the CDW-EIS to the first Born above 1 MeV is clear in the present results. Our inner-shell ionization cross sections are compared with the available experimental data and with the ECPSSR results. We also include in this contribution the values of the ionization probabilities at the origin, and the impact parameter dependence. These values have been employed in multiple ionization calculations showing very good description of the experimental data. Tables of the ionization probabilities are presented, disaggregated for the different initial bound states, considering all the shells for Ne and Ar, the M-N shells of Kr and the N-O shells of Xe.

  2. Phase synchronization of bursting neurons in clustered small-world networks

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.

    2012-07-01

    We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such small-world network the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.

  3. Scattering of particles in the presence of harmonic confinement perturbed by a complex absorbing potential

    NASA Astrophysics Data System (ADS)

    Maghari, A.; Kermani, M. M.

    2018-04-01

    A system of two interacting atoms confined in a 1D harmonic trap and perturbed by an absorbing boundary potential is studied using the Lippmann-Schwinger formalism. The atom-atom interaction potential was taken to be a nonlocal separable model. The perturbing absorbing boundary potential was assumed to take the form of a Scarf II complex absorbing potential. The model is used for the study of 1D optical lattices that support the trapping of an atom pair within a unit cell. Moreover, it allows one to describe the scattering of particles in a tight, smooth trapping surface and to analyze the bound and resonance states. Analytical expressions for the wavefunctions and the transition matrix, as well as the absorption probabilities, are calculated. How the complex absorbing potential affects the bound states and resonances of particles confined in a harmonic trap is demonstrated.

  4. The origin of bounded rationality and intelligence.

    PubMed

    Lo, Andrew W

    2013-09-01

    Rational economic behavior in which individuals maximize their own self-interest is only one of many possible types of behavior that arise from natural selection. Given an initial population of individuals, each assigned a purely arbitrary behavior with respect to a binary choice problem, and assuming that offspring behave identically to their parents, only those behaviors linked to reproductive success will survive, and less successful behaviors will disappear exponentially fast. This framework yields a single evolutionary explanation for the origin of several behaviors that have been observed in organisms ranging from bacteria to humans, including risk-sensitive foraging, risk aversion, loss aversion, probability matching, randomization, and diversification. The key to understanding which types of behavior are more likely to survive is how behavior affects reproductive success in a given population's environment. From this perspective, intelligence is naturally defined as behavior that increases the likelihood of reproductive success, and bounds on rationality are determined by physiological and environmental constraints.

  5. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system which is composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and thus can predict bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, by using a stereo 3D sparse reconstruction algorithm, not only is the position of the person in the scene determined, but the problem of scale ambiguity in the video tracker is also elegantly solved. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
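    A minimal sketch of the detect-then-track handoff described above, with OpenCV's KCF tracker seeded by a detection box. The YOLO detector and the stereo reconstruction are not reproduced; detect_person is a placeholder, and the KCF factory function may live under cv2.legacy on some OpenCV builds:

```python
# Sketch of the detect-then-track handoff: a detection box seeds OpenCV's KCF
# tracker, which then refines the box frame to frame. The YOLO detector and
# the stereo reconstruction are not reproduced; detect_person is a placeholder.
# Requires opencv-contrib-python; on some builds use cv2.legacy.TrackerKCF_create.
import cv2

def detect_person(frame):
    """Placeholder for the YOLO detection step: return an (x, y, w, h) box."""
    h, w = frame.shape[:2]
    return (w // 3, h // 4, w // 3, h // 2)   # fake box for illustration

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera frame available")

tracker = cv2.TrackerKCF_create()
tracker.init(frame, detect_person(frame))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)          # KCF update on the new frame
    if ok:
        x, y, w, h = map(int, bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("person following", frame)
    if cv2.waitKey(1) & 0xFF == 27:           # Esc to quit
        break
```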

  6. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering were translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed and results used in applications published elsewhere are derived. The form of the integral equation for scattering as well as its multiple scattering expansion in this representation are derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering is treated by a bound two body problem. Simple expressions for single and double scattering contributions to total and differential cross-sections as well as for all necessary shadow corrections are obtained.

  7. Shear zones bounding the central zone of the Limpopo Mobile Belt, southern Africa

    NASA Astrophysics Data System (ADS)

    McCouri, Stephen; Vearncombe, Julian R.

    Contrary to previously suggested north-directed thrust emplacement of the central zone of the Limpopo mobile belt, we present evidence indicating west-directed emplacement. The central zone differs from the marginal zones in rock types, structural style and isotopic signature and is an allochthonous thrust sheet. It is bounded in the north by the dextral Tuli-Sabi shear zone and in the south by the sinistral Palala shear zone which are crustal-scale lateral ramps. Published gravity data suggest that the lateral ramps are linked at depth and they probably link at the surface, in a convex westward frontal ramp, in the vicinity of longitude 26°30'E in eastern Botswana. Two phases of movement, the first between 2.7 and 2.6 Ga and the second between 2.0 and 1.8 Ga. occurred on both the Tuli-Sabi and the Palala shear zones.

  8. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  9. A Brownian model for recurrent earthquakes

    USGS Publications Warehouse

    Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.

    2002-01-01

    We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate according to whether the coefficient of variation is less than, equal to, or greater than 1/√2 ≈ 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step perturbations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
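    The Brownian passage-time distribution is the inverse Gaussian, so its hazard rate and conditional rupture probabilities can be evaluated directly with standard libraries. A minimal sketch, assuming a mean recurrence time and aperiodicity (coefficient of variation) that are purely illustrative, and mapping to SciPy's parameterization as invgauss(mu=α², scale=m/α²):

```python
# Sketch of the Brownian passage-time (inverse Gaussian) recurrence model:
# hazard rate and conditional rupture probability for a source with a mean
# recurrence time m and aperiodicity (coefficient of variation) alpha. The
# parameter values are made up; the mapping to SciPy's parameterization is
# invgauss(mu=alpha**2, scale=m/alpha**2).
import numpy as np
from scipy.stats import invgauss

m, alpha = 200.0, 0.5                       # hypothetical mean recurrence (yr) and CV
bpt = invgauss(mu=alpha**2, scale=m / alpha**2)

t = np.array([1.0, 100.0, 200.0, 400.0, 800.0])   # years since the last event
hazard = bpt.pdf(t) / bpt.sf(t)             # instantaneous failure rate

# Conditional probability of rupture in the next 30 years, given survival to t:
p_next_30 = (bpt.cdf(t + 30.0) - bpt.cdf(t)) / bpt.sf(t)

print(np.round(hazard, 5))
print(np.round(p_next_30, 3))
```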

  10. Structural evidence for a copper-bound carbonate intermediate in the peroxidase and dismutase activities of superoxide dismutase.

    PubMed

    Strange, Richard W; Hough, Michael A; Antonyuk, Svetlana V; Hasnain, S Samar

    2012-01-01

    Copper-zinc superoxide dismutase (SOD) is of fundamental importance to our understanding of oxidative damage. Its primary function is catalysing the dismutation of superoxide to O(2) and H(2)O(2). SOD also reacts with H(2)O(2), leading to the formation of a strong copper-bound oxidant species that can either inactivate the enzyme or oxidise other substrates. In the presence of bicarbonate (or CO(2)) and H(2)O(2), this peroxidase activity is enhanced and produces the carbonate radical. This freely diffusible reactive oxygen species is proposed as the agent for oxidation of large substrates that are too bulky to enter the active site. Here, we provide direct structural evidence, from a 2.15 Å resolution crystal structure, of (bi)carbonate captured at the active site of reduced SOD, consistent with the view that a bound carbonate intermediate could be formed, producing a diffusible carbonate radical upon reoxidation of copper. The bound carbonate blocks direct access of substrates to Cu(I), suggesting that an adjunct to the accepted mechanism of SOD catalysed dismutation of superoxide operates, with Cu(I) oxidation by superoxide being driven via a proton-coupled electron transfer mechanism involving the bound carbonate rather than the solvent. Carbonate is captured in a different site when SOD is oxidised, being located in the active site channel adjacent to the catalytically important Arg143. This is the probable route of diffusion from the active site following reoxidation of the copper. In this position, the carbonate is poised for re-entry into the active site and binding to the reduced copper.

  11. Structural Evidence for a Copper-Bound Carbonate Intermediate in the Peroxidase and Dismutase Activities of Superoxide Dismutase

    PubMed Central

    Strange, Richard W.; Hough, Michael A.; Antonyuk, Svetlana V.; Hasnain, S. Samar

    2012-01-01

    Copper-zinc superoxide dismutase (SOD) is of fundamental importance to our understanding of oxidative damage. Its primary function is catalysing the dismutation of superoxide to O2 and H2O2. SOD also reacts with H2O2, leading to the formation of a strong copper-bound oxidant species that can either inactivate the enzyme or oxidise other substrates. In the presence of bicarbonate (or CO2) and H2O2, this peroxidase activity is enhanced and produces the carbonate radical. This freely diffusible reactive oxygen species is proposed as the agent for oxidation of large substrates that are too bulky to enter the active site. Here, we provide direct structural evidence, from a 2.15 Å resolution crystal structure, of (bi)carbonate captured at the active site of reduced SOD, consistent with the view that a bound carbonate intermediate could be formed, producing a diffusible carbonate radical upon reoxidation of copper. The bound carbonate blocks direct access of substrates to Cu(I), suggesting that an adjunct to the accepted mechanism of SOD catalysed dismutation of superoxide operates, with Cu(I) oxidation by superoxide being driven via a proton-coupled electron transfer mechanism involving the bound carbonate rather than the solvent. Carbonate is captured in a different site when SOD is oxidised, being located in the active site channel adjacent to the catalytically important Arg143. This is the probable route of diffusion from the active site following reoxidation of the copper. In this position, the carbonate is poised for re-entry into the active site and binding to the reduced copper. PMID:22984565

  12. Measures and limits of models of fixation selection.

    PubMed

    Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter

    2011-01-01

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between subject consistency respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allow a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
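    A minimal sketch of the two measures singled out above, computed for a toy saliency map and a fake set of fixations; the small-sample correction for the entropy-based measure discussed in the paper is not included:

```python
# Sketch of the two measures: area under the ROC curve (fixated pixels as
# positives against the model's map) and KL-divergence between an empirical
# fixation distribution and the model's distribution. Data are fake.
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
model_map = rng.random((48, 64))                    # model's fixation probability map
fixated = np.zeros((48, 64), dtype=int)
fixated[rng.integers(0, 48, 200), rng.integers(0, 64, 200)] = 1   # fake fixation mask

auc = roc_auc_score(fixated.ravel(), model_map.ravel())

p_model = model_map.ravel() / model_map.sum()       # model as a distribution over pixels
p_data = fixated.ravel() + 1e-9                     # empirical distribution (regularized)
p_data = p_data / p_data.sum()
kl = entropy(p_data, p_model)                       # KL(data || model), in nats

print(f"AUC = {auc:.3f}, KL = {kl:.3f} nats")
```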

  13. Geology of Pluto and Charon Overview

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey Morgan

    2015-01-01

    Pluto's surface was found to be remarkably diverse in terms of its range of landforms, terrain ages, and inferred geological processes. There is a latitudinal zonation of albedo. The conspicuous bright albedo heart-shaped feature informally named Tombaugh Regio is comprised of several terrain types. Most striking is Texas-sized Sputnik Planum, which is apparently level, has no observable craters, and is divided by polygons and ovoids bounded by shallow troughs. Small smooth hills are seen in some of the polygon-bounding troughs. These hills could either be extruded or exposed by erosion. Sputnik Planum polygon/ovoid formation hypotheses range from convection to contraction, but convection is currently favored. There is evidence of flow of plains material around obstacles. Mountains, especially those seen south of Sputnik Planum, exhibit too much relief to be made of CH4, CO, or N2, and thus are probably composed of H2O-ice basement material. The north contact of Sputnik Planum abuts a scarp, above which is heavily modified cratered terrain. Pluto's large moon Charon is generally heavily to moderately cratered. There is a mysterious structure in the arctic. Charon's surface is crossed by an extensive system of rift faults and graben. Some regions are smoother and less cratered, reminiscent of lunar maria. On such a plain are large isolated block mountains surrounded by moats. At this conference we will present highlights of the latest observations and analysis. This work was supported by NASA's New Horizons project

  14. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  15. Improved bounds on the energy-minimizing strains in martensitic polycrystals

    NASA Astrophysics Data System (ADS)

    Peigney, Michaël

    2016-07-01

    This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.

  16. Consequences of the trans-Atlantic slave trade on medicinal plant selection: plant use for cultural bound syndromes affecting children in Suriname and Western Africa.

    PubMed

    Vossen, Tessa; Towns, Alexandra; Ruysschaert, Sofie; Quiroz, Diana; van Andel, Tinde

    2014-01-01

    Folk perceptions of health and illness include cultural bound syndromes (CBS), ailments generally confined to certain cultural groups or geographic regions and often treated with medicinal plants. Our aim was to compare definitions and plant use for CBS regarding child health in the context of the largest migration in recent human history: the trans-Atlantic slave trade. We compared definitions of four CBS (walk early, evil eye, atita and fontanels) and associated plant use among three Afro-Surinamese populations and their African ancestor groups in Ghana, Bénin and Gabon. We expected plant use to be similar at the species level, and assumed the majority to be weedy or domesticated species, as these occur on both continents and were probably recognized by enslaved Africans. Data were obtained by identifying plants mentioned during interviews with local women from the six different populations. To analyse differences and similarities in plant use, we used Detrended Correspondence Analysis (DCA) and a Wald Chi-square test. Definitions of the four cultural bound syndromes were roughly the same on both continents. In total, 324 plant species were used. There was little overlap between Suriname and Africa: 15 species were used on both continents, of which seven were used for the same CBS. Correspondence at the family level was much higher. Surinamese populations used significantly more weedy species than Africans, but equal percentages of domesticated plants. Our data indicate that Afro-Surinamers have searched for plants to treat their CBS similar to those they remembered from Africa. In some cases they have found the same species, but they had to reinvent the largest part of their herbal pharmacopeia to treat their CBS, using known plant families or trying out new species. Ideas on health and illness appear to be more resilient than the use of plants to treat them.

  17. Error analysis of analytic solutions for self-excited near-symmetric rigid bodies - A numerical study

    NASA Technical Reports Server (NTRS)

    Kia, T.; Longuski, J. M.

    1984-01-01

    Analytic error bounds are presented for the solutions of approximate models for self-excited near-symmetric rigid bodies. The error bounds are developed for analytic solutions to Euler's equations of motion. The results are applied to obtain a simplified analytic solution for Eulerian rates and angles. The results of a sample application of the range and error bound expressions for the case of the Galileo spacecraft experiencing transverse torques demonstrate the use of the bounds in analyses of rigid body spin change maneuvers.
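
    As a concrete illustration of the quantity such error bounds are meant to cap, the sketch below integrates the full Euler equations for a slightly asymmetric body under a constant transverse torque and compares the transverse rates against the closed-form solution for the exactly symmetric body; the inertias, torques, and spin rate are arbitrary illustrative values, not parameters of the Galileo spacecraft.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative rigid-body data (near-symmetric: I1 close to, but not equal to, I2)
        I1, I2, I3 = 1000.0, 1020.0, 1800.0      # principal inertias, kg m^2
        M1, M2, M3 = 2.0, 0.0, 0.0               # constant body-fixed torques, N m
        w0 = np.array([0.0, 0.0, 3.0])           # initial rates, rad/s (spin about axis 3)

        def euler_rates(t, w):
            """Full Euler equations of motion with body-fixed torques."""
            w1, w2, w3 = w
            return [(M1 + (I2 - I3) * w2 * w3) / I1,
                    (M2 + (I3 - I1) * w3 * w1) / I2,
                    (M3 + (I1 - I2) * w1 * w2) / I3]

        t = np.linspace(0.0, 200.0, 2001)
        sol = solve_ivp(euler_rates, (t[0], t[-1]), w0, t_eval=t, rtol=1e-10, atol=1e-12)

        # Closed-form transverse rates for the symmetric approximation It = I1 = I2:
        # with wc = w1 + i*w2,  d(wc)/dt = -i*lam*wc + (M1 + i*M2)/It,  lam = (It - I3)/It * w3,
        # where w3 is constant in the symmetric model because M3 = 0.
        It = I1                                   # symmetric approximation of the transverse inertia
        lam = (It - I3) / It * w0[2]
        Mc = (M1 + 1j * M2) / It
        wc = (np.exp(-1j * lam * t) * (w0[0] + 1j * w0[1])
              + Mc / (1j * lam) * (1.0 - np.exp(-1j * lam * t)))

        err = np.hypot(sol.y[0] - wc.real, sol.y[1] - wc.imag)
        print("max transverse-rate error of the symmetric solution: %.3e rad/s" % err.max())

    The printed maximum error is exactly the kind of discrepancy the analytic error bounds aim to cap a priori, without running the numerical integration.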

  18. Stability of proton-bound clusters of alkyl alcohols, aldehydes and ketones in Ion Mobility Spectrometry.

    PubMed

    Jurado-Campos, Natividad; Garrido-Delgado, Rocío; Martínez-Haya, Bruno; Eiceman, Gary A; Arce, Lourdes

    2018-08-01

    Significant substances in emerging applications of ion mobility spectrometry, such as breath analysis for clinical diagnostics and headspace analysis for food purity, include low-molar-mass alcohols, ketones, aldehydes and esters, which produce mobility spectra containing protonated monomers and proton-bound dimers. Spectra for all n-alcohols, aldehydes and ketones from carbon number three to eight exhibited protonated monomers and proton-bound dimers with ion drift times of 6.5-13.3 ms at ambient pressure and from 35 °C to 80 °C in nitrogen. Only n-alcohols from 1-pentanol to 1-octanol produced proton-bound trimers which were sufficiently stable to be observed at these temperatures, with drift times of 12.8-16.3 ms. Polar functional groups were protected in compact structures in ab initio models for proton-bound dimers of alcohols, ketones and aldehydes. Only alcohols formed a V-shaped arrangement for proton-bound trimers, strengthening ion stability and lifetime. In contrast, models for proton-bound trimers of aldehydes and ketones showed association of the third neutral through weak, non-specific, long-range interactions, consistent with ion dissociation in the ion mobility drift tube before arriving at the detector. Collision cross sections derived from reduced mobility coefficients in a nitrogen gas atmosphere support the predicted ion structures and approximate degrees of hydration. Copyright © 2018 Elsevier B.V. All rights reserved.
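
    Collision cross sections are conventionally obtained from reduced mobility coefficients via the Mason-Schamp relation in the low-field limit; the sketch below shows that conversion with placeholder masses and an assumed K0 value, not data from this study.

        import numpy as np

        E_CHARGE = 1.602176634e-19     # elementary charge, C
        K_BOLTZ  = 1.380649e-23        # Boltzmann constant, J/K
        N0       = 2.6867811e25        # Loschmidt constant, m^-3 (273.15 K, 101.325 kPa)
        AMU      = 1.66053907e-27      # atomic mass unit, kg

        def mason_schamp_ccs(K0_cm2_Vs, ion_mass_amu, gas_mass_amu, T_drift_K, z=1):
            """Collision cross section (angstrom^2) from reduced mobility K0,
            assuming the low-field Mason-Schamp equation."""
            K0 = K0_cm2_Vs * 1e-4                                   # cm^2/(V s) -> m^2/(V s)
            mu = ion_mass_amu * gas_mass_amu / (ion_mass_amu + gas_mass_amu) * AMU
            omega = (3.0 / 16.0) * np.sqrt(2.0 * np.pi / (mu * K_BOLTZ * T_drift_K)) \
                    * z * E_CHARGE / (N0 * K0)                      # cross section, m^2
            return omega * 1e20                                     # m^2 -> angstrom^2

        # Hypothetical proton-bound dimer of mass 233 u drifting in N2 (28 u) at 80 degrees C
        print("CCS ~ %.0f A^2" % mason_schamp_ccs(1.45, 233.0, 28.0, 353.15))

    Because K0 is already normalised to standard temperature and pressure, the gas number density and the mobility combine into the constant product N0*K0, which is why no explicit pressure term appears in the function.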

  19. Safe, Multiphase Bounds Check Elimination in Java

    DTIC Science & Technology

    2010-01-28

    production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses...invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual...unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds
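
    Although the indexed snippet above is truncated, the core idea of proving a bounds check redundant from loop-derived inequality constraints can be sketched very simply; the helper below is a toy illustration of that style of reasoning, not the paper's multiphase system or an actual JVM optimization pass.

        def bounds_check_redundant(i_min, i_max, offset, array_len):
            """Return True when the access a[i + offset] is provably in range,
            given the invariant i_min <= i <= i_max (e.g. from the loop header)."""
            return 0 <= i_min + offset and i_max + offset < array_len

        # Example: for (int i = 0; i < a.length; i++) { ... a[i] ... a[i + 1] ... }
        n = 1024                                          # a.length, known when the proof is checked
        print(bounds_check_redundant(0, n - 1, 0, n))     # True  -> check on a[i]     can be dropped
        print(bounds_check_redundant(0, n - 1, 1, n))     # False -> check on a[i + 1] must remain

    In the multiphase setting described above, the code producer would emit such inequality facts with the mobile code, the class loader and JIT would verify them cheaply, and a guard would fall back to unoptimized code if a speculated invariant turned out not to hold.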

  20. MATTER IN THE BEAM: WEAK LENSING, SUBSTRUCTURES, AND THE TEMPERATURE OF DARK MATTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahdi, Hareth S.; Elahi, Pascal J.; Lewis, Geraint F.

    2016-08-01

    Warm dark matter (WDM) models offer an attractive alternative to the current cold dark matter (CDM) cosmological model. We present a novel method to differentiate between WDM and CDM cosmologies, namely, using weak lensing; this provides a unique probe as it is sensitive to all of the “matter in the beam,” not just dark matter haloes and the galaxies that reside in them, but also the diffuse material between haloes. We compare the weak lensing maps of CDM clusters to those in a WDM model corresponding to a thermally produced 0.5 keV dark matter particle. Our analysis clearly shows that the weak lensing magnification, convergence, and shear distributions can be used to distinguish between CDM and WDM models. WDM models increase the probability of weak magnifications, with the differences being significant to ≳5σ, while leaving no significant imprint on the shear distribution. WDM clusters analyzed in this work are more homogeneous than CDM ones, and the fractional decrease in the amount of material in haloes is proportional to the average increase in the magnification. This difference arises from matter that would be bound in compact haloes in CDM being smoothly distributed over much larger volumes at lower densities in WDM. Moreover, the signature does not solely lie in the probability distribution function but in the full spatial distribution of the convergence field.
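
    The lensing quantities compared in this study are linked by the standard weak-lensing relation mu = 1 / ((1 - kappa)^2 - |gamma|^2); the sketch below applies it to toy random fields purely to show the bookkeeping of a magnification-distribution comparison, and the placeholder maps do not stand in for the ray-traced CDM and WDM simulation outputs analyzed in the paper.

        import numpy as np

        def magnification(kappa, gamma1, gamma2):
            """Weak-lensing magnification from convergence and the two shear components."""
            return 1.0 / ((1.0 - kappa) ** 2 - (gamma1 ** 2 + gamma2 ** 2))

        # Placeholder convergence and shear fields; real inputs would be maps ray-traced
        # through the simulated clusters of each cosmology.
        rng = np.random.default_rng(0)
        shape = (512, 512)
        kappa_maps = {"model A": rng.normal(0.02, 0.05, shape),
                      "model B": rng.normal(0.02, 0.04, shape)}
        g1, g2 = rng.normal(0.0, 0.03, shape), rng.normal(0.0, 0.03, shape)

        # Compare the probability of weak magnifications between the two fields
        for label, kappa in kappa_maps.items():
            mu = magnification(kappa, g1, g2)
            print(label, "P(mu > 1.05) = %.4f" % np.mean(mu > 1.05))

    With the real convergence and shear maps, such tail probabilities of the magnification, together with the full convergence and shear distributions, are the statistics used to separate the WDM and CDM cases.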
