Sample records for probability measure based

  1. Interpretation of the results of statistical measurements [search for basic probability model]

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional that defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  2. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

Hou, Gene J.-W.; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
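
    A minimal sketch of the MPP idea described above, assuming a generic limit-state function g(u) already expressed in standard normal space (the function, starting point, and numbers are hypothetical, not the paper's examples): the MPP is the point on g(u) = 0 closest to the origin, its distance is the reliability index beta, and FORM approximates the failure probability as Phi(-beta).

    ```python
    # Sketch: first-order reliability (FORM) via a most probable point (MPP) search.
    # Failure corresponds to g(u) <= 0 in standard normal (u-)space; illustrative only.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def g(u):
        # Hypothetical linear limit state (safety margin) in two variables.
        return 3.0 - u[0] - 0.5 * u[1]

    # MPP: point on the limit-state surface g(u) = 0 closest to the origin.
    res = minimize(lambda u: np.dot(u, u),              # squared distance to origin
                   x0=np.array([1.0, 1.0]),
                   constraints=[{"type": "eq", "fun": g}])
    u_star = res.x
    beta = np.linalg.norm(u_star)                       # reliability index
    pf_form = norm.cdf(-beta)                           # FORM failure probability
    print(f"MPP = {u_star}, beta = {beta:.3f}, P_f (FORM) ~ {pf_form:.2e}")
    ```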

  3. A Web-based interface to calculate phonotactic probability for words and nonwords in English

    PubMed Central

Vitevitch, Michael S.; Luce, Paul A.

    2008-01-01

    Phonotactic probability refers to the frequency with which phonological segments and sequences of phonological segments occur in words in a given language. We describe one method of estimating phonotactic probabilities based on words in American English. These estimates of phonotactic probability have been used in a number of previous studies and are now being made available to other researchers via a Web-based interface. Instructions for using the interface, as well as details regarding how the measures were derived, are provided in the present article. The Phonotactic Probability Calculator can be accessed at http://www.people.ku.edu/~mvitevit/PhonoProbHome.html. PMID:15641436
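
    A minimal sketch of the kind of position-specific computation such a calculator performs, assuming a tiny hypothetical lexicon of phoneme transcriptions; it estimates positional segment and biphone probabilities by relative frequency. The published calculator uses a large, frequency-weighted English dictionary, which this toy example does not reproduce.

    ```python
    # Sketch: positional segment and biphone probabilities from a toy lexicon.
    from collections import Counter

    lexicon = [["k", "ae", "t"], ["k", "ah", "p"], ["b", "ae", "t"]]  # hypothetical

    seg_counts, seg_totals = Counter(), Counter()
    bi_counts, bi_totals = Counter(), Counter()
    for word in lexicon:
        for i, seg in enumerate(word):
            seg_counts[(i, seg)] += 1                  # segment seg in position i
            seg_totals[i] += 1
        for i in range(len(word) - 1):
            bi_counts[(i, word[i], word[i + 1])] += 1  # biphone starting at position i
            bi_totals[i] += 1

    def phonotactic_probability(word):
        """Average positional segment and biphone probabilities of a (non)word."""
        seg_p = [seg_counts[(i, s)] / seg_totals[i] for i, s in enumerate(word)]
        bi_p = [bi_counts[(i, word[i], word[i + 1])] / bi_totals[i]
                for i in range(len(word) - 1)]
        return sum(seg_p) / len(seg_p), sum(bi_p) / max(len(bi_p), 1)

    print(phonotactic_probability(["k", "ae", "p"]))   # score a nonword with the toy model
    ```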

  4. Meaner king uses biased bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-15

The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell's original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  5. Meaner king uses biased bases

    NASA Astrophysics Data System (ADS)

    Reimpell, Michael; Werner, Reinhard F.

    2007-06-01

    The mean king problem is a quantum mechanical retrodiction problem, in which Alice has to name the outcome of an ideal measurement made in one of several different orthonormal bases. Alice is allowed to prepare the state of the system and to do a final measurement, possibly including an entangled copy. However, Alice gains knowledge about which basis was measured only after she no longer has access to the quantum system or its copy. We give a necessary and sufficient condition on the bases, for Alice to have a strategy to solve this problem, without assuming that the bases are mutually unbiased. The condition requires the existence of an overall joint probability distribution for random variables, whose marginal pair distributions are fixed as the transition probability matrices of the given bases. In particular, in the qubit case the problem is decided by Bell’s original three variable inequality. In the standard setting of mutually unbiased bases, when they do exist, Alice can always succeed. However, for randomly chosen bases her success probability rapidly goes to zero with increasing dimension.

  6. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
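
    A minimal sketch of the bootstrap-imputation idea, assuming a previously validated model has already produced each patient's probability of having the condition (the probabilities below are simulated placeholders): disease status is imputed by Bernoulli draws within each bootstrap replicate, and the quantity of interest, here prevalence, is summarized across replicates.

    ```python
    # Sketch: bootstrap imputation of a binary condition from model-based probabilities.
    import numpy as np

    rng = np.random.default_rng(0)
    p_hat = rng.uniform(0.0, 0.3, size=5000)     # placeholder model probabilities

    n_boot = 200
    prevalences = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(p_hat), size=len(p_hat))   # resample patients
        status = rng.random(len(idx)) < p_hat[idx]           # impute disease status
        prevalences.append(status.mean())

    est = np.mean(prevalences)
    lo, hi = np.percentile(prevalences, [2.5, 97.5])
    print(f"imputed prevalence ~ {est:.3f} (95% interval {lo:.3f}-{hi:.3f})")
    ```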

  7. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

In the past, the super prompt critical wait time probability distribution was measured on CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with a very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present the point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)

  8. Gap probability - Measurements and models of a pecan orchard

    NASA Technical Reports Server (NTRS)

Strahler, Alan H.; Li, Xiaowen; Moody, Aaron; Liu, Yi

    1992-01-01

Measurements and models are compared for gap probability in a pecan orchard. Measurements are based on panoramic photographs of 50° by 135° view angle made under the canopy looking upwards at regular positions along transects between orchard trees. The gap probability model is driven by geometric parameters at two levels: crown and leaf. Crown level parameters include the shape of the crown envelope and spacing of crowns; leaf level parameters include leaf size and shape, leaf area index, and leaf angle, all as functions of canopy position.
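
    A minimal sketch of a leaf-level gap probability term of the kind such a two-level model combines with crown geometry, assuming the common Beer's-law form P_gap(theta) = exp(-G(theta) * LAI / cos(theta)); the projection function value and leaf area index below are placeholders, not the pecan-orchard parameters.

    ```python
    # Sketch: Beer's-law gap probability through a single canopy layer.
    # G = 0.5 corresponds to a spherical leaf-angle distribution; values illustrative.
    import numpy as np

    def gap_probability(view_zenith_deg, lai, G=0.5):
        theta = np.radians(view_zenith_deg)
        return np.exp(-G * lai / np.cos(theta))

    for angle in (0, 30, 60):
        print(angle, round(float(gap_probability(angle, lai=3.0)), 3))
    ```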

  9. Calibrating perceived understanding and competency in probability concepts: A diagnosis of learning difficulties based on Rasch probabilistic model

    NASA Astrophysics Data System (ADS)

    Mahmud, Zamalia; Porter, Anne; Salikin, Masniyati; Ghani, Nor Azura Md

    2015-12-01

Students' understanding of probability concepts has been investigated from various different perspectives. Competency, on the other hand, is often measured separately in the form of a test structure. This study set out to show that perceived understanding and competency can be calibrated and assessed together using Rasch measurement tools. Forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW, volunteered to participate in the study. Rasch measurement, which is based on a probabilistic model, is used to calibrate the responses from two survey instruments and investigate the interactions between them. Data were captured from the e-learning platform Moodle where students provided their responses through an online quiz. The study shows that the majority of students perceived little understanding of conditional and independent events prior to learning about them but tended to demonstrate a slightly higher competency level afterward. Based on the Rasch map, there is an indication of some increase in learning and knowledge about some probability concepts at the end of the two weeks of lessons on probability concepts.
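
    A minimal sketch of the dichotomous Rasch model that underlies this kind of calibration, assuming a person ability theta and an item difficulty b located on the same logit scale; the two numbers below are hypothetical.

    ```python
    # Sketch: dichotomous Rasch model response probability.
    import math

    def rasch_p(theta, b):
        """Probability of success for ability theta on an item of difficulty b."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    theta, b = 0.8, -0.2      # hypothetical person ability and item difficulty (logits)
    print(round(rasch_p(theta, b), 3))   # ~0.731: the person sits above the item on the map
    ```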

  10. Transitional probability-based model for HPV clearance in HIV-1-positive adolescent females.

    PubMed

    Kravchenko, Julia; Akushevich, Igor; Sudenga, Staci L; Wilson, Craig M; Levitan, Emily B; Shrestha, Sadeep

    2012-01-01

HIV-1-positive patients clear the human papillomavirus (HPV) infection less frequently than HIV-1-negative patients. Datasets for estimating HPV clearance probability often have irregular measurements of HPV status and risk factors. A new transitional probability-based model for estimation of probability of HPV clearance was developed to fully incorporate information on HIV-1-related clinical data, such as CD4 counts, HIV-1 viral load (VL), highly active antiretroviral therapy (HAART), and risk factors (measured quarterly), and HPV infection status (measured at 6-month intervals). Data from 266 HIV-1-positive and 134 at-risk HIV-1-negative adolescent females from the Reaching for Excellence in Adolescent Care and Health (REACH) cohort were used in this study. First, the associations were evaluated using the Cox proportional hazard model, and the variables that demonstrated significant effects on HPV clearance were included in transitional probability models. The new model established the efficacy of CD4 cell counts as a main clearance predictor for all type-specific HPV phylogenetic groups. The 3-month probability of HPV clearance in HIV-1-infected patients significantly increased with increasing CD4 counts for HPV16/16-like (p<0.001), HPV18/18-like (p<0.001), HPV56/56-like (p = 0.05), and low-risk HPV (p<0.001) phylogenetic groups, with the lowest probability found for HPV16/16-like infections (21.60±1.81% at CD4 level 200 cells/mm³, p<0.05; and 28.03±1.47% at CD4 level 500 cells/mm³). HIV-1 VL was a significant predictor for clearance of low-risk HPV infections (p<0.05). HAART (with protease inhibitor) was a significant predictor of the probability of HPV16 clearance (p<0.05). HPV16/16-like and HPV18/18-like groups showed heterogeneity (p<0.05) in terms of how CD4 counts, HIV VL, and HAART affected probability of clearance of each HPV infection. This new model predicts the 3-month probability of HPV infection clearance based on CD4 cell counts and other HIV-1-related clinical measurements.

  11. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  12. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on the survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies consider failure probability constraints when building LTs. It is worth noting that the links of an LT play roles of differing importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under an independent failure model and a shared-risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.

  13. Exploiting target amplitude information to improve multi-target tracking

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Blair, W. Dale

    2006-05-01

    Closely-spaced (but resolved) targets pose a challenge for measurement-to-track data association algorithms. Since the Mahalanobis distances between measurements collected on closely-spaced targets and tracks are similar, several elements of the corresponding kinematic measurement-to-track cost matrix are also similar. Lacking any other information on which to base assignments, it is not surprising that data association algorithms make mistakes. One ad hoc approach for mitigating this problem is to multiply the kinematic measurement-to-track likelihoods by amplitude likelihoods. However, this can actually be detrimental to the measurement-to-track association process. With that in mind, this paper pursues a rigorous treatment of the hypothesis probabilities for kinematic measurements and features. Three simple scenarios are used to demonstrate the impact of basing data association decisions on these hypothesis probabilities for Rayleigh, fixed-amplitude, and Rician targets. The first scenario assumes that the tracker carries two tracks but only one measurement is collected. This provides insight into more complex scenarios in which there are fewer measurements than tracks. The second scenario includes two measurements and one track. This extends naturally to the case with more measurements than tracks. Two measurements and two tracks are present in the third scenario, which provides insight into the performance of this method when the number of measurements equals the number of tracks. In all cases, basing data association decisions on the hypothesis probabilities leads to good results.

  14. [WebSurvCa: web-based estimation of death and survival probabilities in a cohort].

    PubMed

    Clèries, Ramon; Ameijide, Alberto; Buxó, Maria; Vilardell, Mireia; Martínez, José Miguel; Alarcón, Francisco; Cordero, David; Díez-Villanueva, Ana; Yasui, Yutaka; Marcos-Gragera, Rafael; Vilardell, Maria Loreto; Carulla, Marià; Galceran, Jaume; Izquierdo, Ángel; Moreno, Víctor; Borràs, Josep M

    2018-01-19

Relative survival has been used as a measure of the temporal evolution of the excess risk of death of a cohort of patients diagnosed with cancer, taking into account the mortality of a reference population. Once the excess risk of death has been estimated, three probabilities can be computed at time T: 1) the crude probability of death associated with the cause of initial diagnosis (disease under study), 2) the crude probability of death associated with other causes, and 3) the probability of absolute survival in the cohort at time T. This paper presents the WebSurvCa application (https://shiny.snpstats.net/WebSurvCa/), whereby hospital-based and population-based cancer registries and registries of other diseases can estimate such probabilities in their cohorts by selecting the mortality of the relevant region (reference population). Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  15. MEASUREMENT OF CHILDREN'S EXPOSURE TO PESTICIDES: ANALYSIS OF URINARY METABOLITE LEVELS IN A PROBABILITY-BASED SAMPLE

    EPA Science Inventory

    The Minnesota Children's Pesticide Exposure Study is a probability-based sample of 102 children 3-13 years old who were monitored for commonly used pesticides. During the summer of 1997, first-morning-void urine samples (1-3 per child) were obtained for 88% of study children a...

  16. MEASUREMENT OF MULTI-POLLUTANT AND MULTI-PATHWAY EXPOSURES IN A PROBABILITY-BASED SAMPLE OF CHILDREN: PRACTICAL STRATEGIES FOR EFFECTIVE FIELD STUDIES

    EPA Science Inventory

    The purpose of this manuscript is to describe the practical strategies developed for the implementation of the Minnesota Children's Pesticide Exposure Study (MNCPES), which is one of the first probability-based samples of multi-pathway and multi-pesticide exposures in children....

  17. A quantile-based Time at Risk: A new approach for assessing risk in financial markets

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam; Raei, Reza

    2013-11-01

In this paper, we provide a new measure for evaluation of risk in financial markets. This measure is based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model like Value at Risk (VaR). As VaR, for a given financial asset, probability level and time horizon, gives a critical value such that the likelihood that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using a probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and reported the results.
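
    A minimal sketch of the TaR idea as stated, assuming a sample of return intervals between critical events (here, days with a simulated loss beyond a threshold): TaR at probability level q is the time t such that P(return interval > t) = q, i.e., the empirical (1 - q) quantile of the return-interval distribution.

    ```python
    # Sketch: empirical Time at Risk (TaR) from observed return intervals.
    import numpy as np

    rng = np.random.default_rng(1)
    returns = rng.normal(0.0, 0.01, size=5000)      # hypothetical daily returns
    critical = np.flatnonzero(returns < -0.02)      # days with a "critical event"
    intervals = np.diff(critical)                   # return intervals (in days)

    q = 0.05                                        # given probability level
    tar = np.quantile(intervals, 1.0 - q)           # P(interval > TaR) ~ q
    print(f"TaR at the {q:.0%} level ~ {tar:.0f} days")
    ```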

  18. Quantum probabilistic logic programming

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan

    2015-05-01

We describe a quantum mechanics based logic programming language that supports Horn clauses, random variables, and covariance matrices to express and solve problems in probabilistic logic. The Horn clauses of the language wrap random variables, including infinite-valued ones, to express probability distributions and statistical correlations, a powerful feature for capturing relationships between distributions that are not independent. The expressive power of the language is based on a mechanism to implement statistical ensembles and to solve the underlying SAT instances using quantum mechanical machinery. We exploit the fact that classical random variables have quantum decompositions to build the Horn clauses. We establish the semantics of the language in a rigorous fashion by considering an existing probabilistic logic language called PRISM with classical probability measures defined on the Herbrand base and extending it to the quantum context. In the classical case H-interpretations form the sample space and probability measures defined on them lead to a consistent definition of probabilities for well-formed formulae. In the quantum counterpart, we define probability amplitudes on H-interpretations, facilitating model generation and verification via quantum mechanical superpositions and entanglements. We cast the well-formed formulae of the language as quantum mechanical observables, thus providing an elegant interpretation for their probabilities. We discuss several examples to combine statistical ensembles and predicates of first order logic to reason with situations involving uncertainty.

  19. DNA binding site characterization by means of Rényi entropy measures on nucleotide transitions.

    PubMed

    Perera, A; Vallverdu, M; Claria, F; Soria, J M; Caminal, P

    2008-06-01

In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency-based Rényi measures. Results are reported in this work comparing transition frequencies (i.e., dinucleotides) and base frequencies for Shannon and parametric Rényi entropies for a number of binding sites found in E. coli, lambda and T7 organisms. We observe that the information provided by both approaches is not redundant. Furthermore, under the presence of noise in the binding site matrix we observe overall improved robustness of nucleotide transition-based algorithms when compared with the nucleotide frequency-based method.
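
    A minimal sketch of the two quantities being compared, assuming a small hypothetical alignment of binding-site sequences: per-position Rényi entropy of base frequencies versus per-position Rényi entropy of dinucleotide (transition) frequencies, for several Rényi orders.

    ```python
    # Sketch: Rényi entropy of base and dinucleotide frequencies per alignment position.
    import numpy as np
    from collections import Counter

    def renyi(p, alpha):
        p = np.asarray([x for x in p if x > 0.0])
        if abs(alpha - 1.0) < 1e-9:                 # Shannon limit
            return float(-(p * np.log2(p)).sum())
        return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

    sites = ["ATGCA", "ATGAA", "TTGCA", "ATGCT"]    # hypothetical aligned binding sites
    n, L = len(sites), len(sites[0])

    for alpha in (0.5, 1.0, 2.0):
        base_H = [renyi([c / n for c in Counter(s[i] for s in sites).values()], alpha)
                  for i in range(L)]
        dinuc_H = [renyi([c / n for c in Counter(s[i:i + 2] for s in sites).values()], alpha)
                   for i in range(L - 1)]
        print(alpha, np.round(base_H, 2), np.round(dinuc_H, 2))
    ```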

  20. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.

  1. Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar

    NASA Astrophysics Data System (ADS)

    Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio

    2018-03-01

This work proposes a new method for hail precipitation detection and probability, based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground truth observations and atmospheric sounding data, a probability of hail index, which provides a simple estimate of the hail potential, has been trained and adapted for the Naples metropolitan study area. The probability of hail has been calculated starting from four different hail detection methods. The first two, based respectively on (1) reflectivity data and temperature measurements and (2) the vertically-integrated liquid density product, have been selected from the available literature. The other two techniques are based on combined criteria of the above mentioned methods: the first one (3) is based on linear discriminant analysis, whereas the other one (4) relies on the fuzzy-logic approach. The latter is an innovative criterion based on a fuzzyfication step performed through ramp membership functions. The performances of the four methods have been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
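
    A minimal sketch of the fuzzyfication step mentioned above, assuming ramp membership functions that map each radar-derived indicator to [0, 1] and a weighted aggregation into a probability-of-hail value; the indicator names, thresholds and weights are placeholders rather than the calibrated ones from this study.

    ```python
    # Sketch: ramp-membership fuzzification and weighted aggregation into a hail probability.
    def ramp(x, lo, hi):
        """Ramp membership: 0 below lo, 1 above hi, linear in between."""
        if x <= lo:
            return 0.0
        if x >= hi:
            return 1.0
        return (x - lo) / (hi - lo)

    def hail_probability(refl_above_freezing_dbz, vil_density_g_m3):
        mu_z = ramp(refl_above_freezing_dbz, 45.0, 60.0)   # reflectivity-based criterion
        mu_vil = ramp(vil_density_g_m3, 2.5, 4.0)          # VIL-density-based criterion
        w_z, w_vil = 0.5, 0.5                              # placeholder weights
        return w_z * mu_z + w_vil * mu_vil

    print(hail_probability(55.0, 3.2))   # a moderately hail-favourable case
    ```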

  2. Estimating parameters for probabilistic linkage of privacy-preserved datasets.

    PubMed

    Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H

    2017-07-10

    Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
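
    A minimal sketch of the expectation-maximisation step for match (m) and non-match (u) probabilities, assuming conditionally independent binary field-agreement indicators in the Fellegi-Sunter style; the Bloom-filter encoding and the threshold-estimation extension described in the abstract are not shown, and the agreement patterns below are random placeholders.

    ```python
    # Sketch: EM estimation of m/u probabilities from binary agreement vectors
    # (one row per candidate record pair, one column per compared field).
    import numpy as np

    def em_match_probs(gamma, n_iter=100, p_match=0.1):
        n_pairs, n_fields = gamma.shape
        m = np.full(n_fields, 0.9)        # P(field agrees | true match)
        u = np.full(n_fields, 0.1)        # P(field agrees | non-match)
        for _ in range(n_iter):
            # E-step: posterior probability that each pair is a true match.
            lm = (m ** gamma * (1 - m) ** (1 - gamma)).prod(axis=1)
            lu = (u ** gamma * (1 - u) ** (1 - gamma)).prod(axis=1)
            w = p_match * lm / (p_match * lm + (1 - p_match) * lu)
            # M-step: update m, u and the overall match proportion.
            m = (w[:, None] * gamma).sum(axis=0) / w.sum()
            u = ((1 - w)[:, None] * gamma).sum(axis=0) / (1 - w).sum()
            p_match = w.mean()
        return m, u, p_match

    rng = np.random.default_rng(2)
    gamma = rng.integers(0, 2, size=(1000, 4)).astype(float)   # placeholder patterns
    print(em_match_probs(gamma))
    ```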

  3. Public attitudes toward stuttering in Turkey: probability versus convenience sampling.

    PubMed

    Ozdemir, R Sertan; St Louis, Kenneth O; Topbaş, Seyhun

    2011-12-01

    A Turkish translation of the Public Opinion Survey of Human Attributes-Stuttering (POSHA-S) was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. A convenience sample of adults in Eskişehir, Turkey was compared with two replicates of a school-based, probability cluster sampling scheme. The two replicates of the probability sampling scheme yielded similar demographic samples, both of which were different from the convenience sample. Components of subscores on the POSHA-S were significantly different in more than half of the comparisons between convenience and probability samples, indicating important differences in public attitudes. If POSHA-S users intend to generalize to specific geographic areas, results of this study indicate that probability sampling is a better research strategy than convenience sampling. The reader will be able to: (1) discuss the difference between convenience sampling and probability sampling; (2) describe a school-based probability sampling scheme; and (3) describe differences in POSHA-S results from convenience sampling versus probability sampling. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. A method for modeling bias in a person's estimates of likelihoods of events

    NASA Technical Reports Server (NTRS)

    Nygren, Thomas E.; Morera, Osvaldo

    1988-01-01

    It is of practical importance in decision situations involving risk to train individuals to transform uncertainties into subjective probability estimates that are both accurate and unbiased. We have found that in decision situations involving risk, people often introduce subjective bias in their estimation of the likelihoods of events depending on whether the possible outcomes are perceived as being good or bad. Until now, however, the successful measurement of individual differences in the magnitude of such biases has not been attempted. In this paper we illustrate a modification of a procedure originally outlined by Davidson, Suppes, and Siegel (3) to allow for a quantitatively-based methodology for simultaneously estimating an individual's subjective utility and subjective probability functions. The procedure is now an interactive computer-based algorithm, DSS, that allows for the measurement of biases in probability estimation by obtaining independent measures of two subjective probability functions (S+ and S-) for winning (i.e., good outcomes) and for losing (i.e., bad outcomes) respectively for each individual, and for different experimental conditions within individuals. The algorithm and some recent empirical data are described.

  5. Computer-aided mathematical analysis of probability of intercept for ground-based communication intercept system

    NASA Astrophysics Data System (ADS)

    Park, Sang Chul

    1989-09-01

We develop a mathematical analysis model to calculate the probability of intercept (POI) for a ground-based communication intercept (COMINT) system. The POI is a measure of the effectiveness of the intercept system. We define the POI as the product of the probability of detection and the probability of coincidence. The probability of detection is a measure of the receiver's capability to detect a signal in the presence of noise. The probability of coincidence is the probability that an intercept system is available, actively listening in the proper frequency band, in the right direction and at the same time that the signal is received. We investigate the behavior of the POI with respect to the observation time, the separation distance, antenna elevations, the frequency of the signal, and the receiver bandwidths. We observe that the coincidence characteristic between the receiver scanning parameters and the signal parameters is the key factor in determining the time to obtain a given POI. This model can be used to find the optimal parameter combination to maximize the POI in a given scenario. We extend this model to multiple systems. This analysis is conducted on a personal computer to provide portability. The model is also flexible and can be easily implemented under different situations.
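
    A minimal sketch of the product structure described, assuming simple placeholder sub-models: a detection probability driven by SNR and a coincidence probability given by the fraction of time, frequency and direction overlap between the scanning receiver and the signal.

    ```python
    # Sketch: probability of intercept (POI) = P(detection) * P(coincidence).
    # Both sub-models below are illustrative placeholders, not the paper's equations.
    import math

    def prob_detection(snr_db, threshold_db=10.0, spread_db=3.0):
        # Smooth placeholder for receiver detection performance versus SNR.
        return 1.0 / (1.0 + math.exp(-(snr_db - threshold_db) / spread_db))

    def prob_coincidence(dwell_s, revisit_s, band_overlap, beam_overlap):
        # Fraction of time the receiver listens on the right band and direction.
        return min(dwell_s / revisit_s, 1.0) * band_overlap * beam_overlap

    poi = prob_detection(14.0) * prob_coincidence(0.05, 1.0, 0.5, 0.2)
    print(f"POI ~ {poi:.3f}")
    ```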

  6. Computing rates of Markov models of voltage-gated ion channels by inverting partial differential equations governing the probability density functions of the conducting and non-conducting states.

    PubMed

    Tveito, Aslak; Lines, Glenn T; Edwards, Andrew G; McCulloch, Andrew

    2016-07-01

Markov models are ubiquitously used to represent the function of single ion channels. However, solving the inverse problem to construct a Markov model of single channel dynamics from bilayer or patch-clamp recordings remains challenging, particularly for channels involving complex gating processes. Methods for solving the inverse problem are generally based on data from voltage clamp measurements. Here, we describe an alternative approach to this problem based on measurements of voltage traces. The voltage traces define probability density functions of the functional states of an ion channel. These probability density functions can also be computed by solving a deterministic system of partial differential equations. The inversion is based on tuning the rates of the Markov models used in the deterministic system of partial differential equations such that the solution mimics the properties of the probability density function gathered from (pseudo) experimental data as well as possible. The optimization is done by defining a cost function to measure the difference between the deterministic solution and the solution based on experimental data. By invoking the properties of this function, it is possible to infer whether the rates of the Markov model are identifiable by our method. We present applications to Markov models well known from the literature. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Asymptotic Equivalence of Probability Measures and Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Touchette, Hugo

    2018-03-01

    Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.

  8. Assessing agreement with relative area under the coverage probability curve.

    PubMed

    Barnhart, Huiman X

    2016-08-15

There has been substantial statistical literature in the last several decades on assessing agreement, and the coverage probability approach was selected as a preferred index for assessing and improving measurement agreement in a core laboratory setting. With this approach, a satisfactory agreement is based on a pre-specified high satisfactory coverage probability (e.g., 95%), given one pre-specified acceptable difference. In practice, we may want to have quality control on more than one pre-specified difference, or we may simply want to summarize the agreement based on differences up to a maximum acceptable difference. We propose to assess agreement via the coverage probability curve that provides a full spectrum of measurement error at various differences/disagreement. Relative area under the coverage probability curve is proposed for the summary of overall agreement, and this new summary index can be used for comparison of different intra-methods or inter-methods/labs/observers' agreement. Simulation studies and a blood pressure example are used for illustration of the methodology. Copyright © 2016 John Wiley & Sons, Ltd.
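
    A minimal sketch of the coverage probability curve and its relative area, assuming paired measurements from two methods (simulated here): CP(d) is the proportion of absolute differences no larger than d, and the relative area is the area under CP up to a maximum acceptable difference d_max, divided by d_max, so that 1 indicates perfect agreement.

    ```python
    # Sketch: coverage probability curve CP(d) and the relative area under it.
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(120, 15, size=500)          # hypothetical readings, method 1
    y = x + rng.normal(0, 4, size=500)         # hypothetical readings, method 2
    diff = np.abs(x - y)

    d_max = 10.0                               # maximum acceptable difference
    d_grid = np.linspace(0.0, d_max, 201)
    cp = np.array([(diff <= d).mean() for d in d_grid])

    relative_area = np.trapz(cp, d_grid) / d_max
    print(f"CP({d_max}) = {cp[-1]:.3f}, relative area = {relative_area:.3f}")
    ```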

  9. Probability density and exceedance rate functions of locally Gaussian turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1989-01-01

    A locally Gaussian model of turbulence velocities is postulated which consists of the superposition of a slowly varying strictly Gaussian component representing slow temporal changes in the mean wind speed and a more rapidly varying locally Gaussian turbulence component possessing a temporally fluctuating local variance. Series expansions of the probability density and exceedance rate functions of the turbulence velocity model, based on Taylor's series, are derived. Comparisons of the resulting two-term approximations with measured probability density and exceedance rate functions of atmospheric turbulence velocity records show encouraging agreement, thereby confirming the consistency of the measured records with the locally Gaussian model. Explicit formulas are derived for computing all required expansion coefficients from measured turbulence records.

  10. DNA binding sites characterization by means of Rényi entropy measures on nucleotide transitions.

    PubMed

    Perera, Alexandre; Vallverdu, Montserrat; Claria, Francesc; Soria, José Manuel; Caminal, Pere

    2006-01-01

In this work, parametric information-theory measures for the characterization of binding sites in DNA are extended with the use of transitional probabilities on the sequence. We propose the use of parametric uncertainty measures such as Rényi entropies obtained from the transition probabilities for the study of the binding sites, in addition to nucleotide frequency-based Rényi measures. Results are reported in this manuscript comparing transition frequencies (i.e., dinucleotides) and base frequencies for Shannon and parametric Rényi entropies for a number of binding sites found in E. coli, lambda and T7 organisms. We observe that, for the evaluated datasets, the information provided by both approaches is not redundant, as they evolve differently under increasing Rényi orders.

  11. An information measure for class discrimination. [in remote sensing of crop observation]

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.

  12. Simplified tools for measuring retention in care in antiretroviral treatment program in Ethiopia: cohort and current retention in care.

    PubMed

    Assefa, Yibeltal; Worku, Alemayehu; Wouters, Edwin; Koole, Olivier; Haile Mariam, Damen; Van Damme, Wim

    2012-01-01

Patient retention in care is a critical challenge for antiretroviral treatment programs. This is mainly because retention in care is related to adherence to treatment and patient survival. It is therefore imperative that health facilities and programs measure patient retention in care. However, the currently available tools for measuring retention in care, such as Kaplan-Meier analysis, have many practical limitations. The objective of this study was to develop simplified tools for measuring retention in care. Retrospective cohort data were collected from patient registers in nine health facilities in Ethiopia. Retention in care was the primary outcome for the study. Tools were developed to measure "current retention" in care during a specific period of time for a specific "ART-age group" and "cohort retention" in care among patients who were followed for the last "Y" number of years on ART. "Probability of retention" based on the tool for "cohort retention" in care was compared with "probability of retention" based on Kaplan-Meier. We found that the new tools make it possible to measure "current retention" and "cohort retention" in care. We also found that the tools were easy to use and did not require advanced statistical skills. Both "current retention" and "cohort retention" are lower among patients in the first two "ART-age groups" and "ART-age cohorts" than in subsequent "ART-age groups" and "ART-age cohorts". The "probability of retention" based on the new tools was found to be similar to the "probability of retention" based on Kaplan-Meier. The simplified tools for "current retention" and "cohort retention" will enable practitioners and program managers to measure and monitor rates of retention in care easily and appropriately. We therefore recommend that health facilities and programs start to use these tools in their efforts to improve retention in care and patient outcomes.

  13. Intensity information extraction in Geiger mode detector array based three-dimensional imaging applications

    NASA Astrophysics Data System (ADS)

    Wang, Fei

    2013-09-01

Geiger-mode detectors have single-photon sensitivity and picosecond timing resolution, which make them good candidates for low-light-level ranging applications, especially in the case of flash three dimensional imaging applications where the received laser power is extremely limited. Another advantage of Geiger-mode APDs is their capability of large output current, which can drive CMOS timing circuits directly; this means that larger-format focal plane arrays can be easily fabricated using mature CMOS technology. However, Geiger-mode detector based FPAs can only measure the range information of a scene, not the reflectivity. Reflectivity is a major characteristic which can help target classification and identification. Because of the Poisson statistics of photon detection, the detection probability is tightly connected to the number of incident photons. Employing this relation, a signal intensity estimation method based on probability inversion is proposed. Instead of measuring intensity directly, several detections are conducted; the detection probability is then obtained and the intensity is estimated from it. The relation between the estimator's accuracy, measuring range and number of detections is discussed based on statistical theory. Finally, a Monte-Carlo simulation is conducted to verify the correctness of this theory. Using 100 detections, a signal intensity of 4.6 photons per detection can be measured using this method. With a slight modification of the measuring strategy, intensity information can be obtained using current Geiger-mode detector based FPAs, which enriches the information acquired and broadens the application field of the current technology.
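
    A minimal sketch of the probability-inversion step described above, assuming Poisson photon statistics and a detector with photon detection efficiency eta (the numbers are placeholders): the per-trial firing probability is P = 1 - exp(-eta*N), so counting fires over K repeated trials and inverting gives N ~ -ln(1 - k/K) / eta.

    ```python
    # Sketch: estimating mean signal photons per trial from the empirical firing
    # probability of a Geiger-mode detector (Poisson statistics, no dark counts).
    import numpy as np

    rng = np.random.default_rng(4)
    eta, n_true, K = 0.3, 4.6, 100              # efficiency, true photons/trial, trials

    fires = rng.random(K) < 1.0 - np.exp(-eta * n_true)   # simulated detections
    p_hat = fires.mean()
    n_est = -np.log(1.0 - p_hat) / eta          # probability inversion
    print(f"empirical P = {p_hat:.2f}, estimated photons per trial = {n_est:.2f}")
    ```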

  14. Newsvendor problem under complete uncertainty: a case of innovative products.

    PubMed

    Gaspars-Wieloch, Helena

    2017-01-01

The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.

  15. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
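
    A minimal sketch of the closed-form ordinal superiority expressions quoted above, assuming the group effect beta has already been estimated from the corresponding cumulative link model: Phi(beta/2) for the probit link, exp(beta)/[1+exp(beta)] for the log-log link, and approximately exp(beta/2)/[1+exp(beta/2)] for the logit link.

    ```python
    # Sketch: ordinal superiority measures computed from a fitted group effect beta.
    from math import exp
    from statistics import NormalDist

    def ordinal_superiority(beta, link):
        if link == "probit":
            return NormalDist().cdf(beta / 2.0)
        if link == "loglog":
            return exp(beta) / (1.0 + exp(beta))
        if link == "logit":                      # approximate expression
            return exp(beta / 2.0) / (1.0 + exp(beta / 2.0))
        raise ValueError(link)

    beta = 0.8                                   # hypothetical fitted group effect
    for link in ("probit", "loglog", "logit"):
        print(link, round(ordinal_superiority(beta, link), 3))
    ```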

  16. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.

    PubMed

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-12-01

    To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.

  17. Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula

    NASA Astrophysics Data System (ADS)

    Kacker, Raghu N.

    2006-02-01

    In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
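
    A minimal sketch contrasting the two routes for the difference of two means, assuming independent normal samples with unknown and possibly unequal variances: the Welch-Satterthwaite effective degrees of freedom with a t-based expanded uncertainty versus a normal coverage factor applied to Bayesian-style standard uncertainties. The particular Bayesian uncertainty used here, s^2/n * (n-1)/(n-3) for each mean, is one common choice and an assumption of this sketch, not necessarily the paper's exact formulation.

    ```python
    # Sketch: Welch-Satterthwaite t interval vs. a normal interval from
    # Bayesian-style standard uncertainties, for the difference of two means.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    x = rng.normal(10.0, 1.0, size=8)
    y = rng.normal(9.2, 2.0, size=5)

    vx, vy = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
    diff = x.mean() - y.mean()

    # ISO-GUM route: W-S effective degrees of freedom and a t-based interval.
    nu_eff = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    half_t = stats.t.ppf(0.975, nu_eff) * np.sqrt(vx + vy)

    # Alternative route: Bayesian-style variances of each mean, normal coverage factor.
    bx = vx * (len(x) - 1) / (len(x) - 3)
    by = vy * (len(y) - 1) / (len(y) - 3)
    half_norm = stats.norm.ppf(0.975) * np.sqrt(bx + by)

    print(f"diff = {diff:.3f}, t interval +/- {half_t:.3f} (nu_eff = {nu_eff:.1f}), "
          f"normal interval +/- {half_norm:.3f}")
    ```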

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuevas, F.A.; Curilef, S., E-mail: scurilef@ucn.cl; Plastino, A.R., E-mail: arplastino@ugr.es

The spread of a wave-packet (or its deformation) is a very important topic in quantum mechanics. Understanding this phenomenon is relevant in connection with the study of diverse physical systems. In this paper we apply various 'spreading measures' to characterize the evolution of an initially localized wave-packet in a tight-binding lattice, with special emphasis on information-theoretical measures. We investigate the behavior of both the probability distribution associated with the wave packet and the concomitant probability current. Complexity measures based upon Renyi entropies appear to be particularly good descriptors of the details of the delocalization process. - Highlights: > Spread of highly localized wave-packet in the tight-binding lattice. > Entropic and information-theoretical characterization is used to understand the delocalization. > The behavior of both the probability distribution and the concomitant probability current is investigated. > Renyi entropies appear to be good descriptors of the details of the delocalization process.

  19. Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification

    PubMed Central

    Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang

    2016-01-01

Biomass gasification technology has been rapidly developed recently. However, fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Subsequently, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into Bayesian network (BN). Causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and BN was used to confirm the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, the occurrence probability of failure is needed in PSA. In view of the insufficient failure data of biomass gasification, the occurrence probability of failure, which cannot be obtained from standard reliability data sources, was determined by fuzzy methods based on expert judgment. An improved approach that considers expert weighting to aggregate fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probability of failure was obtained. Finally, safety measures were indicated based on the obtained critical nodes. The theoretical occurrence probabilities in one year of gas leakage and the accidents caused by it were reduced to 1/10.3 of the original values by these safety measures. PMID:27463975

  20. Systematic sampling for suspended sediment

    Treesearch

    Robert B. Thomas

    1991-01-01

    Abstract - Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling--obtaining samples with known probabilities--is the only method that...

  1. Optimum measurement for unambiguously discriminating two mixed states: General considerations and special cases

    NASA Astrophysics Data System (ADS)

    Herzog, Ulrike; Bergou, János A.

    2006-04-01

    Based on our previous publication [U. Herzog and J. A. Bergou, Phys. Rev. A 71, 050301(R)(2005)] we investigate the optimum measurement for the unambiguous discrimination of two mixed quantum states that occur with given prior probabilities. Unambiguous discrimination of nonorthogonal states is possible in a probabilistic way, at the expense of a nonzero probability of inconclusive results, where the measurement fails. Along with a discussion of the general problem, we give an example illustrating our method of solution. We also provide general inequalities for the minimum achievable failure probability and discuss in more detail the necessary conditions that must be fulfilled when its absolute lower bound, proportional to the fidelity of the states, can be reached.

  2. Public Attitudes toward Stuttering in Turkey: Probability versus Convenience Sampling

    ERIC Educational Resources Information Center

    Ozdemir, R. Sertan; St. Louis, Kenneth O.; Topbas, Seyhun

    2011-01-01

    Purpose: A Turkish translation of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. Method: A convenience sample of adults in Eskisehir, Turkey was compared with two replicates of a school-based,…

  3. Inconclusive quantum measurements and decisions under uncertainty

    NASA Astrophysics Data System (ADS)

    Yukalov, Vyacheslav; Sornette, Didier

    2016-04-01

    We give a mathematical definition for the notion of inconclusive quantum measurements. In physics, such measurements occur at intermediate stages of a complex measurement procedure, with the final measurement result being operationally testable. Since the mathematical structure of Quantum Decision Theory has been developed in analogy with the theory of quantum measurements, the inconclusive quantum measurements correspond, in Quantum Decision Theory, to intermediate stages of decision making in the process of taking decisions under uncertainty. The general form of the quantum probability for a composite event is the sum of a utility factor, describing a rational evaluation of the considered prospect, and of an attraction factor, characterizing irrational, subconscious attitudes of the decision maker. Despite the involved irrationality, the probability of prospects can be evaluated. This is equivalent to the possibility of calculating quantum probabilities without specifying hidden variables. We formulate a general way of evaluation, based on the use of non-informative priors. As an example, we suggest the explanation of the decoy effect. Our quantitative predictions are in very good agreement with experimental data.

  4. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    NASA Astrophysics Data System (ADS)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like natural extension in the theory of imprecise probabilities and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.

  5. Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls

    PubMed Central

    Chae, Jeongsook; Jin, Yong; Sung, Yunsick

    2018-01-01

    Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional types of data. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable of the estimated motions is treated as the final estimate. Thus, recognition accuracy can be improved when compared to the traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When motions are estimated from orientation data alone by the traditional method, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the error is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by about 12 percentage points. PMID:29324641

  6. Improved log(gf ) Values of Selected Lines in Mn I and Mn II for Abundance Determinations in FGK Dwarfs and Giants

    NASA Astrophysics Data System (ADS)

    Den Hartog, E. A.; Lawler, J. E.; Sobeck, J. S.; Sneden, C.; Cowan, J. J.

    2011-06-01

    The goal of the present work is to produce transition probabilities with very low uncertainties for a selected set of multiplets of Mn I and Mn II. Multiplets are chosen based upon their suitability for stellar abundance analysis. We report on new radiative lifetime measurements for 22 levels of Mn I from the e 8 D, z 6 P, z 6 D, z 4 F, e 8 S, and e 6 S terms and six levels of Mn II from the z 5 P and z 7 P terms using time-resolved laser-induced fluorescence on a slow atom/ion beam. New branching fractions for transitions from these levels, measured using a Fourier-transform spectrometer, are reported. When combined, these measurements yield transition probabilities for 47 transitions of Mn I and 15 transitions of Mn II. Comparisons are made to data from the literature and to Russell-Saunders (LS) theory. In keeping with the goal of producing a set of transition probabilities with the highest possible accuracy and precision, we recommend a weighted mean result incorporating our measurements on Mn I and II as well as independent measurements or calculations that we view as reliable and of a quality similar to ours. In a forthcoming paper, these Mn I/II transition probability data will be utilized to derive the Mn abundance in stars with spectra from both space-based and ground-based facilities over a 4000 Å wavelength range. With the employment of a local thermodynamic equilibrium line transfer code, the Mn I/II ionization balance will be determined for stars of different evolutionary states.

  7. A probability metric for identifying high-performing facilities: an application for pay-for-performance programs.

    PubMed

    Shwartz, Michael; Peköz, Erol A; Burgess, James F; Christiansen, Cindy L; Rosen, Amy K; Berlowitz, Dan

    2014-12-01

    Two approaches are commonly used for identifying high-performing facilities on a performance measure: one, that the facility is in a top quantile (eg, quintile or quartile); and two, that a confidence interval is below (or above) the average of the measure for all facilities. This type of yes/no designation often does not do well in distinguishing high-performing from average-performing facilities. To illustrate an alternative continuous-valued metric for profiling facilities--the probability a facility is in a top quantile--and show the implications of using this metric for profiling and pay-for-performance. We created a composite measure of quality from fiscal year 2007 data based on 28 quality indicators from 112 Veterans Health Administration nursing homes. A Bayesian hierarchical multivariate normal-binomial model was used to estimate shrunken rates of the 28 quality indicators, which were combined into a composite measure using opportunity-based weights. Rates were estimated using Markov Chain Monte Carlo methods as implemented in WinBUGS. The probability metric was calculated from the simulation replications. Our probability metric allowed better discrimination of high performers than the point or interval estimate of the composite score. In a pay-for-performance program, a smaller top quantile (eg, a quintile) resulted in more resources being allocated to the highest performers, whereas a larger top quantile (eg, being above the median) distinguished less among high performers and allocated more resources to average performers. The probability metric has potential but needs to be evaluated by stakeholders in different types of delivery systems.
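
    A minimal sketch of how such a probability metric can be computed from posterior simulation replications. The facilities, the normal posterior draws, and the quantile cutoff below are hypothetical stand-ins, not the authors' hierarchical WinBUGS model.

        import numpy as np

        # Sketch: given posterior draws of a composite quality score for each
        # facility, estimate the probability that each facility is in the top 20%.
        rng = np.random.default_rng(0)
        n_facilities, n_reps = 20, 5000
        true_means = rng.normal(0.0, 1.0, n_facilities)          # illustrative
        draws = rng.normal(loc=true_means, scale=0.5,
                           size=(n_reps, n_facilities))          # posterior draws

        cutoff_rank = int(np.ceil(0.8 * n_facilities))           # top-quintile boundary
        ranks = draws.argsort(axis=1).argsort(axis=1)            # 0 = lowest score
        prob_top = (ranks >= cutoff_rank).mean(axis=0)           # one probability per facility
        print(np.round(prob_top, 3))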

  8. A probability-based multi-cycle sorting method for 4D-MRI: A simulation study

    PubMed Central

    Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing

    2016-01-01

    Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Results: Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
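
    As an illustration of the evaluation step, a minimal sketch of one common way to compute the Dice similarity coefficient between two normalized motion histograms; the histograms below are illustrative, not patient data.

        import numpy as np

        # Dice similarity coefficient (DSC) between two discretized probability
        # density functions: 2 * overlap / (total mass of both histograms).
        def dice(p, q):
            p = np.asarray(p, dtype=float)
            q = np.asarray(q, dtype=float)
            return 2.0 * np.minimum(p, q).sum() / (p.sum() + q.sum())

        reference_pdf = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])
        sorted_pdf    = np.array([0.10, 0.20, 0.25, 0.25, 0.15, 0.05])
        print(round(dice(reference_pdf, sorted_pdf), 3))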

  9. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most of the existing studies on wind power output focus on the fluctuation of wind farms and the spatial self-complementary of wind power output time series was ignored. Therefore the existing probability models can’t reflect the features of power system incorporating wind farms. This paper analyzed the spatial self-complementary of wind power and proposed a probability model which can reflect temporal characteristics of wind power on seasonal and diurnal timescales based on sufficient measured data and improved clustering method. This model could provide important reference for power system simulation incorporating wind farms.

  10. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  11. Research on quantitative relationship between NIIRS and the probabilities of discrimination

    NASA Astrophysics Data System (ADS)

    Bai, Honggang

    2011-08-01

    A large number of electro-optical (EO) and infrared (IR) sensors are used on military platforms, including ground vehicles, low-altitude air vehicles, high-altitude air vehicles, and satellite systems. Ground vehicle and low-altitude air vehicle (rotary and fixed-wing aircraft) sensors typically use the probabilities of discrimination (detection, recognition, and identification) as design requirements and system performance indicators. High-altitude air vehicles and satellite sensors have traditionally used the National Imagery Interpretation Rating Scale (NIIRS) performance measures for guidance in design and as measures of system performance. Recently, there has been a large effort to make strategic sensor information available to tactical forces and to make target acquisition information usable by strategic systems. In this paper, the two techniques for sensor design, the probabilities of discrimination and NIIRS, are presented separately. For typical infrared remote sensor design parameters, the probability of recognition and the NIIRS scale are given as functions of the range R for two targets of different size, the Standard NATO Target and the M1 Abrams, based on the algorithm for predicting field performance and NIIRS. For the Standard NATO Target, M1 Abrams, F-15, and B-52, four targets of different size, the conversion from NIIRS to the probabilities of discrimination is derived and calculated, and the similarities and differences between NIIRS and the probabilities of discrimination are analyzed based on the results of the calculation. Comparisons with preliminary calculation results show that the conversion between NIIRS and the probabilities of discrimination is feasible, although more validation experiments are needed.

  12. Prospect evaluation as a function of numeracy and probability denominator.

    PubMed

    Millroth, Philip; Juslin, Peter

    2015-05-01

    This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Gambling, Delay, and Probability Discounting in Adults With and Without ADHD.

    PubMed

    Dai, Zhijie; Harrow, Sarah-Eve; Song, Xianwen; Rucklidge, Julia J; Grace, Randolph C

    2016-11-01

    We investigated the relationship between impulsivity, as measured by delay and probability discounting, and gambling-related cognitions and behavior in adults with and without ADHD. Adults who met Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV) diagnostic criteria for ADHD (n = 31) and controls (n = 29) were recruited from the community. All completed an interview that included an assessment of psychiatric disorders, gambling questionnaires, and simulated gambling, delay, and probability discounting tasks. The ADHD group was more likely to meet the criteria for problem gambling and was more impulsive than controls based on a composite discounting measure. ADHD symptoms were correlated with gambling-related cognitions and behavior. Probability, but not delay discounting, explained significant variance in gambling-related measures after controlling for ADHD symptoms. Results confirm an association between adult ADHD and gambling, and suggest that the facets of impulsivity related to risk proneness may be an independent risk factor for problem gambling in this population. © The Author(s) 2013.

  14. Relationship between Adolescent Risk Preferences on a Laboratory Task and Behavioral Measures of Risk-taking

    PubMed Central

    Rao, Uma; Sidhartha, Tanuj; Harker, Karen R.; Bidesi, Anup S.; Chen, Li-Ann; Ernst, Monique

    2010-01-01

    Purpose The goal of the study was to assess individual differences in risk-taking behavior among adolescents in the laboratory. A second aim was to evaluate whether the laboratory-based risk-taking behavior is associated with other behavioral and psychological measures associated with risk-taking behavior. Methods Eighty-two adolescents with no personal history of psychiatric disorder completed a computerized decision-making task, the Wheel of Fortune (WOF). By offering choices between clearly defined probabilities and real monetary outcomes, this task assesses risk preferences when participants are confronted with potential rewards and losses. The participants also completed a variety of behavioral and psychological measures associated with risk-taking behavior. Results Performance on the task varied based on the probability and anticipated outcomes. In the winning sub-task, participants selected low probability-high magnitude reward (high-risk choice) less frequently than high probability-low magnitude reward (low-risk choice). In the losing sub-task, participants selected low probability-high magnitude loss more often than high probability-low magnitude loss. On average, the selection of probabilistic rewards was optimal and similar to performance in adults. There were, however, individual differences in performance, and one-third of the adolescents made high-risk choice more frequently than low-risk choice while selecting a reward. After controlling for sociodemographic and psychological variables, high-risk choice on the winning task predicted “real-world” risk-taking behavior and substance-related problems. Conclusions These findings highlight individual differences in risk-taking behavior. Preliminary data on face validity of the WOF task suggest that it might be a valuable laboratory tool for studying behavioral and neurobiological processes associated with risk-taking behavior in adolescents. PMID:21257113

  15. Precise Determination of the Intensity of 226Ra Alpha Decay to the 186 keV Excited State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.P. LaMont; R.J. Gehrke; S.E. Glover

    There is a significant discrepancy in the reported values for the emission probability of the 186 keV gamma-ray resulting from the alpha decay of 226 Ra to the 186 keV excited state of 222 Rn. Published values fall in the range of 3.28 to 3.59 gamma-rays per 100 alpha-decays. An interesting observation is that the lower value, 3.28, is based on measuring the 186 keV gamma-ray intensity relative to the 226 Ra alpha-branch to the 186 keV level. The higher values, which are close to 3.59, are based on measuring the gamma-ray intensity from mass standards of 226 Ra that are traceable to the mass standards prepared by Hönigschmid in the early 1930s. This discrepancy was resolved in this work by carefully measuring the 226 Ra alpha-branch intensities, then applying the theoretical E2 multipolarity internal conversion coefficient of 0.692±0.007 to calculate the 186 keV gamma-ray emission probability. The measured value for the alpha branch to the 186 keV excited state was (6.16±0.03)%, which gives a 186 keV gamma-ray emission probability of (3.64±0.04)%. This value is in excellent agreement with the most recently reported 186 keV gamma-ray emission probabilities determined using 226 Ra mass standards.
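
    A quick numerical check of the quoted values, assuming the standard relation between the alpha-branch intensity, the total internal conversion coefficient, and the gamma-ray emission probability:

        # P_gamma = P_alpha_branch / (1 + alpha_total)
        p_alpha_branch = 0.0616   # measured alpha branch to the 186 keV level, (6.16 +/- 0.03)%
        alpha_total = 0.692       # theoretical E2 internal conversion coefficient
        p_gamma = p_alpha_branch / (1.0 + alpha_total)
        print(f"{p_gamma:.4f}")   # ~0.0364, i.e. (3.64 +/- 0.04)%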

  16. Computing exact bundle compliance control charts via probability generating functions.

    PubMed

    Chen, Binchao; Matis, Timothy; Benneyan, James

    2016-06-01

    Compliance with evidence-based practices, individually and in 'bundles', remains an important focus of healthcare quality improvement for many clinical conditions. The exact probability distribution of composite bundle compliance measures used to develop corresponding control charts and other statistical tests is based on a fairly large convolution whose direct calculation can be computationally prohibitive. Various series expansions and other approximation approaches have been proposed, each with computational and accuracy tradeoffs, especially in the tails. This same probability distribution also arises in other important healthcare applications, such as risk-adjusted outcomes and bed demand prediction, with the same computational difficulties. As an alternative, we use probability generating functions to rapidly obtain exact results and illustrate the improved accuracy and detection over other methods. Numerical testing across a wide range of applications demonstrates the computational efficiency and accuracy of this approach.
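
    A minimal sketch of the probability-generating-function idea for a bundle of independent elements with unequal compliance probabilities (the Poisson-binomial case). The per-element probabilities below are illustrative, not data from the paper.

        import numpy as np

        # The exact distribution of the number of compliant elements is obtained by
        # multiplying the per-element PGFs (1 - p_i) + p_i * z and reading off the
        # polynomial coefficients.
        def exact_count_distribution(probs):
            pmf = np.array([1.0])                      # coefficients of the running PGF
            for p in probs:
                pmf = np.convolve(pmf, [1.0 - p, p])   # multiply by (1 - p) + p * z
            return pmf                                 # pmf[k] = P(exactly k compliant)

        probs = [0.95, 0.90, 0.85, 0.99]               # illustrative compliance probabilities
        pmf = exact_count_distribution(probs)
        print(np.round(pmf, 5), pmf.sum())             # coefficients sum to 1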

  17. Measuring survival time: a probability-based approach useful in healthcare decision-making.

    PubMed

    2011-01-01

    In some clinical situations, the choice between treatment options takes into account their impact on patient survival time. Due to practical constraints (such as loss to follow-up), survival time is usually estimated using a probability calculation based on data obtained in clinical studies or trials. The two techniques most commonly used to estimate survival times are the Kaplan-Meier method and the actuarial method. Despite their limitations, they provide useful information when choosing between treatment options.
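
    As a concrete illustration of the first of these techniques, a minimal Kaplan-Meier sketch; the survival data (in months, with event = 1 for an observed event and 0 for censoring) are made up for illustration.

        # Minimal Kaplan-Meier estimator for right-censored data.
        data = [(3, 1), (5, 0), (7, 1), (7, 1), (9, 0), (12, 1), (15, 0)]

        def kaplan_meier(data):
            data = sorted(data)                  # sort by time
            n_at_risk = len(data)
            survival = 1.0
            curve = []
            i = 0
            while i < len(data):
                t = data[i][0]
                deaths = sum(1 for time, ev in data if time == t and ev == 1)
                at_this_time = sum(1 for time, ev in data if time == t)
                if deaths > 0:
                    survival *= (n_at_risk - deaths) / n_at_risk
                    curve.append((t, survival))  # survival drops only at event times
                n_at_risk -= at_this_time
                i += at_this_time
            return curve

        print(kaplan_meier(data))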

  18. Individual Differences in Base Rate Neglect: A Fuzzy Processing Preference Index

    PubMed Central

    Wolfe, Christopher R.; Fisher, Christopher R.

    2013-01-01

    Little is known about individual differences in integrating numeric base-rates and qualitative text in making probability judgments. Fuzzy-Trace Theory predicts a preference for fuzzy processing. We conducted six studies to develop the FPPI, a reliable and valid instrument assessing individual differences in this fuzzy processing preference. It consists of 19 probability estimation items plus 4 "M-Scale" items that distinguish simple pattern matching from "base rate respect." Cronbach's alpha was consistently above 0.90. Validity is suggested by significant correlations between FPPI scores and three other measures: "Rule Based" Process Dissociation Procedure scores; the number of conjunction fallacies in joint probability estimation; and logic index scores on syllogistic reasoning. Replicating norms collected in a university study with a web-based study produced negligible differences in FPPI scores, indicating robustness. The predicted relationships between individual differences in base rate respect and both conjunction fallacies and syllogistic reasoning were partially replicated in two web-based studies. PMID:23935255

  19. Probabilistic safety analysis of earth retaining structures during earthquakes

    NASA Astrophysics Data System (ADS)

    Grivas, D. A.; Souflis, C.

    1982-07-01

    A procedure is presented for determining the probability of failure of earth retaining structures under static or seismic conditions. Four possible modes of failure (overturning, base sliding, bearing capacity, and overall sliding) are examined and their combined effect is evaluated with the aid of combinatorial analysis. The probability of failure is shown to be a more adequate measure of safety than the customary factor of safety. As earth retaining structures may fail in four distinct modes, a system analysis can provide a single estimate of the probability of failure. A Bayesian formulation of the safety of retaining walls is found to provide an improved measure of the predicted probability of failure under seismic loading. The presented Bayesian analysis can account for the damage incurred by a retaining wall during an earthquake to provide an improved estimate of its probability of failure during future seismic events.
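
    A minimal sketch of how four mode probabilities could be combined into a single system probability of failure, assuming for simplicity that the modes are independent (the paper itself uses combinatorial analysis and need not assume independence); the mode probabilities are hypothetical.

        # Series-system view: the wall fails if any of the four modes occurs.
        p_modes = {"overturning": 0.002,
                   "base_sliding": 0.005,
                   "bearing_capacity": 0.001,
                   "overall_sliding": 0.003}

        p_survive_all = 1.0
        for p in p_modes.values():
            p_survive_all *= (1.0 - p)          # independence assumption
        p_system_failure = 1.0 - p_survive_all
        print(round(p_system_failure, 5))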

  20. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization design is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance method, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
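
    A small simulation sketch of the big stick design (BSD) together with the two selected measures, maximum absolute imbalance and correct guess probability. The guessing strategy (always predict the currently lagging arm) and the parameter values are illustrative assumptions.

        import random

        # Big stick design: assign by a fair coin unless |imbalance| has reached the
        # tolerance b, in which case the lagging arm is forced.
        def simulate_bsd(n, b, trials=2000, seed=1):
            rng = random.Random(seed)
            max_imbalances, correct_guesses, total_guesses = [], 0, 0
            for _ in range(trials):
                diff, worst = 0, 0                      # (#A - #B), worst |imbalance|
                for _ in range(n):
                    guess = 'A' if diff < 0 else 'B' if diff > 0 else rng.choice('AB')
                    if abs(diff) >= b:                  # deterministic correction
                        arm = 'A' if diff < 0 else 'B'
                    else:
                        arm = rng.choice('AB')
                    correct_guesses += (guess == arm)
                    total_guesses += 1
                    diff += 1 if arm == 'A' else -1
                    worst = max(worst, abs(diff))
                max_imbalances.append(worst)
            return sum(max_imbalances) / trials, correct_guesses / total_guesses

        print(simulate_bsd(n=50, b=3))   # (mean maximum imbalance, CG probability)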

  1. Quantum supremacy in constant-time measurement-based computation: A unified architecture for sampling and verification

    NASA Astrophysics Data System (ADS)

    Miller, Jacob; Sanders, Stephen; Miyake, Akimasa

    2017-12-01

    While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.

  2. Applications of the first digit law to measure correlations.

    PubMed

    Gramm, R; Yost, J; Su, Q; Grobe, R

    2017-04-01

    The quasiempirical Benford law predicts that the distribution of the first significant digit of random numbers obtained from mixed probability distributions is surprisingly meaningful and reveals some universal behavior. We generalize this finding to examine the joint first-digit probability of a pair of random numbers and show that correlations undetectable by the usual covariance-based measure can be identified in the statistics of the corresponding first digits. We illustrate this new measure by analyzing the correlations and anticorrelations of the positions of two interacting particles in their quantum mechanical ground state. This suggests that by using this measure, the presence or absence of correlations can be determined even if only the first digit of noisy experimental data can be measured accurately.
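
    The first-digit probabilities, and the joint probabilities expected for a pair of statistically independent quantities, can be written down directly; departures of measured joint first-digit frequencies from this product are the correlation signature described above. A minimal sketch:

        import math

        # Benford probabilities P(d) = log10(1 + 1/d) for d = 1..9, and the joint
        # probabilities P(d1, d2) = P(d1) * P(d2) expected under independence.
        benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
        joint_independent = {(d1, d2): benford[d1] * benford[d2]
                             for d1 in range(1, 10) for d2 in range(1, 10)}

        print(round(benford[1], 4))                 # ~0.3010
        print(round(joint_independent[(1, 1)], 4))  # ~0.0906 if the pair is uncorrelated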

  3. Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome

    NASA Astrophysics Data System (ADS)

    Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah

    2017-06-01

    Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Any improvement in learning will depend on the students' approaches to learning, which are related to certain factors of learning, namely assessment methods comprising quizzes, tests, an assignment and a final examination. This study has attempted an alternative approach to measuring students' ability in an undergraduate statistical course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually they had a good understanding at the end of the 14-week class. In terms of students' performance in their final examination, their ability to understand the topics varies, with different probability values given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic to be the most difficult to learn.
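
    For reference, a minimal sketch of the dichotomous Rasch model underlying such an analysis; the ability and difficulty values (in logits) are illustrative.

        import math

        # P(correct) = exp(theta - b) / (1 + exp(theta - b)) for ability theta and
        # item difficulty b, both on the logit scale.
        def rasch_probability(theta, b):
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        abilities = [-1.0, 0.0, 1.5]          # three students (illustrative)
        difficulty = 0.8                      # e.g. a probability-and-counting-rules item
        for theta in abilities:
            print(theta, round(rasch_probability(theta, difficulty), 3))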

  4. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors certainly pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings when initializing a target: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) there may be a high false alarm probability and a low detection probability of a track; (c) they cannot correctly estimate the initial state of a new confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, in order to increase the detection probability of a track, track splitting creates a large number of tracks which include the true track originating from the target. In order to decrease the false alarm probability, based on the evaluation mechanism, track pruning and track merging are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence and estimate its initial state with the least squares method. What is more, our method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that our new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.

  5. The probability of misassociation between neighboring targets

    NASA Astrophysics Data System (ADS)

    Areta, Javier A.; Bar-Shalom, Yaakov; Rothrock, Ronald

    2008-04-01

    This paper presents procedures to calculate the probability that the measurement originating from an extraneous target will be (mis)associated with a target of interest for the cases of Nearest Neighbor and Global association. It is shown that these misassociation probabilities depend, under certain assumptions, on a particular covariance-weighted norm of the difference between the targets' predicted measurements. For Nearest Neighbor association, the exact solution, obtained for the case of equal innovation covariances, is based on a noncentral chi-square distribution. An approximate solution is also presented for the case of unequal innovation covariances. For the Global case an approximation is presented for the case of "similar" innovation covariances. In the general case of unequal innovation covariances, where this approximation fails, an exact method based on the inversion of the characteristic function is presented. The theoretical results, confirmed by Monte Carlo simulations, quantify the benefit of Global versus Nearest Neighbor association. These results are applied to problems of single-sensor as well as centralized-fusion-architecture multiple-sensor tracking.

  6. In-beam Fission Study at JAEA

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  7. In-beam fission study for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, Katsuhisa

    2013-12-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si + 238U and 34S + 238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections for seaborgium and hassium isotopes.

  8. Kolmogorov complexity, statistical regularization of inverse problems, and Birkhoff's formalization of beauty

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha

    1998-09-01

    Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative esthetic measure introduced by G. Birkhoff as order/complexity.

  9. Wolf Attack Probability: A Theoretical Security Measure in Biometric Authentication Systems

    NASA Astrophysics Data System (ADS)

    Une, Masashi; Otsuka, Akira; Imai, Hideki

    This paper will propose a wolf attack probability (WAP) as a new measure for evaluating security of biometric authentication systems. The wolf attack is an attempt to impersonate a victim by feeding “wolves” into the system to be attacked. The “wolf” means an input value which can be falsely accepted as a match with multiple templates. WAP is defined as a maximum success probability of the wolf attack with one wolf sample. In this paper, we give a rigorous definition of the new security measure which gives strength estimation of an individual biometric authentication system against impersonation attacks. We show that if one reestimates using our WAP measure, a typical fingerprint algorithm turns out to be much weaker than theoretically estimated by Ratha et al. Moreover, we apply the wolf attack to a finger-vein-pattern based algorithm. Surprisingly, we show that there exists an extremely strong wolf which falsely matches all templates for any threshold value.

  10. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    NASA Astrophysics Data System (ADS)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule, which starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It is a realization of Bayes' rule, giving an approximation to the probability of the snow/soil parameters conditional on the measured microwave brightness temperature (TB) signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test is first carried out to study how accurate the snow emission models and how explicit the snow priors need to be to keep the SWE error within a certain amount. Synthetic TB simulated from the measured snow properties plus a 2-K observation error is used for this purpose. The aim is to provide guidance on the MCMC application under different circumstances. Later, the method is applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using the measured TB from ground-based radiometers at different bands. Based on the previous work, the error in these practical cases is studied, and the error sources are separated and quantified.
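
    A highly simplified sketch of the random-walk Metropolis-Hastings idea described above, with a toy one-parameter "emission model", a Gaussian prior, and a single synthetic TB observation standing in for the real snow emission model, priors, and multi-band measurements.

        import random, math

        def log_posterior(x, obs, prior_mean=0.3, prior_sd=0.2, obs_sd=2.0):
            log_prior = -0.5 * ((x - prior_mean) / prior_sd) ** 2
            simulated_tb = 250.0 - 40.0 * x            # toy stand-in for an emission model
            log_like = -0.5 * ((obs - simulated_tb) / obs_sd) ** 2
            return log_prior + log_like

        random.seed(0)
        obs_tb, x, step = 238.0, 0.3, 0.05
        samples = []
        for _ in range(20000):
            proposal = x + random.gauss(0.0, step)     # random-walk proposal
            if math.log(random.random()) < log_posterior(proposal, obs_tb) - log_posterior(x, obs_tb):
                x = proposal                           # accept; otherwise keep current state
            samples.append(x)
        burned = samples[5000:]
        print(sum(burned) / len(burned))               # posterior mean estimate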

  11. The creation and evaluation of a model predicting the probability of conception in seasonal-calving, pasture-based dairy cows.

    PubMed

    Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John; Shalloo, Laurence; Butler, Stephen T

    2017-07-01

    Reproductive performance in pasture-based production systems has a fundamentally important effect on economic efficiency. The individual factors affecting the probability of submission and conception are multifaceted and have been extensively researched. The present study analyzed some of these factors in relation to service-level probability of conception in seasonal-calving pasture-based dairy cows to develop a predictive model of conception. Data relating to 2,966 services from 737 cows on 2 research farms were used for model development and data from 9 commercial dairy farms were used for model testing, comprising 4,212 services from 1,471 cows. The data spanned a 15-yr period and originated from seasonal-calving pasture-based dairy herds in Ireland. The calving season for the study herds extended from January to June, with peak calving in February and March. A base mixed-effects logistic regression model was created using a stepwise model-building strategy and incorporated parity, days in milk, interservice interval, calving difficulty, and predicted transmitting abilities for calving interval and milk production traits. To attempt to further improve the predictive capability of the model, the addition of effects that were not statistically significant was considered, resulting in a final model composed of the base model with the inclusion of BCS at service. The models' predictions were evaluated using discrimination to measure their ability to correctly classify positive and negative cases. Precision, recall, F-score, and area under the receiver operating characteristic curve (AUC) were calculated. Calibration tests measured the accuracy of the predicted probabilities. These included tests of overall goodness-of-fit, bias, and calibration error. Both models performed better than using the population average probability of conception. Neither of the models showed high levels of discrimination (base model AUC 0.61, final model AUC 0.62), possibly because of the narrow central range of conception rates in the study herds. The final model was found to reliably predict the probability of conception without bias when evaluated against the full external data set, with a mean absolute calibration error of 2.4%. The chosen model could be used to support a farmer's decision-making and in stochastic simulation of fertility in seasonal-calving pasture-based dairy cows. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.

    PubMed

    Allen, Jeff; Ghattas, Andrew

    2016-06-01

    Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
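
    A minimal sketch of the Bayes'-theorem calculation that such an estimator approximates; the prior copying rate and the two likelihood values are purely illustrative and not the authors' estimates.

        # posterior P(copying | statistic) via Bayes' theorem
        prior_copying = 0.01            # assumed population probability of copying
        like_given_copying = 0.40       # density of the observed statistic if copying
        like_given_honest = 0.001       # density of the observed statistic if not copying

        numerator = prior_copying * like_given_copying
        denominator = numerator + (1.0 - prior_copying) * like_given_honest
        posterior = numerator / denominator
        print(round(posterior, 3))      # ~0.80 for these illustrative inputs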

  13. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis.

    PubMed

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to the conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method.

  14. Shape Classification Using Wasserstein Distance for Brain Morphometry Analysis

    PubMed Central

    Su, Zhengyu; Zeng, Wei; Wang, Yalin; Lu, Zhong-Lin; Gu, Xianfeng

    2015-01-01

    Brain morphometry study plays a fundamental role in medical imaging analysis and diagnosis. This work proposes a novel framework for brain cortical surface classification using Wasserstein distance, based on uniformization theory and Riemannian optimal mass transport theory. By the Poincare uniformization theorem, all shapes can be conformally deformed to one of the three canonical spaces: the unit sphere, the Euclidean plane or the hyperbolic plane. The uniformization map will distort the surface area elements. The area-distortion factor gives a probability measure on the canonical uniformization space. All the probability measures on a Riemannian manifold form the Wasserstein space. Given any 2 probability measures, there is a unique optimal mass transport map between them; the transportation cost defines the Wasserstein distance between them. Wasserstein distance gives a Riemannian metric for the Wasserstein space. It intrinsically measures the dissimilarities between shapes and thus has the potential for shape classification. To the best of our knowledge, this is the first work to introduce the optimal mass transport map to general Riemannian manifolds. The method is based on the geodesic power Voronoi diagram. Compared to the conventional methods, our approach depends solely on Riemannian metrics and is invariant under rigid motions and scalings, thus it intrinsically measures shape distance. Experimental results on classifying brain cortical surfaces with different intelligence quotients demonstrated the efficiency and efficacy of our method. PMID:26221691
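
    The Wasserstein distance itself is easiest to see in one dimension. The sketch below computes the Wasserstein-1 distance between two equal-size empirical samples by matching sorted values; this is only the simplest special case, not the surface-based geodesic power Voronoi computation described above, and the samples are illustrative.

        import numpy as np

        def wasserstein_1d(samples_p, samples_q):
            # Optimal transport on the line matches sorted values one-to-one.
            p = np.sort(np.asarray(samples_p, dtype=float))
            q = np.sort(np.asarray(samples_q, dtype=float))
            assert p.size == q.size, "equal-weight case needs equal sample counts"
            return np.mean(np.abs(p - q))

        rng = np.random.default_rng(0)
        a = rng.normal(0.0, 1.0, 1000)
        b = rng.normal(0.5, 1.2, 1000)
        print(round(wasserstein_1d(a, b), 3))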

  15. Confidence Intervals for the Probability of Superiority Effect Size Measure and the Area under a Receiver Operating Characteristic Curve

    ERIC Educational Resources Information Center

    Ruscio, John; Mullen, Tara

    2012-01-01

    It is good scientific practice to report an appropriate estimate of effect size and a confidence interval (CI) to indicate the precision with which a population effect was estimated. For comparisons of 2 independent groups, a probability-based effect size estimator (A) that is equal to the area under a receiver operating characteristic curve…
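
    For reference, a short sketch of the probability-of-superiority estimator A for two independent groups, which equals the area under the empirical ROC curve; the scores below are illustrative.

        # A = proportion of cross-group pairs in which a group-1 score exceeds a
        # group-2 score, counting ties as one half.
        def prob_superiority(group1, group2):
            wins = ties = 0
            for x in group1:
                for y in group2:
                    if x > y:
                        wins += 1
                    elif x == y:
                        ties += 1
            return (wins + 0.5 * ties) / (len(group1) * len(group2))

        treated = [12, 15, 14, 18, 20]
        control = [11, 13, 14, 12, 16]
        print(prob_superiority(treated, control))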

  16. Bayesian inference for heterogeneous caprock permeability based on above zone pressure monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Namhata, Argha; Small, Mitchell J.; Dilmore, Rober

    The presence of faults/fractures or highly permeable zones in the primary sealing caprock of a CO2 storage reservoir can result in leakage of CO2. Monitoring of leakage requires the capability to detect and resolve the onset, location, and volume of leakage in a systematic and timely manner. Pressure-based monitoring possesses such capabilities. This study demonstrates a basis for monitoring network design based on the characterization of CO2 leakage scenarios through an assessment of the integrity and permeability of the caprock inferred from above zone pressure measurements. Four representative heterogeneous fractured seal types are characterized to demonstrate seal permeability ranging from highly permeable to impermeable. Based on Bayesian classification theory, the probability of each fractured caprock scenario given above zone pressure measurements with measurement error is inferred. The sensitivity to injection rate and caprock thickness is also evaluated and the probability of proper classification is calculated. The time required to distinguish between above zone pressure outcomes and the associated leakage scenarios is also computed.

  17. Author Credit for Transdisciplinary Collaboration

    PubMed Central

    Xu, Jian; Ding, Ying; Malic, Vincent

    2015-01-01

    Transdisciplinary collaboration is the key for innovation. An evaluation mechanism is necessary to ensure that academic credit for this costly process can be allocated fairly among coauthors. This paper proposes a set of quantitative measures (e.g., t_credit and t_index) to reflect authors’ transdisciplinary contributions to publications. These measures are based on paper-topic probability distributions and author-topic probability distributions. We conduct an empirical analysis of the information retrieval domain which demonstrates that these measures effectively improve the results of harmonic_credit and h_index measures by taking into account the transdisciplinary contributions of authors. The definitions of t_credit and t_index provide a fair and effective way for research organizations to assign credit to authors of transdisciplinary publications. PMID:26375678

  18. Open star clusters in the Milky Way. Comparison of photometric and trigonometric distance scales based on Gaia TGAS data

    NASA Astrophysics Data System (ADS)

    Kovaleva, Dana A.; Piskunov, Anatoly E.; Kharchenko, Nina V.; Röser, Siegfried; Schilbach, Elena; Scholz, Ralf-Dieter; Reffert, Sabine; Yen, Steffi X.

    2017-10-01

    Context. The global survey of star clusters in the Milky Way (MWSC) is a comprehensive list of 3061 objects that provides, among other parameters, distances to clusters based on isochrone fitting. The Tycho-Gaia Astrometric Solution (TGAS) catalogue, which is a part of Gaia data release 1 (Gaia DR1), delivers accurate trigonometric parallax measurements for more than 2 million stars, including those in star clusters. Aims: We compare the open cluster photometric distance scale with the measurements given by the trigonometric parallaxes from TGAS to evaluate the consistency between these values. Methods: The average parallaxes of probable cluster members available in TGAS provide the trigonometric distance scale of open clusters, while the photometric scale is given by the distances published in the MWSC. Sixty-four clusters are suited for comparison as they have more than 16 probable members with parallax measurements in TGAS. We computed the average parallaxes of the probable members and compared these to the photometric parallaxes derived within the MWSC. Results: We find a good agreement between the trigonometric TGAS-based and the photometric MWSC-based distance scales of open clusters, which for distances less than 2.3 kpc coincide at a level of about 0.1 mas with no dependence on the distance. If at all, there is a slight systematic offset along the Galactic equator between 30° and 160° galactic longitude.

  19. On-line prognosis of fatigue crack propagation based on Gaussian weight-mixture proposal particle filter.

    PubMed

    Chen, Jian; Yuan, Shenfang; Qiu, Lei; Wang, Hui; Yang, Weibo

    2018-01-01

    Accurate on-line prognosis of fatigue crack propagation is of great importance for prognostics and health management (PHM) technologies to ensure structural integrity. It is a challenging task because of uncertainties arising from sources such as intrinsic material properties, loading, and environmental factors. The particle filter algorithm has been proven to be a powerful tool for dealing with prognostic problems that are affected by uncertainties. However, most studies have adopted the basic particle filter algorithm, which uses the transition probability density function as the importance density and may suffer from a serious particle degeneracy problem. This paper proposes an on-line fatigue crack propagation prognosis method based on a novel Gaussian weight-mixture proposal particle filter and active guided wave based on-line crack monitoring. Based on the on-line crack measurement, the mixture of the measurement probability density function and the transition probability density function is proposed as the importance density. In addition, an on-line dynamic update procedure is proposed to adjust the parameter of the state equation. The proposed method is verified on a fatigue test of attachment lugs, which are important joint components in aircraft structures. Copyright © 2017 Elsevier B.V. All rights reserved.
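
    For orientation, a very small bootstrap particle filter sketch for a scalar crack-size-like state. It uses the transition density as the importance density, i.e. the basic filter the paper improves upon rather than the Gaussian weight-mixture proposal itself, and the model, noise levels, and measurements are all illustrative.

        import random, math

        random.seed(0)

        def propagate(x):                       # toy state equation (crack growth)
            return x + 0.1 * x + random.gauss(0.0, 0.02)

        def likelihood(z, x, sd=0.05):          # toy measurement model z = x + noise
            return math.exp(-0.5 * ((z - x) / sd) ** 2)

        particles = [random.gauss(1.0, 0.05) for _ in range(500)]
        for z in [1.12, 1.25, 1.41, 1.55]:      # on-line crack measurements (toy)
            particles = [propagate(x) for x in particles]
            weights = [likelihood(z, x) for x in particles]
            total = sum(weights)
            weights = [w / total for w in weights]
            # multinomial resampling to curb degeneracy
            particles = random.choices(particles, weights=weights, k=len(particles))
            estimate = sum(particles) / len(particles)
            print(round(estimate, 3))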

  20. Probability concepts in quality risk management.

    PubMed

    Claycamp, H Gregg

    2012-01-01

    Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although "risk" generally describes a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management methodologies and their respective tools focus on managing severity but are relatively silent on the meaning and uses of "probability." Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, and from marketing to product discontinuation. The probability concept is typically applied by risk managers as a combination of frequency-based (data-based) measures of probability and a subjective "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples, from the most general scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management.

  1. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  2. Quantifying Similarity and Distance Measures for Vector-Based Datasets: Histograms, Signals, and Probability Distribution Functions

    DTIC Science & Technology

    2017-02-01

    note, a number of different measures implemented in both MATLAB and Python as functions are used to quantify similarity/distance between 2 vector-based...this technical note are widely used and may have an important role when computing the distance and similarity of large datasets and when considering high...throughput processes. In this technical note, a number of different measures implemented in both MATLAB and Python as functions are used to
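
    The specific measures used in the note are not listed in this snippet, so the following Python functions are generic examples of the kind of vector/histogram similarity and distance measures such a comparison typically includes.

    ```python
    # Illustrative vector/histogram similarity and distance measures (generic examples,
    # not necessarily the exact set implemented in the technical note).
    import numpy as np

    def euclidean(p, q):
        return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))

    def cosine_similarity(p, q):
        p, q = np.asarray(p, float), np.asarray(q, float)
        return p @ q / (np.linalg.norm(p) * np.linalg.norm(q))

    def chi_square(p, q):
        p, q = np.asarray(p, float), np.asarray(q, float)
        return 0.5 * np.sum((p - q) ** 2 / (p + q + 1e-12))

    def kl_divergence(p, q):
        p, q = np.asarray(p, float), np.asarray(q, float)
        p, q = p / p.sum(), q / q.sum()      # normalize to probability distributions
        return np.sum(np.where(p > 0, p * np.log(p / (q + 1e-12)), 0.0))

    def histogram_intersection(p, q):
        return np.sum(np.minimum(p, q))

    h1 = np.array([0.1, 0.4, 0.3, 0.2])
    h2 = np.array([0.2, 0.3, 0.3, 0.2])
    print(chi_square(h1, h2), kl_divergence(h1, h2), histogram_intersection(h1, h2))
    ```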

  3. Performance of toxicity probability interval based designs in contrast to the continual reassessment method

    PubMed Central

    Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.

    2016-01-01

    Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), the modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerated dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN and then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both mTPI and BOIN. These trends were more pronounced with an increasing number of dose levels. PMID:27435150

  4. Probable Posttraumatic Stress Disorder in the US Veteran Population According to DSM-5: Results From the National Health and Resilience in Veterans Study.

    PubMed

    Wisco, Blair E; Marx, Brian P; Miller, Mark W; Wolf, Erika J; Mota, Natalie P; Krystal, John H; Southwick, Steven M; Pietrzak, Robert H

    2016-11-01

    With the publication of DSM-5, important changes were made to the diagnostic criteria for posttraumatic stress disorder (PTSD), including the addition of 3 new symptoms. Some have argued that these changes will further increase the already high rates of comorbidity between PTSD and other psychiatric disorders. This study examined the prevalence of DSM-5 PTSD, conditional probability of PTSD given certain trauma exposures, endorsement of specific PTSD symptoms, and psychiatric comorbidities in the US veteran population. Data were analyzed from the National Health and Resilience in Veterans Study (NHRVS), a Web-based survey of a cross-sectional, nationally representative, population-based sample of 1,484 US veterans, which was fielded from September through October 2013. Probable PTSD was assessed using the PTSD Checklist-5. The weighted lifetime and past-month prevalence of probable DSM-5 PTSD was 8.1% (SE = 0.7%) and 4.7% (SE = 0.6%), respectively. Conditional probability of lifetime probable PTSD ranged from 10.1% (sudden death of close family member or friend) to 28.0% (childhood sexual abuse). The DSM-5 PTSD symptoms with the lowest prevalence among veterans with probable PTSD were trauma-related amnesia and reckless and self-destructive behavior. Probable PTSD was associated with increased odds of mood and anxiety disorders (OR = 7.6-62.8, P < .001), substance use disorders (OR = 3.9-4.5, P < .001), and suicidal behaviors (OR = 6.7-15.1, P < .001). In US veterans, the prevalence of DSM-5 probable PTSD, conditional probability of probable PTSD, and odds of psychiatric comorbidity were similar to prior findings with DSM-IV-based measures; we found no evidence that changes in DSM-5 increase psychiatric comorbidity. Results underscore the high rates of exposure to both military and nonmilitary trauma and the high public health burden of DSM-5 PTSD and comorbid conditions in veterans. © Copyright 2016 Physicians Postgraduate Press, Inc.

  5. Relations between the single-pass and double-pass transition probabilities in quantum systems with two and three states

    NASA Astrophysics Data System (ADS)

    Vitanov, Nikolay V.

    2018-05-01

    In the experimental determination of the population transfer efficiency between discrete states of a coherently driven quantum system it is often inconvenient to measure the population of the target state. Instead, after the interaction that transfers the population from the initial state to the target state, a second interaction is applied which brings the system back to the initial state, the population of which is easy to measure and normalize. If the transition probability is p in the forward process, then classical intuition suggests that the probability to return to the initial state after the backward process should be p2. However, this classical expectation is generally misleading because it neglects interference effects. This paper presents a rigorous theoretical analysis based on the SU(2) and SU(3) symmetries of the propagators describing the evolution of quantum systems with two and three states, resulting in explicit analytic formulas that link the two-step probabilities to the single-step ones. Explicit examples are given with the popular techniques of rapid adiabatic passage and stimulated Raman adiabatic passage. The present results suggest that quantum-mechanical probabilities degrade faster in repeated processes than classical probabilities. Therefore, the actual single-pass efficiencies in various experiments, calculated from double-pass probabilities, might have been greater than the reported values.
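
    A quick numerical illustration of the interference effect (a toy two-state case with a resonant pulse, not the paper's general SU(2)/SU(3) derivation, and assuming the backward pulse is identical to the forward one):

    ```python
    # Toy two-state check of the forward/backward (double-pass) probability for a
    # resonant pulse of area theta, compared with the naive classical expectation p^2.
    import numpy as np

    def pulse(theta):
        """SU(2) propagator for a resonant pulse of area theta."""
        return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                         [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

    for theta in np.linspace(0.2, np.pi, 5):
        U = pulse(theta)
        p_single = abs(U[1, 0]) ** 2          # forward transition probability p
        U2 = U @ U                            # forward pass followed by backward pass
        p_return = abs(U2[0, 0]) ** 2         # probability of ending in the initial state
        print(f"p = {p_single:.3f}  classical p^2 = {p_single**2:.3f}  quantum return = {p_return:.3f}")
    ```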

  6. Modeling highway travel time distribution with conditional probability models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provide a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. A major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating route travel time distribution as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
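
    The following sketch illustrates the core idea under assumed, made-up distributions (it is not ATRI's implementation): a route travel-time distribution built from a link-to-link conditional probability table, compared with the distribution obtained by assuming independent links and convolving their marginals.

    ```python
    # Route travel-time distribution from two links using a link-to-link conditional
    # probability table, versus the independence (convolution) assumption.
    import numpy as np

    t = np.arange(1, 6)                       # discretized link travel times [min]
    p1 = np.array([0.1, 0.3, 0.3, 0.2, 0.1])  # marginal distribution for link 1
    # conditional P(t2 | t1): slow upstream traffic tends to imply slow downstream traffic
    cond = np.array([[0.6, 0.3, 0.1, 0.0, 0.0],
                     [0.2, 0.5, 0.2, 0.1, 0.0],
                     [0.1, 0.2, 0.4, 0.2, 0.1],
                     [0.0, 0.1, 0.2, 0.5, 0.2],
                     [0.0, 0.0, 0.1, 0.3, 0.6]])

    route = {}
    for i, ti in enumerate(t):
        for j, tj in enumerate(t):
            route[int(ti + tj)] = route.get(int(ti + tj), 0.0) + p1[i] * cond[i, j]

    p2 = p1 @ cond                            # marginal distribution for link 2
    indep = np.convolve(p1, p2)               # route distribution under independence

    print("correlated links :", {k: round(v, 3) for k, v in sorted(route.items())})
    print("independent links:", indep.round(3))
    ```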

  7. Laser Ignition Microthruster Experiments on KKS-1

    NASA Astrophysics Data System (ADS)

    Nakano, Masakatsu; Koizumi, Hiroyuki; Watanabe, Masashi; Arakawa, Yoshihiro

    A laser ignition microthruster has been developed for microsatellites. Thruster performances such as impulse and ignition probability were measured, using boron potassium nitrate (B/KNO3) solid propellant ignited by a 1 W CW laser diode. The measured impulses were 60 mNs ± 15 mNs with almost 100 % ignition probability. The effect of the mixture ratios of B/KNO3 on thruster performance was also investigated, and it was shown that mixture ratios between B/KNO3/binder = 28/70/2 and 38/60/2 exhibited both high ignition probability and high impulse. Laser ignition thrusters designed and fabricated based on these data became the first non-conventional microthrusters on the Kouku Kousen Satellite No. 1 (KKS-1) microsatellite that was launched by a H2A rocket as one of six piggyback satellites in January 2009.

  8. Quantum interval-valued probability: Contextuality and the Born rule

    NASA Astrophysics Data System (ADS)

    Tai, Yu-Tsung; Hanson, Andrew J.; Ortiz, Gerardo; Sabry, Amr

    2018-05-01

    We present a mathematical framework based on quantum interval-valued probability measures to study the effect of experimental imperfections and finite precision measurements on defining aspects of quantum mechanics such as contextuality and the Born rule. While foundational results such as the Kochen-Specker and Gleason theorems are valid in the context of infinite precision, they fail to hold in general in a world with limited resources. Here we employ an interval-valued framework to establish bounds on the validity of those theorems in realistic experimental environments. In this way, not only can we quantify the idea of finite-precision measurement within our theory, but we can also suggest a possible resolution of the Meyer-Mermin debate on the impact of finite-precision measurement on the Kochen-Specker theorem.

  9. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    PubMed Central

    Cramer, Emily

    2016-01-01

    Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
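
    As a hedged illustration of the signal-noise reliability quantity (the study fitted beta-binomial models; the moment-based estimate below is a simplification with made-up unit counts):

    ```python
    # Signal-noise reliability = between-unit variance / (between-unit + within-unit variance),
    # estimated by simple moments from simulated unit pressure-ulcer data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_units, n_patients = 500, 80             # hypothetical units and patients surveyed per unit
    true_rates = rng.beta(2, 60, n_units)     # between-unit variation in true ulcer rates
    obs_rates = rng.binomial(n_patients, true_rates) / n_patients

    # average within-unit sampling noise of an observed rate (unbiased moment estimate)
    within_hat = np.mean(obs_rates * (1 - obs_rates) / (n_patients - 1))
    between_hat = np.var(obs_rates, ddof=1) - within_hat      # signal
    reliability = between_hat / (between_hat + within_hat)
    print(f"signal-noise reliability ≈ {reliability:.2f}")
    ```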

  10. Threatened species and the potential loss of phylogenetic diversity: conservation scenarios based on estimated extinction probabilities and phylogenetic risk analysis.

    PubMed

    Faith, Daniel P

    2008-12-01

    New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species. Probabilistic PD provides a framework for single-species assessment that is well-integrated with a broader measurement of impacts on PD owing to climate change and other factors.
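
    A minimal sketch of the expected-PD calculation described here, on a made-up three-species tree: each branch contributes its length weighted by the probability that at least one descendant species persists, so changing one species' extinction probability updates the whole measure.

    ```python
    # Expected phylogenetic diversity (PD): each branch counts with the probability that
    # at least one descendant species survives. Tree and probabilities are hypothetical.
    def expected_pd(branches, p_extinct):
        """branches: list of (length, set_of_descendant_species)."""
        total = 0.0
        for length, species in branches:
            p_all_lost = 1.0
            for s in species:
                p_all_lost *= p_extinct[s]
            total += length * (1.0 - p_all_lost)
        return total

    # A tiny tree ((A,B),C): terminal branches plus the internal branch above A and B.
    branches = [(1.0, {"A"}), (1.0, {"B"}), (2.0, {"C"}), (1.5, {"A", "B"})]
    p_ext = {"A": 0.8, "B": 0.2, "C": 0.5}
    print(expected_pd(branches, p_ext))        # baseline expected PD

    p_ext["A"] = 0.3                           # conservation effort lowers A's extinction risk
    print(expected_pd(branches, p_ext))        # updated expected PD
    ```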

  11. A new method for estimating the probable maximum hail loss of a building portfolio based on hailfall intensity determined by radar measurements

    NASA Astrophysics Data System (ADS)

    Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.

    2003-04-01

    Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events, while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damage are different from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated by moving around footprints derived from historical damage. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
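
    The step from a scenario loss set to the PML can be sketched as follows (losses and annual frequencies are invented; the actual study derives frequencies from radar and agricultural loss data):

    ```python
    # Build an exceedance-frequency curve from a scenario loss set and read off the
    # 1-in-250-year loss (PML). All numbers are illustrative.
    import numpy as np

    losses = np.array([5, 12, 20, 35, 60, 90, 140, 220])                  # scenario losses [MCHF]
    freq = np.array([0.5, 0.2, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002])      # events per year

    order = np.argsort(losses)[::-1]              # sort scenarios by loss, largest first
    exceed_freq = np.cumsum(freq[order])          # annual frequency of exceeding each loss
    loss_sorted = losses[order]

    target = 1.0 / 250.0                          # PML return period
    pml = np.interp(target, exceed_freq, loss_sorted)
    print(f"1-in-250-year loss (PML) ≈ {pml:.0f} MCHF")
    ```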

  12. Combining Probability Distributions of Wind Waves and Sea Level Variations to Assess Return Periods of Coastal Floods

    NASA Astrophysics Data System (ADS)

    Leijala, U.; Bjorkqvist, J. V.; Pellikka, H.; Johansson, M. M.; Kahma, K. K.

    2017-12-01

    Predicting the behaviour of the joint effect of sea level and wind waves is of great significance due to the major impact of flooding events in densely populated coastal regions. As mean sea level rises, the effect of sea level variations accompanied by waves will be even more harmful in the future. The main challenge when evaluating the joint effect of waves and sea level variations is that long time series of both variables rarely exist. Wave statistics are also highly location-dependent, thus requiring wave buoy measurements and/or high-resolution wave modelling. As an initial approximation of the joint effect, the variables may be treated as independent random variables to obtain the probability distribution of their sum. We present results of a case study based on three probability distributions: 1) wave run-up constructed from individual wave buoy measurements, 2) short-term sea level variability based on tide gauge data, and 3) mean sea level projections based on up-to-date regional scenarios. The wave measurements were conducted during 2012-2014 on the coast of the city of Helsinki, located in the Gulf of Finland in the Baltic Sea. The short-term sea level distribution contains the last 30 years (1986-2015) of hourly data from the Helsinki tide gauge, and the mean sea level projections are scenarios adjusted for the Gulf of Finland. Additionally, we present a sensitivity test based on six different theoretical wave height distributions representing different wave behaviour in relation to sea level variations. As these wave distributions are merged with one common sea level distribution, we can study how the different shapes of the wave height distribution affect the distribution of the sum, and which of the components dominates under different wave conditions. As an outcome of the method, we obtain a probability distribution of the maximum elevation of the continuous water mass, which provides a flexible tool for evaluating different risk levels in the current and future climate.
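
    A minimal sketch of the independence approximation described above, with placeholder densities standing in for the fitted wave run-up and sea level distributions:

    ```python
    # First-approximation sketch: distribution of (sea level + wave run-up) as the
    # convolution of two independent, discretized probability densities.
    # The densities below are placeholders, not the Helsinki data.
    import numpy as np

    dx = 0.01                                    # grid spacing [m]
    x = np.arange(0, 3, dx)

    sea_level = np.exp(-0.5 * ((x - 0.8) / 0.25) ** 2)   # placeholder tide-gauge density
    runup = np.exp(-x / 0.4)                              # placeholder wave run-up density
    sea_level /= sea_level.sum() * dx            # normalize to unit area
    runup /= runup.sum() * dx

    total = np.convolve(sea_level, runup) * dx   # density of the sum on grid 0..6 m
    x_total = np.arange(len(total)) * dx

    cdf = np.cumsum(total) * dx
    print("99th percentile of total elevation ≈ %.2f m" % x_total[np.searchsorted(cdf, 0.99)])
    ```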

  13. The cognitive behavioural prevention of suicide in psychosis: a clinical trial.

    PubMed

    Tarrier, Nicholas; Kelly, James; Maqsood, Sehar; Snelson, Natasha; Maxwell, Janet; Law, Heather; Dunn, Graham; Gooding, Patricia

    2014-07-01

    Suicide behaviour in psychosis is a significant clinical and social problem. There is a dearth of evidence for psychological interventions designed to reduce suicide risk in this population. To evaluate a novel, manualised, cognitive behavioural treatment protocol (CBSPp) based upon an empirically validated theoretical model. A randomised controlled trial with independent and masked allocation and assessment of CBSPp with TAU (n=25, 24 sessions) compared to TAU alone (n=24), using standardised assessments. Measures of suicide probability and suicidal ideation were the primary outcomes, and measures of hopelessness, depression, psychotic symptoms, functioning, and self-esteem were the secondary outcomes, assessed at 4 and 6 months follow-up. The CBSPp group improved differentially to the TAU group on two out of three primary outcome measures of suicidal ideation and suicide probability, and on secondary outcomes of hopelessness related to suicide probability, depression, some psychotic symptoms and self-esteem. CBSPp is a feasible intervention which has the potential to reduce proxy measures of suicide in psychotic patients. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Efficient and faithful remote preparation of arbitrary three- and four-particle W-class entangled states

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Hu, You-Di; Wang, Zhe-Qiang; Ye, Liu

    2015-06-01

    We develop two efficient measurement-based schemes for remotely preparing arbitrary three- and four-particle W-class entangled states by utilizing genuine tripartite Greenberger-Horne-Zeilinger-type states as quantum channels, respectively. Through appropriate local operations and classical communication, the desired states can be faithfully retrieved at the receiver's place with a certain probability. Compared with the previously existing schemes, the success probability in the current schemes is greatly increased. Moreover, the required classical communication cost is calculated as well. Further, several attractive discussions on the properties of the presented schemes, including the success probability and reducibility, are made. Remarkably, the proposed schemes can be faithfully achieved with unity total success probability when the employed channels are reduced to maximally entangled ones.

  15. On Measuring Quantitative Interpretations of Reasonable Doubt

    ERIC Educational Resources Information Center

    Dhami, Mandeep K.

    2008-01-01

    Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…

  16. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART II: EFFECTS OF MISSING DATA AND IMPRECISION

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...

  17. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    NASA Astrophysics Data System (ADS)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  18. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
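
    A small, self-contained demonstration of the kind of risk the report characterizes (not the report's actual scaling study): fitting a normal distribution to a modest sample from a heavier-tailed population and extrapolating a far-tail quantile.

    ```python
    # Tail-extrapolation risk, illustrated: a normal fit to sparse data from a
    # heavier-tailed population badly underestimates the extreme quantile.
    import numpy as np
    from scipy import stats

    sample = stats.t.rvs(df=3, size=50, random_state=2)     # true population has heavy tails

    mu, sigma = sample.mean(), sample.std(ddof=1)
    q_extrap = stats.norm.ppf(0.999, mu, sigma)             # extrapolated 99.9th percentile
    q_true = stats.t.ppf(0.999, df=3)                       # true 99.9th percentile

    print(f"normal-fit extrapolation: {q_extrap:.2f}, true value: {q_true:.2f}")
    ```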

  19. Operationalizing System Importance Measures for Assessing System of System Resilience

    NASA Astrophysics Data System (ADS)

    Chandrahasa, Rakshit

    In recent times, there has been a shift in focus from component-level to system-level analysis and an increasing effort to understand and design resilience into systems. Several efforts have been carried out to create metrics to analyse resilience. Understanding and implementing system resilience in complex Systems of Systems will help us build safer and more resilient systems. System Importance Measures (SIMs) were formulated to analyse System of Systems resilience and help in designing a resilient SoS. Here, we operationalize these System Importance Measures for designing a resilient SoS. We first look at the existing methodology to improve the visual representation of system resilience and its usability. We demonstrate this using our first case study with a naval warfare SoS. We incorporate probability into the SIM formulation. We expand the existing SIMs to quantify the effects of disruptions and mitigation likelihoods. We built a second case study based on air transportation networks and demonstrated our expanded metrics in both case studies. SIM-based analysis of SoS resilience provides two different analyses of resilience, with and without probability. Having an outlook on how resilience changes with the probability of disruptions can aid the designer in making informed choices on design changes and help in creating a resilient SoS.

  20. The performance of different propensity score methods for estimating absolute effects of treatments on survival outcomes: A simulation study.

    PubMed

    Austin, Peter C; Schuster, Tibor

    2016-10-01

    Observational studies are increasingly being used to estimate the effect of treatments, interventions and exposures on outcomes that can occur over time. Historically, the hazard ratio, which is a relative measure of effect, has been reported. However, medical decision making is best informed when both relative and absolute measures of effect are reported. When outcomes are time-to-event in nature, the effect of treatment can also be quantified as the change in mean or median survival time due to treatment and the absolute reduction in the probability of the occurrence of an event within a specified duration of follow-up. We describe how three different propensity score methods, propensity score matching, stratification on the propensity score and inverse probability of treatment weighting using the propensity score, can be used to estimate absolute measures of treatment effect on survival outcomes. These methods are all based on estimating marginal survival functions under treatment and lack of treatment. We then conducted an extensive series of Monte Carlo simulations to compare the relative performance of these methods for estimating the absolute effects of treatment on survival outcomes. We found that stratification on the propensity score resulted in the greatest bias. Caliper matching on the propensity score and a method based on earlier work by Cole and Hernán tended to have the best performance for estimating absolute effects of treatment on survival outcomes. When the prevalence of treatment was less extreme, then inverse probability of treatment weighting-based methods tended to perform better than matching-based methods. © The Author(s) 2014.

  1. Development of a voltage-dependent current noise algorithm for conductance-based stochastic modelling of auditory nerve fibres.

    PubMed

    Badenhorst, Werner; Hanekom, Tania; Hanekom, Johan J

    2016-12-01

    This study presents the development of an alternative noise current term and a novel voltage-dependent current noise algorithm for conductance-based stochastic auditory nerve fibre (ANF) models. ANFs are known to have significant variance in threshold stimulus, which affects temporal characteristics such as latency. This variance is primarily caused by the stochastic behaviour, or microscopic fluctuations, of the node of Ranvier's voltage-dependent sodium channels, whose intensity is a function of membrane voltage. Though easy to implement and low in computational cost, existing current noise models have two deficiencies: they are independent of membrane voltage, and they are unable to inherently determine the noise intensity required to produce in vivo measured discharge probability functions. The proposed algorithm overcomes these deficiencies while maintaining its low computational cost and ease of implementation compared to other conductance-based and Markovian-based stochastic models. The algorithm is applied to a Hodgkin-Huxley-based compartmental cat ANF model and validated via comparison of the threshold probability and latency distributions to measured cat ANF data. Simulation results show the algorithm's adherence to in vivo stochastic fibre characteristics, such as an exponential relationship between the membrane noise and transmembrane voltage, a negative linear relationship between the log of the relative spread of the discharge probability and the log of the fibre diameter, and a decrease in latency with an increase in stimulus intensity.

  2. The symbiosis of photometry and radial-velocity measurements

    NASA Technical Reports Server (NTRS)

    Cochran, William D.

    1994-01-01

    The FRESIP mission is optimized to detect the inner planets of a planetary system. According to the current paradigm of planet formation, these planets will probably be small Earth-sized objects. Ground-based radial-velocity programs now have the sensitivity to detect Jovian-mass planets in orbit around bright solar-type stars. We expect the more massive planets to form in the outer regions of a proto-stellar nebula. These two types of measurements will very nicely complement each other, as they have highest detection probability for very different types of planets. The combination of FRESIP photometry and ground-based spectra will provide independent confirmation of the existence of planetary systems in orbit around other stars. Such detection of both terrestrial and Jovian planets in orbit around the same star is essential to test our understanding of planet formation.

  3. Risk Analysis of Earth-Rock Dam Failures Based on Fuzzy Event Tree Method

    PubMed Central

    Fu, Xiao; Gu, Chong-Shi; Su, Huai-Zhi; Qin, Xiang-Nan

    2018-01-01

    Earth-rock dams make up a large proportion of the dams in China, and their failures can induce great risks. In this paper, the risks associated with earth-rock dam failure are analyzed from two aspects: the probability of a dam failure and the resulting life loss. An event tree analysis method based on fuzzy set theory is proposed to calculate the dam failure probability. The life loss associated with dam failure is summarized and refined to be suitable for Chinese dams from previous studies. The proposed method and model are applied to one reservoir dam in Jiangxi province. Both engineering and non-engineering measures are proposed to reduce the risk. The risk analysis of the dam failure has essential significance for reducing dam failure probability and improving dam risk management level. PMID:29710824

  4. Evaluating detection and estimation capabilities of magnetometer-based vehicle sensors

    NASA Astrophysics Data System (ADS)

    Slater, David M.; Jacyna, Garry M.

    2013-05-01

    In an effort to secure the northern and southern United States borders, MITRE has been tasked with developing Modeling and Simulation (M&S) tools that accurately capture the mapping between algorithm-level Measures of Performance (MOP) and system-level Measures of Effectiveness (MOE) for current/future surveillance systems deployed by the Customs and Border Protection Office of Technology Innovations and Acquisitions (OTIA). This analysis is part of a larger M&S undertaking. The focus is on two MOPs for magnetometer-based Unattended Ground Sensors (UGS). UGS are placed near roads to detect passing vehicles and estimate properties of the vehicle's trajectory such as bearing and speed. The first MOP considered is the probability of detection. We derive probabilities of detection for a network of sensors over an arbitrary number of observation periods and explore how the probability of detection changes when multiple sensors are employed. The performance of UGS is also evaluated based on the level of variance in the estimation of trajectory parameters. We derive the Cramer-Rao bounds for the variances of the estimated parameters in two cases: when no a priori information is known and when the parameters are assumed to be Gaussian with known variances. Sample results show that UGS perform significantly better in the latter case.
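
    For the first MOP, a simple back-of-the-envelope version of the network detection probability (assuming independent sensors and passes, with hypothetical per-sensor probabilities) looks like this:

    ```python
    # Probability that at least one sensor detects a passing vehicle over several
    # observation periods, assuming independent sensors and passes (illustrative only).
    import numpy as np

    p_single = np.array([0.6, 0.7, 0.5])        # hypothetical per-sensor, per-pass detection probabilities
    n_periods = 2                               # vehicle observed on two passes

    p_miss_one_pass = np.prod(1.0 - p_single)   # all sensors miss on one pass
    p_detect = 1.0 - p_miss_one_pass ** n_periods
    print(f"network probability of detection: {p_detect:.4f}")
    ```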

  5. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm= d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.

  6. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
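
    As a hedged, one-dimensional sketch of the OLS variant of non-intrusive polynomial chaos (the paper's application uses the UTSim2 ultrasonic model and multiple inputs; here a cheap placeholder function stands in for the simulator):

    ```python
    # Non-intrusive polynomial chaos surrogate by ordinary least squares for a single
    # standard-normal input; mean and variance follow from the Hermite coefficients.
    import numpy as np
    from numpy.polynomial import hermite_e as He
    from math import factorial

    def expensive_model(x):                       # placeholder for the physics-based simulation
        return np.exp(0.3 * x) + 0.1 * x**2

    deg = 5
    rng = np.random.default_rng(3)
    x_train = rng.standard_normal(50)             # sampled values of the random input
    y_train = expensive_model(x_train)

    A = He.hermevander(x_train, deg)              # probabilists' Hermite basis He_0..He_deg
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

    mean_pce = coef[0]                            # orthogonality: E[He_k] = 0 for k >= 1
    var_pce = sum(coef[k] ** 2 * factorial(k) for k in range(1, deg + 1))

    x_mc = rng.standard_normal(200_000)           # Monte Carlo reference on the true model
    print(f"PCE mean {mean_pce:.4f}  var {var_pce:.4f}")
    print(f"MC  mean {expensive_model(x_mc).mean():.4f}  var {expensive_model(x_mc).var():.4f}")
    ```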

  7. Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.

    2016-12-01

    Effective modeling of complex physical systems arising in the geosciences is dependent on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems, estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler and others, is based on a novel stochastic inversion technique based on measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and in estimating bottom friction in a complicated near-shore coastal application.

  8. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    PubMed

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any single secondary structure is very small. As a consequence, any tool to predict the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for the imperfect predictions by calculating and visualizing secondary structural information from RNA sequences. It is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools to predict secondary structures and to calculate various structural features based on the energy models of secondary structures. By just giving an RNA sequence to the web server, the user can obtain different types of solutions for the secondary structures, the marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of the local bases, the energy changes caused by arbitrary base mutations, as well as measures for validation of the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Application of fuzzy fault tree analysis based on modified fuzzy AHP and fuzzy TOPSIS for fire and explosion in the process industry.

    PubMed

    Yazdi, Mohammad; Korhan, Orhan; Daneshvar, Sahand

    2018-05-09

    This study aimed at establishing fault tree analysis (FTA) using expert opinion to compute the probability of an event. To find the probability of the top event (TE), the probabilities of all the basic events (BEs) should be available when the FTA is drawn. In this case, expert judgment can be used as an alternative to failure data when such data are difficult to obtain. The fuzzy analytical hierarchy process, as a standard technique, is used to give a specific weight to each expert, and fuzzy set theory is employed for aggregating expert opinion. In this regard, the probabilities of the BEs are computed and, consequently, the probability of the TE is obtained using Boolean algebra. Additionally, to reduce the probability of the TE in terms of three parameters (safety consequences, cost and benefit), the importance measurement technique and modified TOPSIS were employed. The effectiveness of the proposed approach is demonstrated with a real-life case study.
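
    A toy version of the overall pipeline, with invented expert opinions, expert weights, and gate structure, may help fix the ideas (it is not the paper's case study):

    ```python
    # Toy fuzzy fault-tree sketch: expert opinions as triangular fuzzy numbers, weighted
    # aggregation, centroid defuzzification, and a top event from OR/AND gates.
    import numpy as np

    def aggregate(opinions, weights):
        """Weighted average of triangular fuzzy numbers (l, m, u)."""
        return tuple(np.average([o[i] for o in opinions], weights=weights) for i in range(3))

    def defuzzify(tri):
        l, m, u = tri
        return (l + m + u) / 3.0              # centroid of a triangular fuzzy number

    def or_gate(ps):                          # P(at least one input event occurs)
        return 1.0 - np.prod([1.0 - p for p in ps])

    def and_gate(ps):                         # P(all input events occur)
        return float(np.prod(ps))

    weights = [0.5, 0.3, 0.2]                 # expert weights (e.g., from fuzzy AHP)
    be1 = defuzzify(aggregate([(0.01, 0.02, 0.04), (0.02, 0.03, 0.05), (0.01, 0.03, 0.06)], weights))
    be2 = defuzzify(aggregate([(0.05, 0.08, 0.10), (0.04, 0.06, 0.09), (0.06, 0.07, 0.11)], weights))
    be3 = defuzzify(aggregate([(0.001, 0.002, 0.004)] * 3, weights))

    top = or_gate([and_gate([be1, be2]), be3])     # TE = (BE1 AND BE2) OR BE3
    print(f"estimated top-event probability: {top:.4f}")
    ```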

  10. Scoring and Classifying Examinees Using Measurement Decision Theory

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
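
    A minimal sketch of the MDT classification step, using hypothetical conditional probabilities and a single response pattern:

    ```python
    # Measurement decision theory classification: posterior probability of each mastery
    # state given an item-response pattern (all numbers are hypothetical).
    import numpy as np

    states = ["non-master", "master"]
    prior = np.array([0.5, 0.5])
    # P(correct response to item j | state): rows = states, columns = items
    p_correct = np.array([[0.3, 0.4, 0.2, 0.5],
                          [0.8, 0.9, 0.7, 0.85]])

    responses = np.array([1, 1, 0, 1])            # observed pattern (1 = correct)

    likelihood = np.prod(np.where(responses == 1, p_correct, 1 - p_correct), axis=1)
    posterior = prior * likelihood
    posterior /= posterior.sum()

    print(dict(zip(states, posterior.round(3))))
    print("classification:", states[int(np.argmax(posterior))])
    ```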

  11. Direct Assistance: USAID Has Taken Positive Action to Assess Afghan Ministries’ Ability to Manage Donor Funds, but Concerns Remain

    DTIC Science & Technology

    2014-01-01

    with the adverse event’s potential impact, ranging from negligible to catastrophic. Appendix V includes a matrix of how USAID/Afghanistan assigns risk...International Development (USAID) assigns risk ratings based on the potential impact and probability of occurrence of an identified risk. The impact measures...frequent. Combining the impact and probability factors categorizes risks into critical, high, medium and low clusters. Although subjective, it is

  12. Financial derivative pricing under probability operator via Esscher transfomation

    NASA Astrophysics Data System (ADS)

    Achi, Godswill U.

    2014-10-01

    The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, the Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution. The approach was later studied in [6], where the standard Black-Scholes contingent pricing and distortion-based contingent pricing were compared. So, in this paper, we aim at using distortion operators based on the Cauchy distribution under a simple transformation to price contingent claims. We also show that we can recover the Black-Scholes formula using this distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, a price mechanism based on the characteristic Esscher measure can generate approximately arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u) evaluated at the characteristic exponent φ_X(u) of X_t we recover the Black-Scholes formula for financial derivative prices.

  13. Feasibility study of direct spectra measurements for Thomson scattered signals for KSTAR fusion-grade plasmas

    NASA Astrophysics Data System (ADS)

    Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.

    2017-11-01

    Feasibility study of direct spectra measurements of Thomson scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on Selden function including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on bias error, full width at half maximum and entropy of posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.

  14. The role of the uncertainty of measurement of serum creatinine concentrations in the diagnosis of acute kidney injury.

    PubMed

    Kin Tekce, Buket; Tekce, Hikmet; Aktas, Gulali; Uyeturk, Ugur

    2016-01-01

    Uncertainty of measurement is the numeric expression of the errors associated with all measurements taken in clinical laboratories. Serum creatinine concentration is the most common diagnostic marker for acute kidney injury. The goal of this study was to determine the effect of the uncertainty of measurement of serum creatinine concentrations on the diagnosis of acute kidney injury. We calculated the uncertainty of measurement of serum creatinine according to the Nordtest Guide. Retrospectively, we identified 289 patients who were evaluated for acute kidney injury. Of the total patient pool, 233 were diagnosed with acute kidney injury using the AKIN classification scheme and then were compared using statistical analysis. We determined nine probabilities of the uncertainty of measurement of serum creatinine concentrations. There was a statistically significant difference in the number of patients diagnosed with acute kidney injury when uncertainty of measurement was taken into consideration (first probability compared to the fifth p = 0.023 and first probability compared to the ninth p = 0.012). We found that the uncertainty of measurement for serum creatinine concentrations was an important factor for correctly diagnosing acute kidney injury. In addition, based on the AKIN classification scheme, minimizing the total allowable error levels for serum creatinine concentrations is necessary for the accurate diagnosis of acute kidney injury by clinicians.

  15. Rate and reaction probability of the surface reaction between ozone and dihydromyrcenol measured in a bench scale reactor and a room-sized chamber

    NASA Astrophysics Data System (ADS)

    Shu, Shi; Morrison, Glenn C.

    2012-02-01

    Low volatility terpenoids emitted from consumer products can react with ozone on surfaces and may significantly alter concentrations of ozone, terpenoids and reaction products in indoor air. We measured the reaction probability and a second-order surface-specific reaction rate for the ozonation of dihydromyrcenol, a representative indoor terpenoid, adsorbed onto polyvinylchloride (PVC), glass, and latex-paint-coated spheres. The reaction probability ranged from (0.06-8.97) × 10^-5 and was very sensitive to humidity, substrate and mass adsorbed. The average surface reaction probability is about 10 times greater than that for the gas-phase reaction. The second-order surface-specific rate coefficient ranged from (0.32-7.05) × 10^-15 cm^4 s^-1 molecule^-1 and was much less sensitive to humidity, substrate, or mass adsorbed. We also measured the ozone deposition velocity due to adsorbed dihydromyrcenol on painted drywall in a room-sized chamber. Based on that, we calculated the rate coefficient ((0.42-1.6) × 10^-15 cm^4 molecule^-1 s^-1), which was consistent with that derived from bench-scale experiments for the latex paint under similar conditions. We predict that more than 95% of dihydromyrcenol oxidation takes place on indoor surfaces, rather than in building air.

  16. Principal Component Analysis Based Measure of Structural Holes

    NASA Astrophysics Data System (ADS)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that of the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly larger than that of the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.

  17. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis containing feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and has the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output obtained from the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probability neural networks, experimental results based on the F1-measure value verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to the existing approaches. PMID:25722717

  18. Network Security Risk Assessment System Based on Attack Graph and Markov Chain

    NASA Astrophysics Data System (ADS)

    Sun, Fuxiong; Pi, Juntao; Lv, Jin; Cao, Tian

    2017-10-01

    Network security risk assessment technology can find network problems and related vulnerabilities in advance, and it has become an important means to solve the problem of network security. Based on an attack graph and a Markov chain, this paper provides a Network Security Risk Assessment Model (NSRAM). Based on network infiltration tests, NSRAM generates the attack graph by a breadth traversal algorithm. Combined with the international standard CVSS, the attack probabilities of atomic nodes are computed, and then their attack transition probabilities are calculated by a Markov chain. NSRAM selects the optimal attack path after comprehensive measurement to assess network security risk. The simulation results show that NSRAM can reflect the actual situation of network security objectively.
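
    A toy sketch of the path-selection idea (hypothetical graph and probabilities; CVSS-derived values would replace the atomic probabilities in practice):

    ```python
    # Toy attack-graph sketch: atomic attack probabilities, Markov transition probabilities
    # normalized over successors, and selection of the highest-probability attack path.
    def atomic_prob(node):
        atomic = {"A": 0.8, "B": 0.5, "C": 0.7, "D": 0.9}     # hypothetical atomic probabilities
        return atomic.get(node, 1.0)                          # goal node contributes 1.0

    edges = {"start": ["A", "B"], "A": ["C"], "B": ["C", "D"], "C": ["goal"], "D": ["goal"]}

    def transition_prob(node, nxt):
        """Markov transition: successor's atomic probability, normalized over all successors."""
        total = sum(atomic_prob(s) for s in edges[node])
        return atomic_prob(nxt) / total

    def enumerate_paths(node, path=("start",)):
        if node == "goal":
            yield path
            return
        for nxt in edges.get(node, []):
            yield from enumerate_paths(nxt, path + (nxt,))

    def path_probability(path):
        p = 1.0
        for a, b in zip(path, path[1:]):
            p *= transition_prob(a, b) * atomic_prob(b)
        return p

    best = max(enumerate_paths("start"), key=path_probability)
    print("optimal attack path:", " -> ".join(best), f"(p = {path_probability(best):.3f})")
    ```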

  19. Scale Dependence of Spatiotemporal Intermittence of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Siddani, Ravi K.

    2011-01-01

    It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate are described by a mixed distribution with a nonzero probability of having the sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential formula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero, as is sometimes supposed. We also give an explanation of the observed behavior in terms of a simple probabilistic model based on the premise that the rainfall process has an intrinsic memory.

  20. Risk Assessment: Evidence Base

    NASA Technical Reports Server (NTRS)

    Johnson-Throop, Kathy A.

    2007-01-01

    Human systems PRA (Probabilistic Risk Assessment): a) provides quantitative measures of probability, consequence, and uncertainty; and b) communicates risk and informs decision-making. Human health risks rated highest in the ISS PRA are based on a 1997 assessment of clinical events in analog operational settings. Much work remains to analyze the remaining human health risks identified in the Bioastronautics Roadmap.

  1. The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.

    2013-01-01

    We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.

  2. Statistically based material properties: A military handbook-17 perspective

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Vangel, Mark G.

    1990-01-01

    The statistical procedures and their importance in obtaining composite material property values for designing structures for aircraft and military combat systems are described. The property value is such that the strength exceeds this value with a prescribed probability, with 95 percent confidence in the assertion. The survival probabilities are 99 percent and 90 percent for the A and B basis values, respectively. The basis values for strain-to-failure measurements are defined in a similar manner. The B value is the primary concern.
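
    For the normal model, these basis values can be computed with the standard one-sided tolerance factor; the sketch below uses hypothetical strength data and is only an illustration of the definition, not the handbook's full procedure (which also covers other distributions and pooling schemes).

    ```python
    # Normal-model basis values: a one-sided lower tolerance bound such that the stated
    # fraction of the population exceeds it with 95% confidence (A-basis: 99%, B-basis: 90%).
    import numpy as np
    from scipy import stats

    def basis_value(data, survival, confidence=0.95):
        data = np.asarray(data, float)
        n = len(data)
        z_p = stats.norm.ppf(survival)                       # population quantile being bounded
        k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
        return data.mean() - k * data.std(ddof=1)

    strength = np.array([412, 398, 425, 433, 407, 419, 401, 428, 415, 409.0])  # hypothetical strengths [MPa]
    print("B-basis:", round(basis_value(strength, 0.90), 1))
    print("A-basis:", round(basis_value(strength, 0.99), 1))
    ```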

  3. Teleportation of Three-Qubit State via Six-qubit Cluster State

    NASA Astrophysics Data System (ADS)

    Yu, Li-zhi; Sun, Shao-xin

    2015-05-01

    A scheme of probabilistic teleportation was proposed. In this scheme, we took a six-qubit non-maximally entangled cluster state as the quantum channel to teleport an unknown three-qubit entangled state. Based on the results of three Bell state measurements (BSMs), the receiver Bob can reconstruct the initial state with a certain probability by introducing an auxiliary particle and applying an appropriate transformation. We found that the probability of successful transmission depends on the smallest absolute value among the coefficients of the six-particle cluster state.

  4. Rank-k Maximal Statistics for Divergence and Probability of Misclassification

    NASA Technical Reports Server (NTRS)

    Decell, H. P., Jr.

    1972-01-01

    A technique is developed for selecting from n-channel multispectral data some k combinations of the n-channels upon which to base a given classification technique so that some measure of the loss of the ability to distinguish between classes, using the compressed k-dimensional data, is minimized. Information loss in compressing the n-channel data to k channels is taken to be the difference in the average interclass divergences (or probability of misclassification) in n-space and in k-space.

  5. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    DTIC Science & Technology

    2012-09-01

    y(0 : t) denotes all measurements observed up to time t. The goal of prognosis is to determine the end of (useful) life (EOL) of a system, and/or its ... remaining useful life (RUL). For a given fault f, using the fault estimate p(x_f(t), θ_f(t) | y(0 : t)), a probability distribution of EOL, p(EOL_f(t_P) ... is stochastic, EOL/RUL are random variables and we represent them by probability distributions. The acceptable behavior of the system is expressed

  6. Uninformative Prior Multiple Target Tracking Using Evidential Particle Filters

    NASA Astrophysics Data System (ADS)

    Worthy, J. L., III; Holzinger, M. J.

    Space situational awareness requires the ability to initialize state estimation from short measurements and the reliable association of observations to support the characterization of the space environment. The electro-optical systems used to observe space objects cannot fully characterize the state of an object given a short, unobservable sequence of measurements. Further, it is difficult to associate these short-arc measurements if many such measurements are generated through the observation of a cluster of satellites, debris from a satellite break-up, or from spurious detections of an object. An optimization based, probabilistic short-arc observation association approach coupled with a Dempster-Shafer based evidential particle filter in a multiple target tracking framework is developed and proposed to address these problems. The optimization based approach is shown in literature to be computationally efficient and can produce probabilities of association, state estimates, and covariances while accounting for systemic errors. Rigorous application of Dempster-Shafer theory is shown to be effective at enabling ignorance to be properly accounted for in estimation by augmenting probability with belief and plausibility. The proposed multiple hypothesis framework will use a non-exclusive hypothesis formulation of Dempster-Shafer theory to assign belief mass to candidate association pairs and generate tracks based on the belief to plausibility ratio. The proposed algorithm is demonstrated using simulated observations of a GEO satellite breakup scenario.

  7. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus

    PubMed Central

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-01-01

    Abstract DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure. PMID:29800455
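
    The following Monte Carlo sketch illustrates the general idea that the per-nucleus DSB distribution reflects both DNA content (cell-cycle dependent) and cell-to-cell dose variation; the Poisson induction assumption, the yield constant, and the dose spread are invented, not the authors' calibrated model.

      # Hedged sketch: per-nucleus DSB distribution from (i) DNA content and
      # (ii) cell-to-cell variation in absorbed dose. Yield, dose spread, and the
      # Poisson assumption are illustrative, not the paper's calibrated model.
      import numpy as np

      rng = np.random.default_rng(0)

      def dsb_distribution(n_cells, mean_dose_gy, dose_cv, dna_rel, dsb_per_gy_per_dna=35.0):
          """Sample DSB counts per nucleus.

          dna_rel: relative DNA amount per nucleus (e.g., ~1 in G1, up to ~2 near G2),
                   sampled per cell to mimic the cell-cycle distribution of the population.
          """
          dose = rng.normal(mean_dose_gy, dose_cv * mean_dose_gy, n_cells).clip(min=0)
          expected = dsb_per_gy_per_dna * dna_rel * dose
          return rng.poisson(expected)

      # Asynchronous (log-phase-like) vs mostly-G1 (plateau-like) DNA content; hypothetical.
      dna_log = rng.uniform(1.0, 2.0, 100000)
      dna_plateau = rng.uniform(1.0, 1.2, 100000)

      for label, dna in [("log phase", dna_log), ("plateau phase", dna_plateau)]:
          counts = dsb_distribution(100000, mean_dose_gy=1.0, dose_cv=0.2, dna_rel=dna)
          print(f"{label}: mean DSBs = {counts.mean():.1f}, sd = {counts.std():.1f}")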

  8. Estimation of the radiation-induced DNA double-strand breaks number by considering cell cycle and absorbed dose per cell nucleus.

    PubMed

    Mori, Ryosuke; Matsuya, Yusuke; Yoshii, Yuji; Date, Hiroyuki

    2018-05-01

    DNA double-strand breaks (DSBs) are thought to be the main cause of cell death after irradiation. In this study, we estimated the probability distribution of the number of DSBs per cell nucleus by considering the DNA amount in a cell nucleus (which depends on the cell cycle) and the statistical variation in the energy imparted to the cell nucleus by X-ray irradiation. The probability estimation of DSB induction was made following these procedures: (i) making use of the Chinese Hamster Ovary (CHO)-K1 cell line as the target example, the amounts of DNA per nucleus in the logarithmic and the plateau phases of the growth curve were measured by flow cytometry with propidium iodide (PI) dyeing; (ii) the probability distribution of the DSB number per cell nucleus for each phase after irradiation with 1.0 Gy of 200 kVp X-rays was measured by means of γ-H2AX immunofluorescent staining; (iii) the distribution of the cell-specific energy deposition via secondary electrons produced by the incident X-rays was calculated by WLTrack (in-house Monte Carlo code); (iv) according to a mathematical model for estimating the DSB number per nucleus, we deduced the induction probability density of DSBs based on the measured DNA amount (depending on the cell cycle) and the calculated dose per nucleus. The model exhibited DSB induction probabilities in good agreement with the experimental results for the two phases, suggesting that the DNA amount (depending on the cell cycle) and the statistical variation in the local energy deposition are essential for estimating the DSB induction probability after X-ray exposure.

  9. A novel method to identify herds with an increased probability of disease introduction due to animal trade.

    PubMed

    Frössling, Jenny; Nusinovici, Simon; Nöremark, Maria; Widgren, Stefan; Lindberg, Ann

    2014-11-15

    In the design of surveillance, there is often a desire to target high risk herds. Such risk-based approaches result in better allocation of resources and improve the performance of surveillance activities. For many contagious animal diseases, movement of live animals is a main route of transmission, and because of this, herds that purchase many live animals or have a large contact network due to trade can be seen as a high risk stratum of the population. This paper presents a new method to assess herd disease risk in animal movement networks. It is an improvement to current network measures that takes direction, temporal order, and also movement size and probability of disease into account. In the study, the method was used to calculate a probability of disease ratio (PDR) of herds in simulated datasets, and of real herds based on animal movement data from dairy herds included in a bulk milk survey for Coxiella burnetii. Known differences in probability of disease are easily incorporated in the calculations and the PDR was calculated while accounting for regional differences in probability of disease, and also by applying equal probability of disease throughout the population. Each herd's increased probability of disease due to purchase of animals was compared to both the average herd and herds within the same risk stratum. The results show that the PDR is able to capture the different circumstances related to disease prevalence and animal trade contact patterns. Comparison of results based on inclusion or exclusion of differences in risk also highlights how ignoring such differences can influence the ability to correctly identify high risk herds. The method shows a potential to be useful for risk-based surveillance, in the classification of herds in control programmes or to represent influential contacts in risk factor studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities so that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of penalties or negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models. Hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction of the flow contribution of catchment surfaces, is selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability compared to the failure probability is tested by comparing the solutions obtained from the mean-pbf and mean-pf optimizations.
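
    A minimal sketch of the difference between the failure probability and the buffered failure probability, estimated from Monte Carlo samples of the number of annual exceedance events; the sample distribution, the allowed event count, and the simple sample-based estimator below are assumptions rather than the study's implementation.

      # Hedged sketch: failure probability vs buffered failure probability from
      # Monte Carlo samples. Here the random variable is the number of annual CSO
      # spill events exceeding a water-quality threshold, and "failure" means more
      # than max_allowed such events; the data and limit are hypothetical.
      import numpy as np

      rng = np.random.default_rng(1)
      annual_exceedances = rng.poisson(3.0, 5000)   # stand-in for the UQ output
      max_allowed = 5

      def failure_probability(samples, limit):
          return np.mean(samples > limit)

      def buffered_failure_probability(samples, limit):
          """Largest tail fraction whose conditional mean still reaches the limit.

          Sample version of the Rockafellar & Royset idea: pbf = k/N for the
          largest k such that the mean of the k largest samples is >= limit.
          """
          s = np.sort(samples)[::-1]
          tail_means = np.cumsum(s) / np.arange(1, len(s) + 1)   # non-increasing
          k = np.searchsorted(-tail_means, -limit, side="right")
          return k / len(s)

      pf = failure_probability(annual_exceedances, max_allowed)
      pbf = buffered_failure_probability(annual_exceedances, max_allowed)
      print(f"pf = {pf:.3f}, buffered pbf = {pbf:.3f} (pbf >= pf, more conservative)")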

  11. Assessing the Organizational Social Context (OSC) of Child Welfare Systems: Implications for Research and Practice

    ERIC Educational Resources Information Center

    Glisson, Charles; Green, Philip; Williams, Nathaniel J.

    2012-01-01

    Objective: The study: (1) provides the first assessment of the "a priori" measurement model and psychometric properties of the Organizational Social Context (OSC) measurement system in a US nationwide probability sample of child welfare systems; (2) illustrates the use of the OSC in constructing norm-based organizational culture and climate…

  12. The high order dispersion analysis based on first-passage-time probability in financial markets

    NASA Astrophysics Data System (ADS)

    Liu, Chenggong; Shang, Pengjian; Feng, Guochen

    2017-04-01

    The study of first-passage-time (FPT) events in financial time series has attracted broad research attention recently, as it can provide a reference for risk management and investment. In this paper, a new measure, high order dispersion (HOD), is developed based on FPT probability to explore financial time series. The tick-by-tick data of three Chinese stock markets and three American stock markets are investigated. We classify the financial markets successfully by analyzing the scaling properties of the FPT probabilities of the six stock markets and employing the HOD method to compare the differences of the FPT decay curves. Applying the HOD method, it can be concluded that long-range correlation, a fat-tailed broad probability density function, and its coupling with nonlinearity mainly lead to the multifractality of financial time series. Furthermore, we use the fluctuation function of multifractal detrended fluctuation analysis (MF-DFA) to distinguish the markets and obtain results consistent with the HOD method, whereas the HOD method is capable of effectively differentiating stock markets within the same region. We believe that such explorations are relevant for a better understanding of financial market mechanisms.

  13. Absolute measures of the completeness of the fossil record

    NASA Technical Reports Server (NTRS)

    Foote, M.; Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)

    1999-01-01

    Measuring the completeness of the fossil record is essential to understanding evolution over long timescales, particularly when comparing evolutionary patterns among biological groups with different preservational properties. Completeness measures have been presented for various groups based on gaps in the stratigraphic ranges of fossil taxa and on hypothetical lineages implied by estimated evolutionary trees. Here we present and compare quantitative, widely applicable absolute measures of completeness at two taxonomic levels for a broader sample of higher taxa of marine animals than has previously been available. We provide an estimate of the probability of genus preservation per stratigraphic interval, and determine the proportion of living families with some fossil record. The two completeness measures use very different data and calculations. The probability of genus preservation depends almost entirely on the Palaeozoic and Mesozoic records, whereas the proportion of living families with a fossil record is influenced largely by Cenozoic data. These measurements are nonetheless highly correlated, with outliers quite explicable, and we find that completeness is rather high for many animal groups.

  14. Quantum key distribution without the wavefunction

    NASA Astrophysics Data System (ADS)

    Niestegge, Gerd

    A well-known feature of quantum mechanics is the secure exchange of secret bit strings which can then be used as keys to encrypt messages transmitted over any classical communication channel. It is demonstrated that this quantum key distribution allows a much more general and abstract access than commonly thought. The results include some generalizations of the Hilbert space version of quantum key distribution, but are based upon a general nonclassical extension of conditional probability. A special state-independent conditional probability is identified as origin of the superior security of quantum key distribution; this is a purely algebraic property of the quantum logic and represents the transition probability between the outcomes of two consecutive quantum measurements.

  15. Bayesian Cherry Picking Revisited

    NASA Astrophysics Data System (ADS)

    Garrett, Anthony J. M.; Prozesky, Victor M.; Padayachee, J.

    2004-04-01

    Tins are marketed as containing nine cherries. To fill the tins, cherries are fed into a drum containing twelve holes through which air is sucked; either zero, one or two cherries stick in each hole. Dielectric measurements are then made on each hole. Three outcomes are distinguished: empty hole (which is reliable); one cherry (which indicates one cherry with high probability, or two cherries with a complementary low probability known from calibration); or an uncertain number (which also indicates one cherry or two, with known probabilities that are quite similar). A choice can then be made of which holes to discharge simultaneously into the tin. The sum and product rules of probability are applied in a Bayesian manner to find the distribution for the number of cherries in the tin. Based on this distribution, ways are discussed to optimise the number to nine cherries.
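
    A small sketch of the convolution implied by the sum and product rules: given an assumed calibration table of per-hole cherry counts conditional on the measurement outcome, the distribution of the tin total is built hole by hole; the calibration probabilities and the selected holes are invented.

      # Hedged sketch: distribution of the number of cherries discharged into a tin,
      # obtained by convolving per-hole distributions given their measurement outcome.
      # The calibration probabilities and the set of selected holes are made up.
      from collections import defaultdict

      # P(cherries in hole | measurement outcome); hypothetical calibration values.
      outcome_dist = {
          "empty":     {0: 1.0},
          "one":       {1: 0.95, 2: 0.05},
          "uncertain": {1: 0.60, 2: 0.40},
      }

      def tin_distribution(selected_outcomes):
          """Product rule within a hole, sum rule across totals: convolve hole distributions."""
          dist = {0: 1.0}
          for outcome in selected_outcomes:
              new = defaultdict(float)
              for total, p_total in dist.items():
                  for n, p_n in outcome_dist[outcome].items():
                      new[total + n] += p_total * p_n
              dist = dict(new)
          return dist

      # Suppose we discharge eight holes read as "one" and one read as "uncertain".
      dist = tin_distribution(["one"] * 8 + ["uncertain"])
      for n in sorted(dist):
          print(f"P({n} cherries) = {dist[n]:.4f}")
      print("P(at least 9) =", round(sum(p for n, p in dist.items() if n >= 9), 4))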

  16. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
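
    For orientation, the sketch below computes a naive Harrell-style c-index for a discrete risk score with right-censored survival times; it is not the authors' modified concordance probability estimate or the inverse probability censoring weighted c-index, and the toy data are invented.

      # Hedged sketch: a naive Harrell-style c-index for a discrete risk score with
      # right-censored survival times. This is the basic pairwise estimator, not the
      # modified estimators studied in the paper.
      def c_index(times, events, risk):
          """Fraction of usable pairs in which the higher-risk patient fails earlier.

          times: observed times; events: 1 = death observed, 0 = censored;
          risk:  discrete risk-group score (higher = worse prognosis).
          Ties in risk count as half-concordant.
          """
          concordant, usable = 0.0, 0
          n = len(times)
          for i in range(n):
              for j in range(n):
                  # pair usable if patient i is observed to fail before patient j's time
                  if events[i] == 1 and times[i] < times[j]:
                      usable += 1
                      if risk[i] > risk[j]:
                          concordant += 1.0
                      elif risk[i] == risk[j]:
                          concordant += 0.5
          return concordant / usable if usable else float("nan")

      # Hypothetical toy data: three risk groups (0, 1, 2).
      times  = [12, 3, 9, 7, 15, 2, 11, 5]
      events = [0, 1, 1, 1, 0, 1, 1, 1]
      risk   = [0, 2, 1, 2, 0, 2, 1, 1]
      print("c-index ~", round(c_index(times, events, risk), 3))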

  17. Recent trends in the probability of high out-of-pocket medical expenses in the United States

    PubMed Central

    Baird, Katherine E

    2016-01-01

    Objective: This article measures the probability that out-of-pocket expenses in the United States exceed a threshold share of income. It calculates this probability separately by individuals’ health condition, income, and elderly status and estimates changes occurring in these probabilities between 2010 and 2013. Data and Method: This article uses nationally representative household survey data on 344,000 individuals. Logistic regressions estimate the probabilities that out-of-pocket expenses exceed 5% and alternatively 10% of income in the two study years. These probabilities are calculated for individuals based on their income, health status, and elderly status. Results: Despite favorable changes in both health policy and the economy, large numbers of Americans continue to be exposed to high out-of-pocket expenditures. For instance, the results indicate that in 2013 over a quarter of nonelderly low-income citizens in poor health spent 10% or more of their income on out-of-pocket expenses, and over 40% of this group spent more than 5%. Moreover, for Americans as a whole, the probability of spending in excess of 5% of income on out-of-pocket costs increased by 1.4 percentage points between 2010 and 2013, with the largest increases occurring among low-income Americans; the probability of Americans spending more than 10% of income grew from 9.3% to 9.6%, with the largest increases also occurring among the poor. Conclusion: The magnitude of out-of-pocket’s financial burden and the most recent upward trends in it underscore a need to develop good measures of the degree to which health care policy exposes individuals to financial risk, and to closely monitor the Affordable Care Act’s success in reducing Americans’ exposure to large medical bills. PMID:27651901

  18. Reliability of a Measure of Institutional Discrimination against Minorities

    DTIC Science & Technology

    1979-12-01

    samples are presented. The first is based upon classical statistical theory and the second derives from a series of computer-generated Monte Carlo ... Institutional racism and sexism. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978. Hays, W. L. and Winkler, R. L. Statistics: probability, inference ... statistical measure of the e of institutional discrimination are discussed. Two methods of dealing with the problem of reliability of the measure in small

  19. 76 FR 62331 - Atlantic Highly Migratory Species; Atlantic Shark Management Measures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-07

    ...) or 2099. The target year for rebuilding ranged from 2081 to 2257 depending on the state of nature (i... probability of rebuilding by 2099. The base model also estimated that with the current fishing mortality rate...

  20. Controlling for Frailty in Pharmacoepidemiologic Studies of Older Adults: Validation of an Existing Medicare Claims-based Algorithm.

    PubMed

    Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L

    2018-07-01

    Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.

  1. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.

  2. A comparison of ground-based hydroxyl airglow temperatures with SABER/TIMED measurements over 23° N, India

    NASA Astrophysics Data System (ADS)

    Parihar, Navin; Singh, Dupinder; Gurubaran, Subramanian

    2017-03-01

    Ground-based observations of OH (6, 2) Meinel band nightglow were carried out at Ranchi (23.3° N, 85.3° E), India, during January-March 2011, December 2011-May 2012 and December 2012-March 2013 using an all-sky imaging system. Near the mesopause, OH temperatures were derived from the OH (6, 2) Meinel band intensity information. A limited comparison of OH temperatures (TOH) with SABER/TIMED measurements in 30 cases was performed by defining an almost-coincident criterion of ±1.5° latitude-longitude and ±3 min of the ground-based observations. Using SABER OH 1.6 and 2.0 µm volume emission rate profiles as the weighting function, two sets of OH-equivalent temperature (T1.6 and T2.0, respectively) were estimated from its kinetic temperature profile for comparison with OH nightglow measurements. Overall, fair agreement existed between ground-based and SABER measurements in the majority of events within the limits of experimental errors. On average, the mean values of OH-derived temperatures and SABER OH-equivalent temperatures were 197.3 ± 4.6, 192.0 ± 10.8 and 192.7 ± 10.3 K, and the ground-based temperatures were 4-5 K warmer than SABER values. A difference of 8 K or more is noted between the two measurements when the peak of the OH emission layer lies in the vicinity of large temperature inversions. A comparison of OH temperatures derived using different sets of Einstein transition probabilities and SABER measurements was also performed; however, OH temperatures derived using Langhoff et al. (1986) transition probabilities were found to compare well.

  3. The Influence of Optical Coherence Tomography Measurements of Retinal Nerve Fiber Layer on Decision-Making in Glaucoma Diagnosis.

    PubMed

    Fu, Lanxing; Aspinall, Peter; Bennett, Gary; Magidson, Jay; Tatham, Andrew J

    2017-04-01

    To quantify the influence of spectral domain optical coherence tomography (SDOCT) on decision-making in patients with suspected glaucoma. A prospective cross-sectional study involving 40 eyes of 20 patients referred by community optometrists due to suspected glaucoma. All patients had disc photographs and standard automated perimetry (SAP), and results were presented to 13 ophthalmologists who estimated pre-test probability of glaucoma (0-100%) for a total of 520 observations. Ophthalmologists were then permitted to modify probabilities of disease based on SDOCT retinal nerve fiber layer (RNFL) measurements (post-test probability). The effect of information from SDOCT on decision to treat, monitor, or discharge was assessed. Agreement among graders was assessed using intraclass correlation coefficients (ICC) and correlated component regression (CCR) was used to identify variables influencing management decisions. Patients had an average age of 69.0 ± 10.1 years, SAP mean deviation of 2.71 ± 3.13 dB, and RNFL thickness of 86.2 ± 16.7 μm. Average pre-test probability of glaucoma was 37.0 ± 33.6% with SDOCT resulting in a 13.3 ± 18.1% change in estimated probability. Incorporating information from SDOCT improved agreement regarding probability of glaucoma (ICC = 0.50 (95% CI 0.38 to 0.64) without SDOCT versus 0.64 (95% CI 0.52 to 0.76) with SDOCT). SDOCT led to a change from decision to "treat or monitor" to "discharge" in 22 of 520 cases and a change from "discharge" to "treat or monitor" in 11 of 520 cases. Pre-test probability and RNFL thickness were predictors of post-test probability of glaucoma, contributing 69 and 31% of the variance in post-test probability, respectively. Information from SDOCT altered estimated probability of glaucoma and improved agreement among clinicians in those suspected of having the disease.

  4. Impact of competitor species composition on predicting diameter growth and survival rates of Douglas-fir trees in southwestern Oregon

    USGS Publications Warehouse

    Bravo, Felipe; Hann, D.W.; Maguire, Douglas A.

    2001-01-01

    Mixed conifer and hardwood stands in southwestern Oregon were studied to explore the hypothesis that competition effects on individual-tree growth and survival will differ according to the species comprising the competition measure. Likewise, it was hypothesized that competition measures should extrapolate best if crown-based surrogates are given preference over diameter-based (basal area based) surrogates. Diameter growth and probability of survival were modeled for individual Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees growing in pure stands. Alternative models expressing one-sided and two-sided competition as a function of either basal area or crown structure were then applied to other plots in which Douglas-fir was mixed with other conifers and (or) hardwood species. Crown-based variables outperformed basal area based variables as surrogates for one-sided competition in both diameter growth and survival probability, regardless of species composition. In contrast, two-sided competition was best represented by total basal area of competing trees. Surrogates reflecting differences in crown morphology among species relate more closely to the mechanics of competition for light and, hence, facilitate extrapolation to species combinations for which no observations are available.

  5. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal-Noise Reliability Measure Reflect This Precision?

    PubMed

    Staggs, Vincent S; Cramer, Emily

    2016-08-01

    Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc.
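
    A minimal sketch of the signal-noise reliability ratio, using a simple method-of-moments split of unit rate variance into between-unit (signal) and within-unit sampling (noise) components; the study used beta-binomial models, and the unit counts below are invented.

      # Hedged sketch: signal-noise reliability as between-unit variance over total
      # variance, using a method-of-moments decomposition of unit pressure ulcer
      # rates. The study used beta-binomial models; the counts below are invented.
      import numpy as np

      events = np.array([4, 7, 2, 9, 5, 1, 6, 3])                   # ulcers per unit
      n_pats = np.array([300, 280, 350, 260, 310, 400, 290, 330])   # patients surveyed

      p_hat = events / n_pats
      within_var = p_hat * (1 - p_hat) / n_pats                     # binomial sampling noise per unit
      between_var = max(p_hat.var(ddof=1) - within_var.mean(), 0.0) # signal (moment estimate)

      reliability = between_var / (between_var + within_var.mean())
      print(f"signal-noise reliability ~ {reliability:.2f}")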

  6. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  7. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations

    PubMed Central

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360

  8. New estimates of lethality of sea lamprey (Petromyzon marinus) attacks on lake trout (Salvelinus namaycush): Implications for fisheries management

    USGS Publications Warehouse

    Madenjian, C.P.; Chipman, B.D.; Marsden, J.E.

    2008-01-01

    Sea lamprey (Petromyzon marinus) control in North America costs millions of dollars each year, and control measures are guided by assessment of lamprey-induced damage to fisheries. The favored prey of sea lamprey in freshwater ecosystems has been lake trout (Salvelinus namaycush). A key parameter in assessing sea lamprey damage, as well as managing lake trout fisheries, is the probability of an adult lake trout surviving a lamprey attack. The conventional value for this parameter has been 0.55, based on laboratory experiments. In contrast, based on catch curve analysis, mark-recapture techniques, and observed wounding rates, we estimated that adult lake trout in Lake Champlain have a 0.74 probability of surviving a lamprey attack. Although sea lamprey growth in Lake Champlain was lower than that observed in Lake Huron, application of an individual-based model to both lakes indicated that the probability of surviving an attack in Lake Champlain was only 1.1 times higher than that in Lake Huron. Thus, we estimated that lake trout survive a lamprey attack in Lake Huron with a probability of 0.66. Therefore, our results suggested that lethality of a sea lamprey attack on lake trout has been overestimated in previous model applications used in fisheries management. ?? 2008 NRC.

  9. Measuring body mass index (BMI) in nursing home residents: The usefulness of measurement of arm span

    PubMed Central

    Nygaard, Harald A.

    2008-01-01

    Objective To study whether arm span can be used as substitute for measurement of height in nursing home patients for calculating body mass index (BMI). Design Explanatory observational study. Setting Assessment of 35 nursing home residents admitted to long-term stay in a nursing home. Main outcome measures Correlation between measured height and arm span and of BMI based on both measures. Results Measured height and arm span, and BMI calculated from either measure were significantly correlated, rs=0.75, p <0.001 and rs=0.89, p <0.001, respectively. The ratios of measured height and arm span and between BMIs based on height or arm span are close to 1, but the dispersion is rather large. Conclusion Arm span is a reliable substitute for measurement of height in nursing home patients. In persons with severe height reduction, arm-span-based BMI is probably more accurate than conventional height-based BMI. PMID:18297563

  10. Methodology for building confidence measures

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron L.

    2004-04-01

    This paper presents a generalized methodology for propagating known or estimated levels of individual source document truth reliability to determine the confidence level of a combined output. Initial document certainty levels are augmented by (i) combining the reliability measures of multiple sources, (ii) incorporating the truth reinforcement of related elements, and (iii) incorporating the importance of the individual elements for determining the probability of truth for the whole. The result is a measure of confidence in system output based on establishing links among the truth values of inputs. This methodology was developed for application to a multi-component situation awareness tool under development at the Air Force Research Laboratory in Rome, New York. Determining how improvements in data quality and the variety of documents collected affect the probability of a correct situational detection helps optimize the performance of the tool overall.

  11. Sustainable Odds

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2016-12-01

    While probability forecasting has many philosophical and mathematical attractions, it is something of a dishonest nonsense if acting on such forecasts is expected to lead to rapid ruin. Model-based probabilities, when interpreted as actionable, are shown to lead to the rapid ruin of a cooperative entity offering odds interpreting the probability forecasts at face value. Arguably, these odds would not be considered "fair", but inasmuch as some definitions of "fair odds" include this case, this presentation will focus on "sustainable odds": Odds which are not expected to lead to the rapid ruin of the cooperative under the assumption that those placing bets have no information beyond that available to the forecast system. It is argued that sustainable odds will not correspond to probabilities outside the Perfect Model Scenario, that the "implied probabilities" determined from sustainable odds will always sum to more than one, and that the excess of this sum over one reflects the skill of the forecast system, being a quantitative measure of structural model error.
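
    A tiny worked example of the point that the implied probabilities determined from sustainable odds sum to more than one; the quoted decimal odds are invented.

      # Hedged sketch: implied probabilities from quoted decimal odds. Sustainable
      # (non-ruinous) odds imply probabilities that sum to more than one; the excess
      # over one can be read as a margin reflecting forecast-system skill.
      # The quoted odds below are invented.
      decimal_odds = {"rain": 2.2, "no rain": 1.6}       # payout per unit stake

      implied = {k: 1.0 / v for k, v in decimal_odds.items()}
      overround = sum(implied.values()) - 1.0

      for k, p in implied.items():
          print(f"implied P({k}) = {p:.3f}")
      print(f"sum = {sum(implied.values()):.3f}, excess over 1 = {overround:.3f}")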

  12. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy.

    PubMed

    Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
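
    The sketch below estimates Renyi entropy two ways, from a histogram of single RR intervals and from a histogram of consecutive-interval pairs (a crude stand-in for the sequence-based density method); the bin count, order alpha, and synthetic RR series are assumptions, and the paper compares nine such variants.

      # Hedged sketch: Renyi entropy of order alpha estimated from (i) a histogram
      # of single RR intervals and (ii) a 2-D histogram of consecutive-interval
      # pairs. Bin width, alpha, and the synthetic RR series are assumptions.
      import numpy as np

      def renyi_entropy(probs, alpha=2.0):
          probs = np.asarray([p for p in probs if p > 0])
          if np.isclose(alpha, 1.0):                      # Shannon limit
              return float(-np.sum(probs * np.log(probs)))
          return float(np.log(np.sum(probs ** alpha)) / (1.0 - alpha))

      rng = np.random.default_rng(2)
      rr = 0.8 + 0.05 * rng.standard_normal(2000)         # synthetic RR intervals (s)

      # (i) single-interval histogram
      counts, _ = np.histogram(rr, bins=30)
      h_single = renyi_entropy(counts / counts.sum())

      # (ii) pairs of consecutive intervals, binned on a 2-D grid
      counts2, _, _ = np.histogram2d(rr[:-1], rr[1:], bins=30)
      h_pairs = renyi_entropy(counts2.ravel() / counts2.sum())

      print(f"Renyi (alpha=2), single intervals: {h_single:.3f}; interval pairs: {h_pairs:.3f}")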

  13. How to Calculate Renyi Entropy from Heart Rate Variability, and Why it Matters for Detecting Cardiac Autonomic Neuropathy

    PubMed Central

    Cornforth, David J.;  Tarvainen, Mika P.; Jelinek, Herbert F.

    2014-01-01

    Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN. PMID:25250311

  14. Delirium superimposed on dementia: defining disease states and course from longitudinal measurements of a multivariate index using latent class analysis and hidden Markov chains.

    PubMed

    Ciampi, Antonio; Dyachenko, Alina; Cole, Martin; McCusker, Jane

    2011-12-01

    The study of mental disorders in the elderly presents substantial challenges due to population heterogeneity, coexistence of different mental disorders, and diagnostic uncertainty. While reliable tools have been developed to collect relevant data, new approaches to study design and analysis are needed. We focus on a new analytic approach. Our framework is based on latent class analysis and hidden Markov chains. From repeated measurements of a multivariate disease index, we extract the notion of underlying state of a patient at a time point. The course of the disorder is then a sequence of transitions among states. States and transitions are not observable; however, the probability of being in a state at a time point, and the transition probabilities from one state to another over time can be estimated. Data from 444 patients with and without diagnosis of delirium and dementia were available from a previous study. The Delirium Index was measured at diagnosis, and at 2 and 6 months from diagnosis. Four latent classes were identified: fairly healthy, moderately ill, clearly sick, and very sick. Dementia and delirium could not be separated on the basis of these data alone. Indeed, as the probability of delirium increased, so did the probability of decline of mental functions. Eight most probable courses were identified, including good and poor stable courses, and courses exhibiting various patterns of improvement. Latent class analysis and hidden Markov chains offer a promising tool for studying mental disorders in the elderly. Its use may show its full potential as new data become available.

  15. Functional-diversity indices can be driven by methodological choices and species richness.

    PubMed

    Poos, Mark S; Walker, Steven C; Jackson, Donald A

    2009-02-01

    Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated about whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive, and that patterns in sensitivity were related to alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.

  16. Rating competitors before tournament starts: How it's affecting team progression in a soccer tournament

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini

    2014-12-01

    In professional sporting events, rating competitors before a tournament starts is a well-known approach to distinguishing the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we use to evaluate the different rating approaches is the tournament outcome characteristic measure, defined as the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects on the teams. For eight teams in a standard-seeding knockout, Perak has the highest probability of winning a tournament that uses the least squares rating approach, PKNS has the highest probability of winning under the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league season approach.
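
    As a hedged illustration of one of the four approaches, the sketch below computes least-squares ratings from match score margins; the fixtures and margins are invented, and this is a generic least-squares rating, not necessarily the exact formulation used in the paper.

      # Hedged sketch: least-squares team ratings from match score differences.
      # Ratings r solve min_r sum over games (r_home - r_away - margin)^2, with the
      # ratings constrained to sum to zero. The teams' results below are invented.
      import numpy as np

      games = [("Perak", "PKNS", 2), ("PKNS", "Kedah", 1), ("Kedah", "Perak", -1),
               ("Perak", "Pahang", 3), ("Pahang", "PKNS", -2), ("Kedah", "Pahang", 0)]

      teams = sorted({t for g in games for t in g[:2]})
      idx = {t: i for i, t in enumerate(teams)}

      A = np.zeros((len(games) + 1, len(teams)))
      b = np.zeros(len(games) + 1)
      for row, (home, away, margin) in enumerate(games):
          A[row, idx[home]], A[row, idx[away]], b[row] = 1.0, -1.0, margin
      A[-1, :] = 1.0                                      # constraint: ratings sum to zero
      ratings, *_ = np.linalg.lstsq(A, b, rcond=None)

      for t in sorted(teams, key=lambda t: -ratings[idx[t]]):
          print(f"{t:8s} rating = {ratings[idx[t]]: .2f}")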

  17. Financial derivative pricing under probability operator via Esscher transfomation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achi, Godswill U., E-mail: achigods@yahoo.com

    2014-10-24

    The problem of pricing contingent claims has been extensively studied for non-Gaussian models, and in particular, the Black-Scholes formula has been derived for the NIG asset pricing model. This approach was first developed in insurance pricing [9], where the original distortion function was defined in terms of the normal distribution. It was later studied in [6], where standard Black-Scholes contingent pricing was compared with distortion-based contingent pricing. In this paper, we aim to use distortion operators given by the Cauchy distribution under a simple transformation to price contingent claims. We also show that we can recover the Black-Scholes formula using this distribution. Similarly, in a financial market in which the asset price is represented by a stochastic differential equation with respect to Brownian motion, a price mechanism based on the characteristic Esscher measure can generate approximately arbitrage-free financial derivative prices. The price representation derived involves the probability Esscher measure and the Esscher martingale measure, and under a new complex-valued measure φ(u) evaluated at the characteristic exponents φ_x(u) of X_t we recover the Black-Scholes formula for financial derivative prices.

  18. Reliability analysis and fault-tolerant system development for a redundant strapdown inertial measurement unit. [inertial platforms

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27 state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate, both the gyro and accelerometer failure rates together, false alarms, probability of failure detection, probability of failure isolation, and probability of damage effects and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.

  19. Modeling reliability measurement of interface on information system: Towards the forensic of rules

    NASA Astrophysics Data System (ADS)

    Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan

    2018-02-01

    Today almost all machines depend on software. A software and hardware system also depends on rules, that is, the procedures for its use. If a procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can also be measured accordingly. Therefore, this paper introduces an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by the usage rules of the relevant agencies. The enumeration model is obtained from a software reliability calculation.

  20. Scintillation statistics measured in an earth-space-earth retroreflector link

    NASA Technical Reports Server (NTRS)

    Bufton, J. L.

    1977-01-01

    Scintillation was measured in a vertical path from a ground-based laser transmitter to the Geos 3 satellite and back to a ground-based receiver telescope, and the experimental results were compared with analytical results presented in a companion paper (Bufton, 1977). The normalized variance, the probability density function and the power spectral density of scintillation were all measured. Moments of the satellite scintillation data in terms of normalized variance were lower than expected. The power spectrum analysis suggests that there were scintillation components at frequencies higher than the 250 Hz bandwidth available in the experiment.

  1. Probability based models for estimation of wildfire risk

    Treesearch

    Haiganoush Preisler; D. R. Brillinger; R. E. Burgan; John Benoit

    2004-01-01

    We present a probability-based model for estimating fire risk. Risk is defined using three probabilities: the probability of fire occurrence; the conditional probability of a large fire given ignition; and the unconditional probability of a large fire. The model is based on grouped data at the 1 km²-day cell level. We fit a spatially and temporally explicit non-...
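
    A one-cell worked example of how the three probabilities combine (numbers invented): the unconditional probability of a large fire is the product of the ignition probability and the conditional probability of a large fire given ignition.

      # Hedged worked example for one 1 km^2-day cell (numbers invented):
      # unconditional P(large fire) = P(ignition) * P(large fire | ignition).
      p_ignition = 0.002               # probability a fire starts in the cell-day
      p_large_given_ignition = 0.05    # probability an ignition grows into a large fire
      p_large = p_ignition * p_large_given_ignition
      print(f"P(large fire in cell-day) = {p_large:.6f}")   # 0.0001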

  2. Experimental radiative lifetimes, branching fractions, and oscillator strengths of some levels in Tm III

    NASA Astrophysics Data System (ADS)

    Yu, Qi; Wang, Xinghao; Li, Qiu; Gong, Yimin; Dai, Zhenwen

    2018-06-01

    Natural radiative lifetimes for five even-parity levels of Tm III were measured by time-resolved laser-induced fluorescence method. The branching fraction measurements were performed based on the emission spectra of a hollow cathode lamp. By combining the measured branching fractions and the lifetime values reported in this work and in literature, experimental transition probabilities and oscillator strengths for 11 transitions were derived for the first time.

  3. Crash probability estimation via quantifying driver hazard perception.

    PubMed

    Li, Yang; Zheng, Yang; Wang, Jianqiang; Kodaka, Kenji; Li, Keqiang

    2018-07-01

    Crash probability estimation is an important method to predict the potential reduction of crash probability contributed by forward collision avoidance technologies (FCATs). In this study, we propose a practical approach to estimate crash probability, which combines a field operational test and numerical simulations of a typical rear-end crash model. To consider driver hazard perception characteristics, we define a novel hazard perception measure, called driver risk response time, by considering both time-to-collision (TTC) and driver braking response to impending collision risk in a near-crash scenario. Also, we establish a driving database under mixed Chinese traffic conditions based on a CMBS (Collision Mitigation Braking Systems)-equipped vehicle. Applying the crash probability estimation in this database, we estimate the potential decrease in crash probability owing to the use of CMBS. A comparison of the results with CMBS on and off shows a 13.7% reduction of crash probability in a typical rear-end near-crash scenario with a one-second delay of the driver's braking response. These results indicate that CMBS is beneficial for collision prevention, especially in the case of inattentive or older drivers. The proposed crash probability estimation offers a practical way for evaluating the safety benefits in the design and testing of FCATs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Contextual Fraction as a Measure of Contextuality.

    PubMed

    Abramsky, Samson; Barbosa, Rui Soares; Mansfield, Shane

    2017-08-04

    We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e., tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotonic with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement-based quantum computing.

  5. Contextual Fraction as a Measure of Contextuality

    NASA Astrophysics Data System (ADS)

    Abramsky, Samson; Barbosa, Rui Soares; Mansfield, Shane

    2017-08-01

    We consider the contextual fraction as a quantitative measure of contextuality of empirical models, i.e., tables of probabilities of measurement outcomes in an experimental scenario. It provides a general way to compare the degree of contextuality across measurement scenarios; it bears a precise relationship to violations of Bell inequalities; its value, and a witnessing inequality, can be computed using linear programming; it is monotonic with respect to the "free" operations of a resource theory for contextuality; and it measures quantifiable advantages in informatic tasks, such as games and a form of measurement-based quantum computing.

  6. Minimal entropy approximation for cellular automata

    NASA Astrophysics Data System (ADS)

    Fukś, Henryk

    2014-02-01

    We present a method for the construction of approximate orbits of measures under the action of cellular automata which is complementary to the local structure theory. The local structure theory is based on the idea of Bayesian extension, that is, construction of a probability measure consistent with given block probabilities and maximizing entropy. If instead of maximizing entropy one minimizes it, one can develop another method for the construction of approximate orbits, at the heart of which is the iteration of finite-dimensional maps, called minimal entropy maps. We present numerical evidence that the minimal entropy approximation sometimes outperforms the local structure theory in characterizing the properties of cellular automata. The density response curve for elementary CA rule 26 is used to illustrate this claim.

  7. Improving suicide risk assessment in a managed-care environment.

    PubMed

    Brown, G S; Jones, Edward R; Betts, Ellen; Wu, Jingyang

    2003-01-01

    This article describes the quality improvement intervention of a managed behavioral healthcare company to improve the quality of suicide risk assessments by its panel of providers. At-risk cases are identified by the patient's self-reported high frequency of suicidal ideation on a standardized outcome measure. Clinicians also assess severity of suicidal ideation based on clinical interviews. The clinician's assessment is identified as probably erroneous if the patient report indicates a high frequency of suicidal ideation and the clinician's assessment of suicidal ideation is "none." The article describes the methods used to encourage clinicians to utilize information from the patient self-report measure as part of the clinical assessment. Probable suicidal ideation assessment errors were subsequently reduced by 29% over a 1-year period of administration.

  8. Forecasting a winner for Malaysian Cup 2013 using soccer simulation model

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Fauzee, Mohd Soffian Omar; Latif, Rozita Abdul

    2014-07-01

    This paper investigates, through soccer simulation, the probability of each team winning the Malaysia Cup 2013. Our methodology is to predict the outcomes of individual matches and then simulate the Malaysia Cup 2013 tournament 5000 times. As match outcomes are always a matter of uncertainty, a statistical model, in particular a double Poisson model, is used to predict the number of goals scored and conceded by each team. Maximum likelihood estimation is used to estimate the attacking strength and defensive weakness of each team. Based on our simulation results, LionXII has the highest probability of becoming the winner, followed by Selangor, ATM, JDT and Kelantan. Meanwhile, T-Team, Negeri Sembilan and Felda United have lower probabilities of winning the Malaysia Cup 2013. In summary, we find that the probability of each team becoming the winner is small, indicating that the level of competitive balance in the Malaysia Cup 2013 is quite high.
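
    A minimal sketch of this kind of tournament simulation is shown below, assuming hypothetical attack and defence strengths for four fictitious teams under an independent double Poisson goal model; the parameter values actually fitted by the authors are not reproduced.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Hypothetical (attack, defence) strengths on a log scale; illustration only.
teams = {"A": (0.30, -0.10), "B": (0.10, 0.00), "C": (-0.05, 0.05), "D": (-0.20, 0.15)}
N_SIM = 5000
titles = {t: 0 for t in teams}

for _ in range(N_SIM):
    points = {t: 0 for t in teams}
    for home, away in combinations(teams, 2):
        # Double Poisson: goals are independent Poisson counts whose means depend
        # on the attacking strength of one side and the defensive weakness of the other.
        mu_home = np.exp(teams[home][0] + teams[away][1])
        mu_away = np.exp(teams[away][0] + teams[home][1])
        gh, ga = rng.poisson(mu_home), rng.poisson(mu_away)
        if gh > ga:
            points[home] += 3
        elif ga > gh:
            points[away] += 3
        else:
            points[home] += 1
            points[away] += 1
    titles[max(points, key=points.get)] += 1

for t, wins in titles.items():
    print(f"P({t} wins) ~= {wins / N_SIM:.3f}")
```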

  9. Probabilistic Nowcasting of Low-Visibility Procedure States at Vienna International Airport During Cold Season

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian J.; Mayr, Georg J.; Zeileis, Achim

    2018-04-01

    Airport operations are sensitive to visibility conditions. Low-visibility events may lead to capacity reductions, delays and economic losses. Different levels of low-visibility procedures (lvp) are enacted to ensure aviation safety. A nowcast of the probabilities for each of the lvp categories helps decision makers to optimally schedule their operations. An ordered logistic regression (OLR) model is used to forecast these probabilities directly. It is applied to cold season forecasts at Vienna International Airport for lead times from 30 min out to 2 h. Model inputs are standard meteorological measurements. The skill of the forecasts is assessed with the ranked probability score. OLR outperforms persistence, which is a strong contender at the shortest lead times. The ranked probability score of the OLR is even better than that of nowcasts from human forecasters. The OLR-based nowcasting system is computationally fast and can be updated instantaneously when new data become available.
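
    The heart of an ordered logistic regression nowcast is the computation of category probabilities from a linear predictor and ordered cut-points. The sketch below shows that computation with hypothetical coefficients and predictors; it is not the fitted Vienna model.

```python
import numpy as np

def ordered_logit_probs(x, beta, cutpoints):
    """Category probabilities for an ordered logit model.

    P(Y <= k) = logistic(theta_k - x @ beta); category probabilities are
    successive differences of these cumulative probabilities.
    """
    eta = x @ beta
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))
    cdf = np.concatenate(([0.0], cdf, [1.0]))
    return np.diff(cdf)

# Hypothetical predictors: visibility (km), dew-point spread (K), wind speed (m/s).
x = np.array([0.8, 0.5, 2.0])
beta = np.array([-2.0, -1.5, 0.3])     # illustrative coefficients only
cutpoints = [-4.0, -2.0, 0.0]          # three thresholds -> four illustrative lvp categories
print(ordered_logit_probs(x, beta, cutpoints))   # probabilities of the 4 categories
```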

  10. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks.

    PubMed

    He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei

    2014-01-01

    Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, biological network data represented as a probability model better reflect their authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines analysis of the circuit topology with related physical properties of the node voltages in order to evaluate the probability isomorphism between probability subgraphs, and thereby avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experimental results on Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method efficiently discovers the frequent probability subgraphs. The discovered subgraphs contain all probability motifs reported in experiments published in other related papers. The circuit simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies.

  11. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, because of inevitable experimental error and noisy data, biological network data represented as a probability model better reflect their authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines analysis of the circuit topology with related physical properties of the node voltages in order to evaluate the probability isomorphism between probability subgraphs, and thereby avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: Experimental results on Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method efficiently discovers the frequent probability subgraphs. The discovered subgraphs contain all probability motifs reported in experiments published in other related papers. Conclusions: The circuit simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277

  12. Measurement of Plutonium-240 Angular Momentum Dependent Fission Probabilities Using the Alpha-Alpha' Reaction

    NASA Astrophysics Data System (ADS)

    Koglin, Johnathon

    Accurate nuclear reaction data from a few keV to tens of MeV and across the table of nuclides are essential to a number of applications of nuclear physics, including national security, nuclear forensics, nuclear astrophysics, and nuclear energy. Precise determination of (n, f) and neutron capture cross sections for reactions in high-flux environments is particularly important for a proper understanding of nuclear reactor performance and stellar nucleosynthesis. In these extreme environments, reactions on short-lived and otherwise difficult-to-produce isotopes play a significant role in system evolution and provide insights into the types of nuclear processes taking place; a detailed understanding of these processes is necessary to properly determine cross sections far from stability. Indirect methods are often attempted to measure cross sections on isotopes that are difficult to separate in a laboratory setting. Using the surrogate approach, the same compound nucleus as in the reaction of interest is created through a "surrogate" reaction on a different isotope and the resulting decay is measured. This result is combined with appropriate reaction theory for compound nucleus population, from which the desired cross sections can be inferred. This method has shown promise, but the theoretical framework often lacks the experimental data necessary to constrain models. In this work, dual arrays of silicon telescope particle identification detectors and photovoltaic (solar) cell fission fragment detectors have been used to measure the fission probability of the 240Pu(alpha, alpha'f) reaction - a surrogate for 239Pu(n, f) - at a beam energy of 35.9(2) MeV, at eleven scattering angles from 40° to 140° in 10° intervals, and at nuclear excitation energies up to 16 MeV. Within experimental uncertainty, the maximum fission probability was observed at the neutron separation energy for each alpha scattering angle. Fission probabilities were separated into five 500 keV bins from 5.5 MeV to 8.0 MeV and one bin from 4.5 MeV to 5.5 MeV. Across energy bins, the fission probability increases approximately linearly with increasing excitation energy: at 90°, the fission probability increases from 0.069(6) in the lowest energy bin to 0.59(2) in the highest. Likewise, within a single energy bin the fission probability increases with alpha' scattering angle: within the 6.5-7.0 MeV energy bin, the fission probability increased from 0.41(1) at 60° to 0.81(10) at 140°. Fission fragment angular distributions were also measured, integrated over each energy bin. These distributions were fit to theoretical distributions based on combinations of transitional nuclear vibrational and rotational excitations at the saddle point. Contributions from specific K vibrational states were extracted and combined with fission probability measurements to determine the relative fission probability of each state as a function of nuclear excitation energy. Within a given excitation energy bin, contributions from K states greater than the minimum K = 0 state tend to increase with increasing alpha' scattering angle. This is attributed to an increase in the transferred angular momentum associated with larger scattering angles. The 90° alpha' scattering angle produced the highest quality results. The relative contributions of K states do not show a discernible trend across the energy spectrum.
The energy-binned results confirm existing measurements that place a K = 2 state in the first energy bin, with the opening of K = 1 and K = 4 states at energies above 5.5 MeV. This experiment represents the first of its kind in which fission probabilities and angular distributions are simultaneously measured at a large number of scattering angles. The acquired fission probability, angular distribution, and K state contributions provide a diverse dataset against which microscopic fission models can be constrained, furthering the understanding of the properties of 240Pu fission.

  13. Interpreting null results from measurements with uncertain correlations: an info-gap approach.

    PubMed

    Ben-Haim, Yakov

    2011-01-01

    Null events—not detecting a pernicious agent—are the basis for declaring the agent is absent. Repeated nulls strengthen confidence in the declaration. However, correlations between observations are difficult to assess in many situations and introduce uncertainty in interpreting repeated nulls. We quantify uncertain correlations using an info-gap model, which is an unbounded family of nested sets of possible probabilities. An info-gap model is nonprobabilistic and entails no assumption about a worst case. We then evaluate the robustness, to uncertain correlations, of estimates of the probability of a null event. This is then the basis for evaluating a nonprobabilistic robustness-based confidence interval for the probability of a null. © 2010 Society for Risk Analysis.

  14. Prediction of betavoltaic battery output parameters based on SEM measurements and Monte Carlo simulation.

    PubMed

    Yakimov, Eugene B

    2016-06-01

    An approach for predicting the output parameters of a 63Ni-based betavoltaic battery is described. It consists of multilayer Monte Carlo simulation to obtain the depth dependence of the excess carrier generation rate inside the semiconductor converter, a determination of the collection probability based on electron beam induced current measurements, a calculation of the current induced in the semiconductor converter by beta radiation, and SEM measurements of output parameters using the calculated induced current value. This approach makes it possible to predict the betavoltaic battery parameters and optimize the converter design for any real semiconductor structure and any thickness and specific activity of the beta-radiation source. Copyright © 2016 Elsevier Ltd. All rights reserved.
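
    The quantity at the core of this approach is an induced-current integral: the beta-generated carrier generation rate at each depth is weighted by the depth-dependent collection probability and integrated through the converter. A schematic version, with made-up generation and collection profiles rather than Monte Carlo or EBIC data, is:

```python
import numpy as np

# Depth grid through the semiconductor converter (micrometres).
z = np.linspace(0.0, 5.0, 501)

# Hypothetical profiles (illustration only): an exponentially decaying generation
# rate G(z) [pairs / (cm^3 s)] and a collection probability CP(z) that falls off
# away from the junction.
G = 1e18 * np.exp(-z / 1.0)
CP = np.exp(-z / 2.5)

q = 1.602e-19                    # elementary charge, C
dz_cm = (z[1] - z[0]) * 1e-4     # micrometres -> cm

# Induced current density: J = q * integral of G(z) * CP(z) dz   [A / cm^2]
J = q * np.sum(G * CP * dz_cm)
print(f"induced current density ~= {J:.3e} A/cm^2")
```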

  15. Geospatial tools effectively estimate nonexceedance probabilities of daily streamflow at ungauged and intermittently gauged locations in Ohio

    USGS Publications Warehouse

    Farmer, William H.; Koltun, Greg

    2017-01-01

    Study region: The state of Ohio in the United States, a humid, continental climate. Study focus: The estimation of nonexceedance probabilities of daily streamflows as an alternative means of establishing the relative magnitudes of streamflows associated with hydrologic and water-quality observations. New hydrological insights for the region: Several methods for estimating nonexceedance probabilities of daily mean streamflows are explored, including single-index methodologies (nearest-neighboring index) and geospatial tools (kriging and topological kriging). These methods were evaluated by conducting leave-one-out cross-validations based on analyses of nearly 7 years of daily streamflow data from 79 unregulated streamgages in Ohio and neighboring states. The pooled, ordinary kriging model, with a median Nash–Sutcliffe performance of 0.87, was superior to the single-site index methods, though there was some bias in the tails of the probability distribution. Incorporating network structure through topological kriging did not improve performance. The pooled, ordinary kriging model was applied to 118 locations without systematic streamgaging across Ohio where instantaneous streamflow measurements had been made concurrent with water-quality sampling on at least 3 separate days. Spearman rank correlations between estimated nonexceedance probabilities and measured streamflows were high, with a median value of 0.76. For practical application, accounting for the degree of regulation at candidate sites helped to specify the streamgages required to implement the kriging approaches successfully.
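
    The target variable in this study, the nonexceedance probability of a daily flow, is simply the value of the empirical cumulative distribution (flow-duration) curve at that flow. A minimal sketch using the Weibull plotting position is shown below; the streamflow values are synthetic.

```python
import numpy as np

def nonexceedance_probability(daily_flows, value):
    """Empirical nonexceedance probability of `value`, using the Weibull
    plotting position rank/(n+1) applied to the daily-flow record."""
    flows = np.sort(np.asarray(daily_flows))
    rank = np.searchsorted(flows, value, side="right")
    return rank / (len(flows) + 1)

rng = np.random.default_rng(2)
flows = rng.lognormal(mean=3.0, sigma=1.0, size=7 * 365)   # synthetic ~7-year record
print(nonexceedance_probability(flows, np.median(flows)))  # close to 0.5
```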

  16. Computing Real-time Streamflow Using Emerging Technologies: Non-contact Radars and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, J. W.; Bjerklie, D. M.; Jones, J. W.; Minear, J. T.

    2015-12-01

    Measuring streamflow and developing and maintaining rating curves at new streamgaging stations are time-consuming and problematic. Hydro 21 was an initiative by the U.S. Geological Survey to provide vision and leadership to identify and evaluate new technologies and methods that had the potential to change the way in which streamgaging is conducted. Since 2014, additional trials have been conducted to evaluate some of the methods promoted by the Hydro 21 Committee. Emerging technologies such as continuous-wave radars and computationally efficient methods such as the Probability Concept require significantly less field time, enable real-time velocity and streamflow measurements, and apply to unsteady flow conditions such as looped ratings and unsteady flood flows. Portable and fixed-mount radars have advanced beyond the development phase, are cost effective, and are readily available in the marketplace. The Probability Concept is based on an alternative velocity-distribution equation developed by C.-L. Chiu, who pioneered the concept. By measuring the surface-water velocity and correcting for environmental influences such as wind drift, radars offer a reliable alternative for measuring and computing real-time streamflow for a variety of hydraulic conditions. If successful, these tools may allow us to establish ratings more efficiently, assess unsteady flow conditions, and report real-time streamflow at new streamgaging stations.
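
    In the Probability Concept, the ratio of mean to maximum velocity in a cross section is a function of a single entropy parameter M; the commonly cited form of that relation is sketched below. The surface velocity, entropy parameter, and cross-sectional area are illustrative assumptions, not field data, and taking the radar-measured surface velocity as an approximation of the maximum velocity is itself a simplification.

```python
import numpy as np

def phi(M):
    """Chiu's ratio of mean to maximum velocity: phi(M) = e^M/(e^M - 1) - 1/M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

# Illustrative values only: a radar-measured surface velocity taken as a proxy for
# the maximum in-channel velocity, an assumed site-specific entropy parameter M,
# and a surveyed cross-sectional area.
u_surface = 1.8     # m/s
M = 2.1             # entropy parameter (assumed)
area = 45.0         # m^2

u_mean = phi(M) * u_surface
discharge = u_mean * area
print(f"phi(M) = {phi(M):.2f}, mean velocity ~= {u_mean:.2f} m/s, Q ~= {discharge:.1f} m^3/s")
```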

  17. Discharge rate measurements for Micromegas detectors in the presence of a longitudinal magnetic field

    NASA Astrophysics Data System (ADS)

    Moreno, B.; Aune, S.; Ball, J.; Charles, G.; Giganon, A.; Konczykowski, P.; Lahonde-Hamdoun, C.; Moutarde, H.; Procureur, S.; Sabatié, F.

    2011-10-01

    We present first discharge rate measurements for Micromegas detectors in the presence of a high longitudinal magnetic field in the GeV kinematical region. Measurements were performed using two Micromegas detectors and a photon beam impinging on a CH2 target in Hall B of the Jefferson Laboratory. One detector was equipped with an additional GEM foil, and a reduction of the discharge probability by two orders of magnitude compared to the stand-alone Micromegas was observed. The detectors were placed in the FROST solenoid providing a longitudinal magnetic field up to 5 T. This allowed precise measurements of the dependence of the discharge probability on a diffusion-reducing magnetic field. Between 0 and 5 T, the discharge probability increased by a factor of 10 for polar angles between 19° and 34°. A GEANT4-based simulation developed for sparking rate calculation was calibrated against these data in order to predict the sparking rate in a high longitudinal magnetic field environment. This simulation is then used to investigate the possible use of Micromegas in the Forward Vertex Tracker (FVT) of the future CLAS12 spectrometer. In the case of the FVT, a sparking rate of 1 Hz per detector was obtained at the anticipated CLAS12 luminosity.

  18. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure; the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice when applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  19. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure; the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice when applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  20. LightForce Photon-Pressure Collision Avoidance: Efficiency Assessment on an Entire Catalogue of Space Debris

    NASA Technical Reports Server (NTRS)

    Stupl, Jan Michael; Faber, Nicolas; Foster, Cyrus; Yang Yang, Fan; Levit, Creon

    2013-01-01

    The potential to perturb debris orbits using photon pressure from ground-based lasers has been confirmed by independent research teams. Two useful applications of this scheme are protecting space assets from impacts with debris and stabilizing the orbital debris environment, both relying on collision avoidance rather than de-orbiting debris. This paper presents the results of a new assessment method to analyze the efficiency of the concept for collision avoidance. Earlier research concluded that one ground-based system consisting of a 10 kW class laser, directed by a 1.5 m telescope with adaptive optics, can prevent a significant fraction of debris-debris collisions in low Earth orbit. That research used in-track displacement to measure efficiency and restricted itself to an analysis of a limited number of objects. As orbit prediction error is dependent on debris object properties, a static displacement threshold should be complemented with another measure to assess the efficiency of the scheme. In this paper we present the results of an approach using probability of collision. Using a least-squares fitting method, we improve the quality of the original TLE catalogue in terms of state and co-state accuracy. We then calculate collision probabilities for all the objects in the catalogue. The conjunctions with the highest risk of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the collision probability in a 20 minute window around the original conjunction. We then use different criteria to evaluate the utility of the laser-based collision avoidance scheme and assess the number of baseline ground stations needed to mitigate a significant number of high probability conjunctions. Finally, we also give an account of how a laser ground station can be used for both orbit deflection and debris tracking.
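
    A common way to compute a short-encounter collision probability (not necessarily the exact formulation used in this study) is to integrate a two-dimensional Gaussian density of the relative-position error over a disc whose radius is the combined hard-body radius, in the encounter plane. A grid-based sketch with hypothetical miss distance and covariance values:

```python
import numpy as np

def collision_probability(miss_xy, cov, hard_body_radius_m, n=400):
    """Probability that the encounter-plane miss distance falls inside the
    combined hard-body disc, for a 2-D Gaussian relative-position error."""
    mx, my = miss_xy
    r = hard_body_radius_m
    xs = np.linspace(-r, r, n)
    ys = np.linspace(-r, r, n)
    X, Y = np.meshgrid(xs, ys)
    inside = X**2 + Y**2 <= r**2
    inv = np.linalg.inv(cov)
    det = np.linalg.det(cov)
    dx, dy = X - mx, Y - my
    pdf = np.exp(-0.5 * (inv[0, 0] * dx**2 + 2 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2))
    pdf /= 2.0 * np.pi * np.sqrt(det)
    cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
    return float(np.sum(pdf * inside) * cell)

# Hypothetical conjunction: 120 m predicted miss, 80 m and 200 m position sigmas,
# 15 m combined hard-body radius.
cov = np.diag([80.0**2, 200.0**2])
print(f"Pc ~= {collision_probability((120.0, 0.0), cov, hard_body_radius_m=15.0):.2e}")
```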

  1. A population-based tissue probability map-driven level set method for fully automated mammographic density estimations.

    PubMed

    Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo

    2014-07-01

    A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.

  2. Discrimination of coherent features in turbulent boundary layers by the entropy method

    NASA Technical Reports Server (NTRS)

    Corke, T. C.; Guezennec, Y. G.

    1984-01-01

    Entropy in information theory is defined as the expected or mean value of the measure of the amount of self-information contained in the ith point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the ith state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach, which resulted in the solution of a simultaneous set of nonlinear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer was presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 ν/u_τ, which is in qualitative agreement with the results from other, less objective discrimination methods.
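
    A small sketch of the entropy computation on which the discrimination is based: estimate p(x_i) from a histogram of the signal and evaluate E(X) = -Σ p(x_i) log p(x_i). The velocity record here is synthetic, not the boundary-layer data.

```python
import numpy as np

def shannon_entropy(signal, bins=32):
    """E(X) = -sum_i p(x_i) * log p(x_i), with p estimated from a histogram."""
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
u = rng.normal(size=4096)             # synthetic stand-in for a velocity trace
print(f"entropy ~= {shannon_entropy(u):.3f} nats")
```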

  3. Administrative data measured surgical site infection probability within 30 days of surgery in elderly patients.

    PubMed

    van Walraven, Carl; Jackson, Timothy D; Daneman, Nick

    2016-09-01

    Elderly patients are inordinately affected by surgical site infections (SSIs). This study derived and internally validated a model that used routinely collected health administrative data to measure the probability of SSI in elderly patients within 30 days of surgery. All people older than 65 years undergoing surgery at two hospitals with known SSI status were linked to population-based administrative data sets in Ontario, Canada. We used bootstrap methods to create a multivariate model that used health administrative data to predict the probability of SSI. Of 3,436 patients, 177 (5.1%) had an SSI. The Elderly SSI Risk Model included six covariates: number of distinct physician fee codes within 30 days of surgery; presence or absence of a postdischarge prescription for an antibiotic; presence or absence of three diagnostic codes; and a previously derived score that gauged SSI risk based on procedure codes. The model was highly explanatory (Nagelkerke's R², 0.458), strongly discriminative (C statistic, 0.918), and well calibrated (calibration slope, 1). Health administrative data can effectively determine 30-day risk of SSI in elderly patients undergoing a broad assortment of surgeries. External validation is necessary before this can be routinely used to monitor SSIs in the elderly. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution

    PubMed Central

    Dinov, Ivo D.; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2014-01-01

    Summary Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students’ understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference. PMID:25419016

  5. Technology-enhanced Interactive Teaching of Marginal, Joint and Conditional Probabilities: The Special Case of Bivariate Normal Distribution.

    PubMed

    Dinov, Ivo D; Kamino, Scott; Bhakhrani, Bilal; Christou, Nicolas

    2013-01-01

    Data analysis requires subtle probability reasoning to answer questions like What is the chance of event A occurring, given that event B was observed? This generic question arises in discussions of many intriguing scientific questions such as What is the probability that an adolescent weighs between 120 and 140 pounds given that they are of average height? and What is the probability of (monetary) inflation exceeding 4% and housing price index below 110? To address such problems, learning some applied, theoretical or cross-disciplinary probability concepts is necessary. Teaching such courses can be improved by utilizing modern information technology resources. Students' understanding of multivariate distributions, conditional probabilities, correlation and causation can be significantly strengthened by employing interactive web-based science educational resources. Independent of the type of a probability course (e.g. majors, minors or service probability course, rigorous measure-theoretic, applied or statistics course) student motivation, learning experiences and knowledge retention may be enhanced by blending modern technological tools within the classical conceptual pedagogical models. We have designed, implemented and disseminated a portable open-source web-application for teaching multivariate distributions, marginal, joint and conditional probabilities using the special case of bivariate Normal distribution. A real adolescent height and weight dataset is used to demonstrate the classroom utilization of the new web-application to address problems of parameter estimation, univariate and multivariate inference.
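
    The worked example below shows the kind of conditional-probability computation this web application teaches: for a bivariate normal (height, weight) model, the conditional distribution of weight given a height is again normal, with mean μ_W + ρ(σ_W/σ_H)(h − μ_H) and variance σ_W²(1 − ρ²). The parameter values are illustrative, not those of the adolescent data set used by the authors.

```python
from scipy.stats import norm

# Illustrative bivariate normal parameters for (height in inches, weight in pounds).
mu_h, sigma_h = 65.0, 3.5
mu_w, sigma_w = 130.0, 20.0
rho = 0.6

def weight_given_height(h):
    """Conditional distribution of weight given height for a bivariate normal."""
    mean = mu_w + rho * (sigma_w / sigma_h) * (h - mu_h)
    sd = sigma_w * (1.0 - rho**2) ** 0.5
    return mean, sd

# P(120 <= weight <= 140 | height = average height)
mean, sd = weight_given_height(mu_h)
p = norm.cdf(140, mean, sd) - norm.cdf(120, mean, sd)
print(f"P(120 <= W <= 140 | H = {mu_h}) ~= {p:.3f}")
```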

  6. Quantum probability rule: a generalization of the theorems of Gleason and Busch

    NASA Astrophysics Data System (ADS)

    Barnett, Stephen M.; Cresser, James D.; Jeffers, John; Pegg, David T.

    2014-04-01

    Busch's theorem deriving the standard quantum probability rule can be regarded as a more general form of Gleason's theorem. Here we show that a further generalization is possible by reducing the number of quantum postulates used by Busch. We do not assume that the positive measurement outcome operators are effects or that they form a probability operator measure. We derive a more general probability rule from which the standard rule can be obtained from the normal laws of probability when there is no measurement outcome information available, without the need for further quantum postulates. Our general probability rule has prediction-retrodiction symmetry and we show how it may be applied in quantum communications and in retrodictive quantum theory.

  7. Machine Learning-based Individual Assessment of Cortical Atrophy Pattern in Alzheimer's Disease Spectrum: Development of the Classifier and Longitudinal Evaluation.

    PubMed

    Lee, Jin San; Kim, Changsoo; Shin, Jeong-Hyeon; Cho, Hanna; Shin, Dae-Seock; Kim, Nakyoung; Kim, Hee Jin; Kim, Yeshin; Lockhart, Samuel N; Na, Duk L; Seo, Sang Won; Seong, Joon-Kyung

    2018-03-07

    To develop a new method for measuring Alzheimer's disease (AD)-specific similarity of cortical atrophy patterns at the individual level, we employed an individual-level machine learning algorithm. A total of 869 cognitively normal (CN) individuals and 473 patients with probable AD dementia who underwent high-resolution 3T brain MRI were included. We propose a machine learning-based method for measuring the similarity of an individual subject's cortical atrophy pattern with that of a representative AD patient cohort. In addition, we validated this similarity measure in two longitudinal cohorts consisting of 79 patients with amnestic mild cognitive impairment (aMCI) and 27 patients with probable AD dementia. The surface-based morphometry classifier for discriminating AD from CN showed sensitivity and specificity values of 87.1% and 93.3%, respectively. In the longitudinal validation study, aMCI converters had higher atrophy similarity at both baseline (p < 0.001) and first-year visits (p < 0.001) relative to non-converters. Similarly, AD patients with faster decline had higher atrophy similarity than slower decliners at baseline (p = 0.042), first-year (p = 0.028), and third-year visits (p = 0.027). The AD-specific atrophy similarity measure is a novel approach for the prediction of dementia risk and for the evaluation of AD trajectories on an individual subject level.

  8. Delineating Hydrofacies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Xuehang; Chen, Xingyuan; Ye, Ming

    2015-07-01

    This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. The framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data. The transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated with a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability in each hydrofacies from transient head data. Our results show that the proposed framework can characterize hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.

  9. Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle

    PubMed Central

    Frost, Anja; Renners, Eike; Hötter, Michael; Ostermann, Jörn

    2013-01-01

    An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then, the reconstructed objects no longer truly represent the original. Inside the volumes, the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the object receiving the X-ray examination consists of. For each voxel, the proposed method outputs a numerical value that represents the probability of existence of a predefined material at the position of the voxel during the X-ray examination. Such a probabilistic quality measure was lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability; in correctly reconstructed areas, a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction. PMID:23344378

  10. Development of a clinical prediction model to calculate patient life expectancy: the measure of actuarial life expectancy (MALE).

    PubMed

    Clarke, M G; Kennedy, K P; MacDonagh, R P

    2009-01-01

    To develop a clinical prediction model enabling the calculation of an individual patient's life expectancy (LE) and survival probability based on age, sex, and comorbidity for use in the joint decision-making process regarding medical treatment. A computer software program was developed with a team of 3 clinicians, 2 professional actuaries, and 2 professional computer programmers. This incorporated statistical spreadsheet and database access design methods. Data sources included life insurance industry actuarial rating factor tables (public and private domain), Government Actuary Department UK life tables, professional actuarial sources, and evidence-based medical literature. The main outcome measures were numerical and graphical display of comorbidity-adjusted LE; 5-, 10-, and 15-year survival probability; in addition to generic UK population LE. Nineteen medical conditions, which impacted significantly on LE in actuarial terms and were commonly encountered in clinical practice, were incorporated in the final model. Numerical and graphical representations of statistical predictions of LE and survival probability were successfully generated for patients with either no comorbidity or a combination of the 19 medical conditions included. Validation and testing, including actuarial peer review, confirmed consistency with the data sources utilized. The evidence-based actuarial data utilized in this computer program design represent a valuable resource for use in the clinical decision-making process, where an accurate objective assessment of patient LE can so often make the difference between patients being offered or denied medical and surgical treatment. Ongoing development to incorporate additional comorbidities and enable Web-based access will enhance its use further.
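
    The actuarial core of such a tool is straightforward: given age-specific annual mortality rates q_x (adjusted for comorbidity), the survival curve is the cumulative product of (1 − q_x) and the life expectancy is the sum of that curve. A schematic version with made-up mortality rates and an assumed comorbidity multiplier follows; a real calculation would use published life tables and actuarial rating factors, as the paper describes.

```python
import numpy as np

def life_expectancy(qx):
    """Curtate life expectancy and survival curve from annual mortality rates q_x."""
    survival = np.cumprod(1.0 - np.asarray(qx))
    return survival.sum(), survival

# Made-up annual mortality rates from age 70 onwards (illustration only),
# crudely inflated by an assumed comorbidity multiplier.
ages = np.arange(70, 110)
qx_baseline = 0.01 * np.exp(0.09 * (ages - 70))      # Gompertz-like toy hazard
qx_adjusted = np.clip(1.5 * qx_baseline, 0.0, 1.0)   # e.g. a 1.5x comorbidity rating

le, surv = life_expectancy(qx_adjusted)
print(f"life expectancy from age 70 ~= {le:.1f} years")
print(f"10-year survival probability ~= {surv[9]:.2f}")
```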

  11. Superior model for fault tolerance computation in designing nano-sized circuit systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, N. S. S., E-mail: narinderjit@petronas.com.my; Muthuvalu, M. S., E-mail: msmuthuvalu@gmail.com; Asirvadam, V. S., E-mail: vijanth-sagayan@petronas.com.my

    2014-10-24

    As CMOS technology scales to nanometre dimensions, reliability becomes a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of nano-electronic circuits. Computing reliability becomes troublesome and time-consuming as the computational complexity builds up with circuit size, so being able to measure reliability quickly and accurately is fast becoming necessary when designing modern logic integrated circuits. For this purpose, the paper first describes the development of an automated reliability evaluation tool based on generalizations of the Probabilistic Gate Model (PGM) and the Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up reliability analysis for very large numbers of nano-electronic circuits. Secondly, using the developed tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. The lower BDEC reliability measure is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results show that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
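
    A minimal sketch of the Probabilistic Gate Model idea for a single NAND gate is shown below: given the probabilities that each input is logic 1 and a gate error probability ε, the gate produces the ideal NAND output with probability 1 − ε and the flipped output with probability ε, and these output probabilities are propagated gate by gate. This illustrates only the modelling style, not the authors' Matlab tool, and it assumes statistically independent signals.

```python
def pgm_nand(p_a, p_b, eps):
    """Probability that a noisy NAND gate outputs logic 1.

    p_a, p_b : probabilities that inputs A and B are logic 1 (assumed independent)
    eps      : probability that the gate inverts its ideal output
    """
    p_ideal_one = 1.0 - p_a * p_b          # ideal NAND output is 1 unless A = B = 1
    return (1.0 - eps) * p_ideal_one + eps * (1.0 - p_ideal_one)

# Two-level example with independent primary inputs A, B, C: OUT = NAND(NAND(A, B), C).
p_a, p_b, p_c, eps = 0.5, 0.5, 0.5, 0.05
p_mid = pgm_nand(p_a, p_b, eps)       # first-level gate
p_out = pgm_nand(p_mid, p_c, eps)     # second-level gate
print(f"P(output = 1) ~= {p_out:.4f}")
```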

  12. Probability-Based Inference in a Domain of Proportional Reasoning Tasks

    DTIC Science & Technology

    1992-01-01


  13. Consistency of extreme flood estimation approaches

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf

    2017-04-01

    Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve because the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three different approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two different Swiss catchments. The results and some intermediate variables are used for assessing potential strengths and weaknesses of each method, as well as for evaluating the consistency of these methods.
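
    As an illustration of the purely statistical approach mentioned above, the sketch below fits a generalized extreme value (GEV) distribution to a synthetic series of annual maximum discharges and reads off a low-probability quantile; SCHADEX-type stochastic simulation and PMF estimation are not reproduced here.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# Synthetic annual maximum discharges (m^3/s); stands in for an observed series.
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60, random_state=rng)

# Fit the GEV and estimate the flood with annual exceedance probability 1/1000.
c, loc, scale = genextreme.fit(annual_maxima)
q1000 = genextreme.ppf(1.0 - 1.0 / 1000.0, c, loc=loc, scale=scale)
print(f"estimated 1000-year flood ~= {q1000:.0f} m^3/s")
```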

  14. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system, yet eruptive activity occurs only intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the surplus of magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally carries the same weight. Conversely, herein we present a novel approach based on quantifying the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  15. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-07-26

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system, yet eruptive activity occurs only intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the surplus of magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally carries the same weight. Conversely, herein we present a novel approach based on quantifying the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.
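
    A schematic of the surplus quantification described above: subtract a steady background supply from a time series of a proxy for magma input (here, soil CO2 flux) and accumulate the excess. The numbers are synthetic and the median-based background estimate is a simple assumption, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
days = 3650
co2_flux = 100.0 + rng.normal(0.0, 10.0, days)   # synthetic proxy for magma input
co2_flux[1200:1260] += 60.0                      # an injected "surplus" episode

baseline = np.median(co2_flux)                   # crude steady-supply estimate
surplus = np.cumsum(np.clip(co2_flux - baseline, 0.0, None))

print(f"accumulated surplus at day 1300: {surplus[1300]:.0f} (arbitrary units)")
```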

  16. Infrared-fiber-optic fire sensor developments - Role of measurement uncertainty in evaluation of background limited range. [in spacecraft safety

    NASA Technical Reports Server (NTRS)

    Tapphorn, Ralph M.; Kays, Randy; Porter, Alan

    1989-01-01

    Fire-detector systems based on distributed infrared fiber sensors have been investigated for potential applications in the aerospace industry. Responsivities to blackbody and flame radiation were measured with various design configurations of an infrared fiber-optic sensor. Signal processing techniques were also investigated, and the results show significant differences in fire-sensor performance depending on the design configuration. Measurement uncertainties were used to determine the background-limited ranges for the various fire-sensor concepts, and the probability of producing false alarms caused by fluctuations in the background signals was determined using extreme probability theory. The results of the research show that infrared fiber-optic fire sensors are feasible for application on manned spacecraft; however, additional development work will be required to eliminate false alarms caused by high-temperature objects such as incandescent lamps.

  17. Regional SAR Image Segmentation Based on Fuzzy Clustering with Gamma Mixture Model

    NASA Astrophysics Data System (ADS)

    Li, X. L.; Zhao, Q. H.; Li, Y.

    2017-09-01

    Most stochastic fuzzy clustering algorithms are pixel-based and cannot effectively overcome the inherent speckle noise in SAR images. To deal with this problem, a regional SAR image segmentation algorithm based on fuzzy clustering with a Gamma mixture model is proposed in this paper. First, generating points are initialized randomly on the image and the image domain is divided into many sub-regions using the Voronoi tessellation technique. Each sub-region is regarded as a homogeneous area in which the pixels share the same cluster label. Then, the pixel intensity is assumed to follow a Gamma mixture model with parameters corresponding to the cluster to which the pixel belongs. The negative logarithm of the probability represents the dissimilarity measure between the pixel and the cluster, and the regional dissimilarity measure of a sub-region is defined as the sum of the measures of the pixels in the region. Furthermore, the Markov Random Field (MRF) model is extended from the pixel level to the Voronoi sub-regions, and the regional objective function is established within the framework of fuzzy clustering. The optimal segmentation results are obtained by solving for the model parameters and generating points. Finally, the effectiveness of the proposed algorithm is demonstrated by qualitative and quantitative analysis of segmentation results for simulated and real SAR images.
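
    The regional dissimilarity used in this algorithm is, in essence, a negative log-likelihood: the Gamma mixture density of a cluster is evaluated at each pixel of a sub-region, the negative logarithm is taken, and the values are summed over the sub-region. A sketch of that measure with hypothetical mixture parameters and synthetic pixels:

```python
import numpy as np
from scipy.stats import gamma

def region_dissimilarity(pixels, weights, shapes, scales):
    """Sum over a sub-region of -log of a Gamma mixture density (one cluster model)."""
    pixels = np.asarray(pixels, dtype=float)
    density = np.zeros_like(pixels)
    for w, k, theta in zip(weights, shapes, scales):
        density += w * gamma.pdf(pixels, a=k, scale=theta)
    return float(-np.log(density + 1e-300).sum())

# Hypothetical two-component Gamma mixture for one cluster of SAR intensities.
weights, shapes, scales = [0.7, 0.3], [2.0, 5.0], [10.0, 30.0]

rng = np.random.default_rng(6)
subregion = rng.gamma(shape=2.0, scale=10.0, size=200)   # synthetic sub-region pixels
print(f"regional dissimilarity ~= "
      f"{region_dissimilarity(subregion, weights, shapes, scales):.1f}")
```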

  18. Unified framework for information integration based on information geometry

    PubMed Central

    Oizumi, Masafumi; Amari, Shun-ichi

    2016-01-01

    Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner. PMID:27930289
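
    A numerically minimal analogue of this construction is the KL divergence between a joint distribution and the product of its marginals (i.e., the mutual information), where the coupling between two elements is removed in the approximating model. The paper's measure disconnects causal (temporal) influences rather than same-time correlations, so the sketch below only illustrates the geometric idea of comparing a distribution with its "disconnected" approximation.

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as arrays of the same shape."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Joint distribution of two binary elements (rows: X, columns: Y).
joint = np.array([[0.40, 0.10],
                  [0.10, 0.40]])

# "Disconnected" model: product of the marginals.
disconnected = np.outer(joint.sum(axis=1), joint.sum(axis=0))

print(f"divergence (here, the mutual information) ~= "
      f"{kl_divergence(joint, disconnected):.3f} nats")
```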

  19. Acceptance Probability (Pa) Analysis for Process Validation Lifecycle Stages.

    PubMed

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

    This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with the worst case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines indicate processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.0 as "incapable" (1). A Cpk > 1.33 is associated with a centered process that will statistically produce less than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
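
    The correspondence quoted above between Cpk and defect rates can be checked directly: for a centered normal process, the expected fraction outside two-sided limits placed at 3·Cpk standard deviations is 2·Φ(−3·Cpk). The sketch below reproduces the roughly 63 ppm figure for Cpk = 4/3 and turns a simple specification interval into an acceptance probability; the stage-wise compendial criteria treated in the paper are more involved, and the assay limits and batch statistics shown are hypothetical.

```python
from scipy.stats import norm

def ppm_outside(cpk):
    """Expected defects per million for a centered normal process with given Cpk."""
    return 2.0 * norm.sf(3.0 * cpk) * 1e6

def acceptance_probability(mean, sd, lower, upper):
    """Probability a normally distributed quality attribute falls within limits."""
    return norm.cdf(upper, mean, sd) - norm.cdf(lower, mean, sd)

print(f"Cpk = 4/3 -> {ppm_outside(4.0 / 3.0):.0f} ppm defective")   # about 63 ppm
print(f"Cpk = 1.0 -> {ppm_outside(1.0):.0f} ppm defective")         # about 2700 ppm

# Hypothetical assay limits of 95-105% of label claim, batch mean 100.2%, sd 1.5%.
print(f"acceptance probability ~= {acceptance_probability(100.2, 1.5, 95, 105):.4f}")
```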

  20. A self-analysis of the NASA-TLX workload measure.

    PubMed

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the best known and most widely used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.

  1. Probability in the Many-Worlds Interpretation of Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Vaidman, Lev

    It is argued that, although in the Many-Worlds Interpretation of quantum mechanics there is no "probability" for an outcome of a quantum experiment in the usual sense, we can understand why we have an illusion of probability. The explanation involves: (a) a "sleeping pill" gedanken experiment that establishes a correspondence between an illegitimate question, "What is the probability of an outcome of a quantum measurement?", and a legitimate question, "What is the probability that `I' am in the world corresponding to that outcome?"; (b) a gedanken experiment which splits the world into several worlds that are identical according to some symmetry condition; and (c) relativistic causality, which together with (b) explains the Born rule of standard quantum mechanics. The Quantum Sleeping Beauty controversy and the "caring measure" replacing the probability measure are discussed.

  2. Conditional Probabilities and Collapse in Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Laura, Roberto; Vanni, Leonardo

    2008-09-01

    We show that, by including both the system and the apparatus in the quantum description of the measurement process and using the concept of conditional probabilities, it is possible to deduce the statistical operator of the system after a measurement with a given result, which gives the probability distribution for all possible consecutive measurements on the system. This statistical operator, representing the state of the system after the first measurement, is in general not the same as the one that would be obtained using the postulate of collapse.

  3. A Student’s t Mixture Probability Hypothesis Density Filter for Multi-Target Tracking with Outliers

    PubMed Central

    Liu, Zhuowei; Chen, Shuxin; Wu, Hao; He, Renke; Hao, Lin

    2018-01-01

    In multi-target tracking, outlier-corrupted process and measurement noise can severely reduce the performance of the probability hypothesis density (PHD) filter. To solve this problem, this paper proposes a novel PHD filter, called the Student’s t mixture PHD (STM-PHD) filter. The proposed filter models the heavy-tailed process noise and measurement noise as Student’s t distributions and approximates the multi-target intensity as a mixture of Student’s t components to be propagated in time. A closed-form PHD recursion is then obtained based on the Student’s t approximation. Our approach makes full use of the heavy-tailed character of the Student’s t distribution to handle situations with heavy-tailed process and measurement noise. The simulation results verify that the proposed filter can overcome the negative effect of outliers and maintain good tracking accuracy in the simultaneous presence of process and measurement outliers. PMID:29617348

  4. Detection of laryngeal function using speech and electroglottographic data.

    PubMed

    Childers, D G; Bae, K S

    1992-01-01

    The purpose of this research was to develop quantitative measures for the assessment of laryngeal function using speech and electroglottographic (EGG) data. We developed two procedures for the detection of laryngeal pathology: 1) a spectral distortion measure using pitch-synchronous and asynchronous methods with linear predictive coding (LPC) vectors and vector quantization (VQ) and 2) analysis of the EGG signal using time-interval and amplitude-difference measures. The VQ procedure was conjectured to offer the possibility of circumventing the need to estimate the glottal volume velocity waveform by inverse filtering techniques. The EGG procedure was designed to evaluate data that were "nearly" a direct measure of vocal fold vibratory motion and thus was conjectured to offer the potential for providing an excellent assessment of laryngeal function. A threshold-based procedure gave 75.9% and 69.0% probabilities of detecting pathology using procedures 1) and 2), respectively, for 29 patients with pathological voices and 52 normal subjects. The false alarm probability was 9.6% for the normal subjects.

  5. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10,000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized with several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is more complex computationally, but it provides the reliability needed to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, the probability-based method produces no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
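
    A minimal sketch of the probability-based method as summarized above: simulate experimental error around a predicted response on a grid of process parameters, estimate the probability of reaching the standard at each grid point, and keep the points above an acceptable probability threshold. The response model, error size, standard, and threshold below are all hypothetical, not the Codonopsis Radix model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response model linking two process parameters (e.g., extraction
# time and solvent volume) to a quality index; not the model from the paper.
def predicted_quality(time_h, volume_ml):
    return 0.8 * time_h + 0.02 * volume_ml

STANDARD = 3.0   # hypothetical acceptance limit (quality index >= 3.0)
ERROR_SD = 0.25  # hypothetical experimental-error standard deviation
N_SIM = 10_000   # number of simulations, as in the abstract
STEP = 0.02      # calculation step length, as in the abstract

times = np.arange(1.0, 3.0 + STEP, STEP)
volumes = np.arange(50.0, 151.0, 10.0)

# Probability of reaching the standard at each grid point, by Monte Carlo.
prob_ok = np.zeros((times.size, volumes.size))
for i, t in enumerate(times):
    for j, v in enumerate(volumes):
        simulated = predicted_quality(t, v) + rng.normal(0.0, ERROR_SD, N_SIM)
        prob_ok[i, j] = np.mean(simulated >= STANDARD)

design_space = prob_ok >= 0.90  # hypothetical acceptable probability threshold
print(f"{design_space.mean():.1%} of the grid meets the 90% probability threshold")
```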

  6. Gas Hydrate Formation Probability Distributions: The Effect of Shear and Comparisons with Nucleation Theory.

    PubMed

    May, Eric F; Lim, Vincent W; Metaxas, Peter J; Du, Jianwei; Stanwix, Paul L; Rowland, Darren; Johns, Michael L; Haandrikman, Gert; Crosby, Daniel; Aman, Zachary M

    2018-03-13

    Gas hydrate formation is a stochastic phenomenon of considerable significance for any risk-based approach to flow assurance in the oil and gas industry. In principle, well-established results from nucleation theory offer the prospect of predictive models for hydrate formation probability in industrial production systems. In practice, however, heuristics are relied on when estimating formation risk for a given flowline subcooling or when quantifying kinetic hydrate inhibitor (KHI) performance. Here, we present statistically significant measurements of formation probability distributions for natural gas hydrate systems under shear, which are quantitatively compared with theoretical predictions. Distributions with over 100 points were generated using low-mass, Peltier-cooled pressure cells, cycled in temperature between 40 and -5 °C at up to 2 K·min⁻¹ and analyzed with robust algorithms that automatically identify hydrate formation and initial growth rates from dynamic pressure data. The application of shear had a significant influence on the measured distributions: at 700 rpm, mass-transfer limitations were minimal, as demonstrated by the kinetic growth rates observed. The formation probability distributions measured at this shear rate had mean subcoolings consistent with theoretical predictions and steel-hydrate-water contact angles of 14-26°. However, the experimental distributions were substantially wider than predicted, suggesting that phenomena acting on macroscopic length scales are responsible for much of the observed stochastic formation. Performance tests of a KHI provided new insights into how such chemicals can reduce the risk of hydrate blockage in flowlines. Our data demonstrate that the KHI not only reduces the probability of formation (by both shifting and sharpening the distribution) but also reduces hydrate growth rates by a factor of 2.

  7. Nonadhesive, silica nanoparticles-based brush-coated contact lens cases--compromising between ease of cleaning and microbial transmission to contact lenses.

    PubMed

    Qu, Wenwen; Hooymans, Johanna M M; Qiu, Jun; de-Bont, Nik; Gelling, Onko-Jan; van der Mei, Henny C; Busscher, Henk J

    2013-05-01

    Surface properties of lens cases are determinant for their cleanability and for microbial transmission from lens cases to contact lenses (CLs). PEG-polymer-brush-coatings are known to decrease microbial adhesion more than other surface coatings. Here, we applied a robust, silica nanoparticles-based brush-coating to polypropylene cases to evaluate their ease of cleaning and probability of bacterial transmission to CLs. Adhesion forces of nine bacterial strains (Pseudomonas, Staphylococci, and Serratia) to rigid CLs, polypropylene, and silica nanoparticles-based brush-coated polypropylene were measured using atomic force microscopy and subjected to Weibull analyses to yield bacterial transmission probabilities. Biofilms of each strain were grown in coated and uncoated cases and rinsed with a NaCl or antimicrobial lens care solution. Residual, viable organisms were quantified. Bacterial adhesion forces of all strains were significantly, up to tenfold, smaller on brush-coated than on uncoated polypropylene. This yielded higher transmission probabilities to a CL, but mild rinsing removed 10-100-fold more bacteria from brush-coated than from uncoated polypropylene cases. Moreover, due to weak adhesion forces, bacteria on brush-coated cases were two- to three-fold more susceptible to an antimicrobial lens care solution than on polypropylene cases. Thus, the design of lens case surfaces is a compromise between ease of cleaning and transmission probability to CLs. Copyright © 2013 Wiley Periodicals, Inc.

  8. Using entropy measures to characterize human locomotion.

    PubMed

    Leverick, Graham; Szturm, Tony; Wu, Christine Q

    2014-12-01

    Entropy measures have been widely used to quantify the complexity of theoretical and experimental dynamical systems. In this paper, the value of using entropy measures to characterize human locomotion is demonstrated based on their construct validity, predictive validity in a simple model of human walking and convergent validity in an experimental study. Results show that four of the five considered entropy measures increase meaningfully with the increased probability of falling in a simple passive bipedal walker model. The same four entropy measures also experienced statistically significant increases in response to increasing age and gait impairment caused by cognitive interference in an experimental study. Of the considered entropy measures, the proposed quantized dynamical entropy (QDE) and quantization-based approximation of sample entropy (QASE) offered the best combination of sensitivity to changes in gait dynamics and computational efficiency. Based on these results, entropy appears to be a viable candidate for assessing the stability of human locomotion.
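
    For readers unfamiliar with entropy measures of time series, the sketch below computes standard sample entropy (not the QDE or QASE variants proposed in the paper) for a regular and a noisier synthetic signal; the noisier, more complex signal receives the higher value.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Standard sample entropy SampEn(m, r) of a 1-D series; r is r_factor
    times the series standard deviation (Chebyshev distance between templates)."""
    x = np.asarray(x, float)
    n = len(x)
    r = r_factor * x.std()

    def count_matches(length):
        # N - m templates of the given length, compared pairwise (i < j).
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)      # matching template pairs of length m
    a = count_matches(m + 1)  # matching template pairs of length m + 1
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

rng = np.random.default_rng(1)
t = np.linspace(0, 20 * np.pi, 2000)
regular = np.sin(t)                                     # highly regular signal
noisy = np.sin(t) + 0.5 * rng.standard_normal(t.size)   # more complex signal
print(sample_entropy(regular), sample_entropy(noisy))   # noisy signal scores higher
```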

  9. Comparison of sticking probabilities of metal atoms in magnetron sputtering deposition of CuZnSnS films

    NASA Astrophysics Data System (ADS)

    Sasaki, K.; Kikuchi, S.

    2014-10-01

    In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2l0(2 - α)/(vα), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
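
    Chantry's relation quoted above can be inverted for the sticking probability: from τ0 = 2l0(2 − α)/(vα) one obtains α = 4l0/(vτ0 + 2l0). The numbers in the sketch below are hypothetical, not the measured values from this experiment.

```python
def sticking_probability(tau0, l0, v):
    """Invert Chantry's relation tau0 = 2*l0*(2 - alpha) / (v * alpha)
    for the sticking probability alpha."""
    return 4.0 * l0 / (v * tau0 + 2.0 * l0)

# Hypothetical example values (not measurements from the paper):
l0 = 0.02      # chamber volume-to-surface-area ratio, m
v = 470.0      # mean thermal speed of the atom, m/s
tau0 = 2.0e-4  # extrapolated lifetime at zero pressure, s

print(f"alpha ≈ {sticking_probability(tau0, l0, v):.3f}")
```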

  10. [What determines the participation in stepwise occupational reintegration on behalf of the German pension insurance? Results of the "SOR cohort study"].

    PubMed

    Bürger, W; Streibelt, M

    2015-02-01

    Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis in terms of a successful return to work profit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40406 persons (up to 59 years old, excluding rehabilitation after hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in a SOR measure. Initially, a single criterion was defined which describes the need for SOR measures. This criterion is based on 3 different items: patients with at least 12 weeks of sickness absence, (a) a SIBAR score >7 and/or (b) a perceived need for SOR. The main aspect of our analyses was to describe the association between the SOR need-criterion and participation in SOR measures, as well as the predictors of SOR participation when the need-criterion was fulfilled. The analyses were based on a multiple logistic regression model. For 16408 patients, full data were available. The formal prerequisites for SOR were given for 33% of the sample, out of which 32% received a SOR after rehabilitation and 43% fulfilled the SOR need-criterion. A negative relationship between these 2 categories was observed (phi=-0.08, p<0.01). For patients who fulfilled the need-criterion, the probability of participating in SOR decreased by 22% (RR=0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR=0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need-criterion: the probability even decreased when the criterion was fulfilled. Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used in this case can serve as a basis for such a guideline. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Messier 35 (NGC 2168) DANCe. I. Membership, proper motions, and multiwavelength photometry

    NASA Astrophysics Data System (ADS)

    Bouy, H.; Bertin, E.; Barrado, D.; Sarro, L. M.; Olivares, J.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Ribas, Á.; Beletsky, Y.

    2015-03-01

    Context. Messier 35 (NGC 2168) is an important young nearby cluster. Its age, richness and relative proximity make it an ideal target for stellar evolution studies. The Kepler K2 mission recently observed it and provided a high accuracy photometric time series of a large number of sources in this area of the sky. Identifying the cluster's members is therefore of high importance to optimize the interpretation and analysis of the Kepler K2 data. Aims: We aim to identify the cluster's members by deriving membership probabilities for the sources within 1° of the cluster's center, which is farther away than equivalent previous studies. Methods: We measure accurate proper motions and multiwavelength (optical and near-infrared) photometry using ground-based archival images of the cluster. We use these measurements to compute membership probabilities. The list of candidate members from the literature is used as a training set to identify the cluster's locus in a multidimensional space made of proper motions, luminosities, and colors. Results: The final catalog includes 338 892 sources with multiwavelength photometry. Approximately half (194 452) were detected at more than two epochs and we measured their proper motion and used it to derive membership probability. A total of 4349 candidate members with membership probabilities greater than 50% are found in this sample in the luminosity range between 10 mag and 22 mag. The slow proper motion of the cluster and the overlap of its sequence with the field and background sequences in almost all color-magnitude and color-color diagrams complicate the analysis and the contamination level is expected to be significant. Our study, nevertheless, provides a coherent and quantitative membership analysis of Messier 35 based on a large fraction of the best ground-based data sets obtained over the past 18 years. As such, it represents a valuable input for follow-up studies using, in particular, the Kepler K2 photometric time series. Table 3 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/575/A120

  12. Automated analysis of flow cytometric data for measuring neutrophil CD64 expression using a multi-instrument compatible probability state model.

    PubMed

    Wong, Linda; Hill, Beth L; Hunsberger, Benjamin C; Bagwell, C Bruce; Curtis, Adam D; Davis, Bruce H

    2015-01-01

    Leuko64™ (Trillium Diagnostics) is a flow cytometric assay that measures neutrophil CD64 expression and serves as an in vitro indicator of infection/sepsis or the presence of a systemic acute inflammatory response. Leuko64 assay currently utilizes QuantiCALC, a semiautomated software that employs cluster algorithms to define cell populations. The software reduces subjective gating decisions, resulting in interanalyst variability of <5%. We evaluated a completely automated approach to measuring neutrophil CD64 expression using GemStone™ (Verity Software House) and probability state modeling (PSM). Four hundred and fifty-seven human blood samples were processed using the Leuko64 assay. Samples were analyzed on four different flow cytometer models: BD FACSCanto II, BD FACScan, BC Gallios/Navios, and BC FC500. A probability state model was designed to identify calibration beads and three leukocyte subpopulations based on differences in intensity levels of several parameters. PSM automatically calculates CD64 index values for each cell population using equations programmed into the model. GemStone software uses PSM that requires no operator intervention, thus totally automating data analysis and internal quality control flagging. Expert analysis with the predicate method (QuantiCALC) was performed. Interanalyst precision was evaluated for both methods of data analysis. PSM with GemStone correlates well with the expert manual analysis, r(2) = 0.99675 for the neutrophil CD64 index values with no intermethod bias detected. The average interanalyst imprecision for the QuantiCALC method was 1.06% (range 0.00-7.94%), which was reduced to 0.00% with the GemStone PSM. The operator-to-operator agreement in GemStone was a perfect correlation, r(2) = 1.000. Automated quantification of CD64 index values produced results that strongly correlate with expert analysis using a standard gate-based data analysis method. PSM successfully evaluated flow cytometric data generated by multiple instruments across multiple lots of the Leuko64 kit in all 457 cases. The probability-based method provides greater objectivity, higher data analysis speed, and allows for greater precision for in vitro diagnostic flow cytometric assays. © 2015 International Clinical Cytometry Society.

  13. Usefulness of antigen-specific IgE probability curves derived from the 3gAllergy assay in diagnosing egg, cow's milk, and wheat allergies.

    PubMed

    Sato, Sakura; Ogura, Kiyotake; Takahashi, Kyohei; Sato, Yasunori; Yanagida, Noriyuki; Ebisawa, Motohiro

    2017-04-01

    Specific IgE (sIgE) antibody detection using the Siemens IMMULITE® 3gAllergy™ (3gAllergy) assay has not been sufficiently examined for the diagnosis of food allergy. The aim of this study was to evaluate the utility of measuring sIgE levels using the 3gAllergy assay to diagnose allergic reactions to egg, milk, and wheat. This retrospective study was conducted on patients with diagnosed or suspected allergies to egg, milk, and wheat. Patients were divided into two groups according to their clinical reactivity to these allergens, based on oral food challenge outcomes and/or convincing histories of immediate reaction to causative food(s). The sIgE levels were measured using 3gAllergy and ImmunoCAP. Predicted probability curves were estimated using logistic regression analysis. We analyzed 1561 patients, ages 0-19 y (egg = 436, milk = 499, wheat = 626). The sIgE levels determined using 3gAllergy correlated with those of ImmunoCAP, classifying 355 patients as symptomatic: egg = 149, milk = 123, wheat = 83. 3gAllergy sIgE levels were significantly higher in symptomatic than in asymptomatic patients (P < 0.0001). The predicted probability of a positive food allergy increased significantly with increasing sIgE levels. The cut-offs for allergic reaction with 95% predictive probability as determined by the 3gAllergy probability curves were different from those of ImmunoCAP. Measurements of sIgE against egg, milk, and wheat as determined by 3gAllergy may be used as a tool to facilitate the diagnosis of food allergy in subjects with suspected food allergies. However, these probability curves should not be applied interchangeably between different assays. Copyright © 2016 Japanese Society of Allergology. Production and hosting by Elsevier B.V. All rights reserved.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Jackson, Andrew

    Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relation to MRI-derived portal venous perfusion values. The global function was fitted to the logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting it to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
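
    A minimal sketch of the local/global structure described above, using a two-parameter logistic pinned to two of the quoted values (probability 0.5 at 68.6 and 0.9 at 98 mL/(100 g · min)); the authors' actual sigmoid parameterization, and the regression against indocyanine green retention, are not reproduced here.

```python
import numpy as np

# Two-parameter logistic for the local function probability of a liver subunit
# as a function of MRI-derived portal venous perfusion. Midpoint and slope are
# chosen so that p(68.6) = 0.5 and p(98) = 0.9, per the values quoted above;
# the authors' actual sigmoid may be parameterized differently.
X50 = 68.6                      # perfusion giving probability 0.5
K = np.log(9.0) / (98.0 - X50)  # slope so that p(98) = 0.9

def local_function_probability(perfusion):
    return 1.0 / (1.0 + np.exp(-K * (perfusion - X50)))

# Global function modeled as the sum of local probabilities over subunits
# (parallel architecture); the perfusion values below are hypothetical.
perfusion_map = np.array([25.0, 40.0, 70.0, 95.0, 120.0])
global_function = local_function_probability(perfusion_map).sum()
print(local_function_probability(perfusion_map).round(2), global_function)
```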

  15. Laser based in-situ and standoff detection of chemical warfare agents and explosives

    NASA Astrophysics Data System (ADS)

    Patel, C. Kumar N.

    2009-09-01

    Laser-based detection of gaseous, liquid, and solid residues and trace amounts has been developed ever since lasers were invented. However, the lack of availability of reasonably high-power tunable lasers in the spectral regions where the relevant targets can be interrogated, as well as of appropriate techniques for high-sensitivity, high-selectivity detection, has hampered the practical exploitation of techniques for the detection of targets important for homeland security and defense applications. Furthermore, emphasis has been on selectivity without particular attention being paid to the impact of interfering species on the quality of detection. Having high sensitivity is a necessary but not a sufficient condition. High sensitivity assures a high probability of detection of the target species. However, it is only recently that the sensor community has come to recognize that any measure of probability of detection must be associated with a probability of false alarm if it is to have any value as a measure of performance. This is especially true when one attempts to compare performance characteristics of different sensors based on different physical principles. In this paper, I will provide a methodology for characterizing the performance of sensors utilizing optical absorption measurement techniques. However, the underlying principles are equally applicable to all other sensors. While most of the current progress in high-sensitivity, high-selectivity detection of CWAs, TICs, and explosives involves identifying and quantifying the target species in situ, there is an urgent need for standoff detection of explosives from safe distances. I will describe our results on CO2 and quantum cascade laser (QCL) based photoacoustic sensors for the detection of CWAs, TICs, and explosives, as well as the very new results on standoff detection of explosives at distances up to 150 meters. The latter results are critically important for assuring the safety of military personnel in battlefield environments, especially from improvised explosive devices (IEDs), and of civilian personnel from terrorist attacks in metropolitan areas.

  16. Pathways between Acculturation and Health: Does the Measure Matter?

    ERIC Educational Resources Information Center

    Miranda, Patricia Y.; Gonzalez, Hector M.; Tarraf, Wassim

    2011-01-01

    The purpose of this study was to assess the association between acculturation and functional health using multiple proxies of acculturation to examine explanatory pathways to clarify disparate health findings. A population-based cross-sectional, multistage probability sample from the Hispanic Established Populations for the Epidemiologic Studies…

  17. MRI Brain Tumor Segmentation and Necrosis Detection Using Adaptive Sobolev Snakes.

    PubMed

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-21

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  18. MRI brain tumor segmentation and necrosis detection using adaptive Sobolev snakes

    NASA Astrophysics Data System (ADS)

    Nakhmani, Arie; Kikinis, Ron; Tannenbaum, Allen

    2014-03-01

    Brain tumor segmentation in brain MRI volumes is used in neurosurgical planning and illness staging. It is important to explore the tumor shape and necrosis regions at different points of time to evaluate the disease progression. We propose an algorithm for semi-automatic tumor segmentation and necrosis detection. Our algorithm consists of three parts: conversion of MRI volume to a probability space based on the on-line learned model, tumor probability density estimation, and adaptive segmentation in the probability space. We use manually selected acceptance and rejection classes on a single MRI slice to learn the background and foreground statistical models. Then, we propagate this model to all MRI slices to compute the most probable regions of the tumor. Anisotropic 3D diffusion is used to estimate the probability density. Finally, the estimated density is segmented by the Sobolev active contour (snake) algorithm to select smoothed regions of the maximum tumor probability. The segmentation approach is robust to noise and not very sensitive to the manual initialization in the volumes tested. Also, it is appropriate for low contrast imagery. The irregular necrosis regions are detected by using the outliers of the probability distribution inside the segmented region. The necrosis regions of small width are removed due to a high probability of noisy measurements. The MRI volume segmentation results obtained by our algorithm are very similar to expert manual segmentation.

  19. Measurement of the Earth's Radius Based on Historical Evidence of Its Curvature

    ERIC Educational Resources Information Center

    Roura, Pere; Josep, Calbo

    2005-01-01

    Probably the most direct observation of the Earth's curvature is how objects appear from over the horizon when we approach them and disappear as we get further away from them. Similarly, the portion of a high object (a building or a mountain) that is visible depends on the height of the site where the observation is made. Based upon these very…

  20. Participation in fitness-related activities of an incentive-based health promotion program and hospital costs: a retrospective longitudinal study.

    PubMed

    Patel, Deepak; Lambert, Estelle V; da Silva, Roseanne; Greyling, Mike; Kolbe-Alexander, Tracy; Noach, Adam; Conradie, Jaco; Nossel, Craig; Borresen, Jill; Gaziano, Thomas

    2011-01-01

    A retrospective, longitudinal study examined changes in participation in fitness-related activities and hospital claims over 5 years amongst members of an incentivized health promotion program offered by a private health insurer. A 3-year retrospective observational analysis measuring gym visits and participation in documented fitness-related activities, probability of hospital admission, and associated costs of admission. A South African private health plan, Discovery Health and the Vitality health promotion program. 304,054 adult members of the Discovery medical plan, 192,467 of whom registered for the health promotion program and 111,587 members who were not on the program. Members were incentivised for fitness-related activities on the basis of the frequency of gym visits. Changes in electronically documented gym visits and registered participation in fitness-related activities over 3 years and measures of association between changes in participation (years 1-3) and subsequent probability and costs of hospital admission (years 4-5). Hospital admissions and associated costs are based on claims extracted from the health insurer database. The probability of a claim modeled by using linear logistic regression and costs of claims examined by using general linear models. Propensity scores were estimated and included age, gender, registration for chronic disease benefits, plan type, and the presence of a claim during the transition period, and these were used as covariates in the final model. There was a significant decrease in the prevalence of inactive members (76% to 68%) over 5 years. Members who remained highly active (years 1-3) had a lower probability (p < .05) of hospital admission in years 4 to 5 (20.7%) compared with those who remained inactive (22.2%). The odds of admission were 13% lower for two additional gym visits per week (odds ratio, .87; 95% confidence interval [CI], .801-.949). We observed an increase in fitness-related activities over time amongst members of this incentive-based health promotion program, which was associated with a lower probability of hospital admission and lower hospital costs in the subsequent 2 years. Copyright © 2011 by American Journal of Health Promotion, Inc.

  1. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).

  2. Quantitative analysis of the probability of introducing equine encephalosis virus (EEV) into The Netherlands.

    PubMed

    Fischer, Egil Andreas Joor; Martínez López, Evelyn Pamela; De Vos, Clazien J; Faverjon, Céline

    2016-09-01

    Equine encephalosis is a midge-borne viral disease of equines caused by equine encephalosis virus (EEV, Orbivirus, Reoviridae), and closely related to African horse sickness virus (AHSV). EEV and AHSV share common vectors and show similar transmission patterns. Until now EEV has caused outbreaks in Africa and Israel. This study aimed to provide insight into the probability of an EEV outbreak in The Netherlands caused by infected vectors or hosts, the contribution of potential source areas (risk regions) to this probability, and the effectiveness of preventive measures (sanitary regimes). A stochastic risk model constructed for risk assessment of AHSV introduction was adapted to EEV. Source areas were categorized in risk regions (high, low, and very low risk) based on EEV history and the presence of competent vectors. Two possible EEV introduction pathways were considered: importation of infected equines and importation of infected vectors along with their vertebrate hosts. The probability of EEV introduction (PEEV) was calculated by combining the probability of EEV release by either pathway and the probability of EEV establishment. The median current annual probability of EEV introduction by an infected equine was estimated at 0.012 (90% uncertainty interval 0.002-0.020), and by an infected vector at 4.0 × 10⁻⁵ (90% uncertainty interval 5.3 × 10⁻⁶ to 2.0 × 10⁻⁴). Equines from high risk regions contributed most (74%) to the probability of EEV introduction by equines, whereas low and very low risk regions contributed 18% and 8%, respectively. International movements of horses participating in equestrian events contributed most to the probability of EEV introduction by equines from high risk regions (86%), but also contributed substantially for low and very low risk regions, with 47% and 56%, respectively. The probability of introducing EEV into The Netherlands is much higher than the probability of introducing AHSV, with equines from high risk countries contributing most. The introduction by an infected equine is the most likely pathway. Control measures before exportation of equines were shown to have a strong mitigating effect on the probability of EEV introduction. The risk of EEV outbreaks should be taken into account when altering these import regulations. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. The load of 10 cm of such tephra fallout on a flat surface would therefore be in the range of 100-125 kg/m2; addition of water from rainfall or snowmelt would provide additional load.
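
    The product-of-three-probabilities logic can be made concrete with a short calculation: given the quoted eruption and wind probabilities, the conditional exceedance probability implied by the 1-in-10,000 annual target is about 7%, and the roughly 10 cm thickness is the value whose empirical exceedance probability (from the 14 measured deposits) falls near that level.

```python
ANNUAL_TARGET = 1.0e-4  # 1-in-10,000 annual probability of interest
P_ERUPTION = 0.008      # annual probability of an eruption producing >= 0.1 km^3 of tephra
P_WIND = 0.180          # probability the wind blows from Mount St. Helens toward Hanford

# Conditional probability that fallout at ~200 km equals or exceeds the target
# thickness, implied by requiring the three-way product to equal the annual target.
p_exceed_needed = ANNUAL_TARGET / (P_ERUPTION * P_WIND)
print(f"Required exceedance probability given eruption and wind: {p_exceed_needed:.3f}")
```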

  4. Does the cognitive reflection test measure cognitive reflection? A mathematical modeling approach.

    PubMed

    Campitelli, Guillermo; Gerrans, Paul

    2014-04-01

    We used a mathematical modeling approach, based on a sample of 2,019 participants, to better understand what the cognitive reflection test (CRT; Frederick In Journal of Economic Perspectives, 19, 25-42, 2005) measures. This test, which is typically completed in less than 10 min, contains three problems and aims to measure the ability or disposition to resist reporting the response that first comes to mind. However, since the test contains three mathematically based problems, it is possible that the test only measures mathematical abilities, and not cognitive reflection. We found that the models that included an inhibition parameter (i.e., the probability of inhibiting an intuitive response), as well as a mathematical parameter (i.e., the probability of using an adequate mathematical procedure), fitted the data better than a model that only included a mathematical parameter. We also found that the inhibition parameter in males is best explained by both rational thinking ability and the disposition toward actively open-minded thinking, whereas in females this parameter was better explained by rational thinking only. With these findings, this study contributes to the understanding of the processes involved in solving the CRT, and will be particularly useful for researchers who are considering using this test in their research.

  5. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
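
    The classical analogue of collapse can be illustrated with ordinary Markov-chain machinery (this is a generic sketch, not the paper's bidirectional two-stream formalism): a distribution is propagated by a stochastic matrix and then conditioned on a measurement outcome via Bayes' rule.

```python
import numpy as np

# Column-stochastic transition matrix of a 3-state Markov chain (hypothetical).
T = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.3],
              [0.1, 0.2, 0.6]])

p = np.array([1.0, 0.0, 0.0])  # initial probability distribution
p = T @ p                      # one Markov step: p' = T p

# A noisy measurement with outcome likelihoods for each state; observing the
# outcome "collapses" the distribution by Bayes conditioning.
likelihood = np.array([0.9, 0.3, 0.05])  # P(observed outcome | state), hypothetical
p_post = likelihood * p
p_post /= p_post.sum()

print("pre-measurement :", p.round(3))
print("post-measurement:", p_post.round(3))
```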

  6. Development of a Novel, Objective Measure of Health Care–Related Financial Burden for U.S. Families with Children

    PubMed Central

    Wisk, Lauren E; Gangnon, Ronald; Vanness, David J; Galbraith, Alison A; Mullahy, John; Witt, Whitney P

    2014-01-01

    Objective To develop and validate a theoretically based and empirically driven objective measure of financial burden for U.S. families with children. Data Sources The measure was developed using 149,021 families with children from the National Health Interview Survey, and it was validated using 18,488 families with children from the Medical Expenditure Panel Survey. Study Design We estimated the marginal probability of unmet health care need due to cost using a bivariate tensor product spline for family income and out-of-pocket health care costs (OOPC; e.g., deductibles, copayments), while adjusting for confounders. Recursive partitioning was performed on these probabilities, as a function of income and OOPC, to establish thresholds demarcating levels of predicted risk. Principal Findings We successfully generated a novel measure of financial burden with four categories that were associated with unmet need (vs. low burden: midlow OR: 1.93, 95 percent CI: 1.78–2.09; midhigh OR: 2.78, 95 percent CI: 2.49–3.10; high OR: 4.38, 95 percent CI: 3.99–4.80). The novel burden measure demonstrated significantly better model fit and less underestimation of financial burden compared to an existing measure (OOPC/income ≥10 percent). Conclusion The newly developed measure of financial burden establishes thresholds based on different combinations of family income and OOPC that can be applied in future studies of health care utilization and expenditures and in policy development and evaluation. PMID:25328073

  7. [Biometric bases: basic concepts of probability calculation].

    PubMed

    Dinya, E

    1998-04-26

    The author gives an outline of the basic concepts of probability theory. The bases of event algebra, the definition of probability, the classical probability model, and the random variable are presented.

  8. Complementarity and Correlations

    NASA Astrophysics Data System (ADS)

    Maccone, Lorenzo; Bruß, Dagmar; Macchiavello, Chiara

    2015-04-01

    We provide an interpretation of entanglement based on classical correlations between measurement outcomes of complementary properties: States that have correlations beyond a certain threshold are entangled. The reverse is not true, however. We also show that, surprisingly, all separable nonclassical states exhibit smaller correlations for complementary observables than some strictly classical states. We use mutual information as a measure of classical correlations, but we conjecture that the first result holds also for other measures (e.g., the Pearson correlation coefficient or the sum of conditional probabilities).

  9. Developing safety performance functions incorporating reliability-based risk measures.

    PubMed

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
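
    The limit state described above compares available (supply) and stopping (demand) sight distance. The paper evaluates P(nc) with FORM; the sketch below instead uses a crude Monte Carlo approximation with hypothetical parameter distributions, purely to illustrate what the probability of non-compliance measures.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical distributions of the demand-side parameters (illustrative only).
speed_kmh = rng.normal(90.0, 5.0, N)               # operating speed, km/h
reaction_t = rng.lognormal(np.log(1.5), 0.25, N)   # perception-reaction time, s
decel = rng.normal(3.4, 0.5, N)                    # deceleration rate, m/s^2

v = speed_kmh / 3.6                                    # speed in m/s
ssd_demand = v * reaction_t + v**2 / (2.0 * decel)     # demanded stopping sight distance, m

ssd_supply = 160.0  # available sight distance on the curve, m (hypothetical)

p_nc = np.mean(ssd_demand > ssd_supply)  # probability of non-compliance
print(f"P(nc) ≈ {p_nc:.4f}")
```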

  10. Generalization of information-based concepts in forecast verification

    NASA Astrophysics Data System (ADS)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, and the generalization to continuous forecasts is then shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly attractive are their decompositions into measures quantifying the reliability, resolution, and uncertainty of the forecasts. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations, and its decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components reliability, resolution, and uncertainty for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
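
    For a binary event, the two headline scores can be computed directly from forecast probabilities and outcomes; the sketch below uses hypothetical ensemble-derived probabilities. IGN is the mean negative log2 probability assigned to the outcome that occurred, and the BS is its second-order counterpart.

```python
import numpy as np

def brier_score(p, o):
    """Brier score for binary events: mean squared difference between the
    forecast probability p and the outcome o (0 or 1)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return float(np.mean((p - o) ** 2))

def ignorance_score(p, o, eps=1e-12):
    """Ignorance score: mean negative log2 probability assigned to the outcome
    that occurred (clipping avoids infinities for p = 0 or 1)."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    p_obs = np.where(o == 1, p, 1.0 - p)
    return float(np.mean(-np.log2(np.clip(p_obs, eps, 1.0))))

# Forecast probabilities (e.g., the fraction of ensemble members predicting the
# event) and the corresponding observations; both are hypothetical.
forecasts = np.array([0.9, 0.7, 0.2, 0.1, 0.5, 0.8])
observed = np.array([1,   1,   0,   0,   1,   0])
print("BS  =", round(brier_score(forecasts, observed), 3))
print("IGN =", round(ignorance_score(forecasts, observed), 3))
```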

  11. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. Here in this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  12. Measurement of Attenuation with Airborne and Ground-Based Radar in Convective Storms Over Land and Its Microphysical Implications

    NASA Technical Reports Server (NTRS)

    Tian, Lin; Heymsfield, G. M.; Srivastava, R. C.; O'C.Starr, D. (Technical Monitor)

    2001-01-01

    Observations by the airborne X-band Doppler radar (EDOP) and the NCAR S-band polarimetric (S-Pol) radar from two field experiments are used to evaluate the surface reference technique (SRT) for measuring the path integrated attenuation (PIA) and to study attenuation in deep convective storms. The EDOP, flying at an altitude of 20 km, uses a nadir beam and a forward pointing beam. It is found that over land, the surface scattering cross-section is highly variable at nadir incidence but relatively stable at forward incidence. It is concluded that measurement by the forward beam provides a viable technique for measuring PIA using the SRT. Vertical profiles of peak attenuation coefficient are derived in two deep convective storms by the dual-wavelength method. Using the measured Doppler velocity, the reflectivities at the two wavelengths, the differential reflectivity and the estimated attenuation coefficients, it is shown that: supercooled drops and (dry) ice particles probably co-existed above the melting level in regions of updraft, that water-coated partially melted ice particles probably contributed to high attenuation below the melting level.

  13. Development of a score and probability estimate for detecting angle closure based on anterior segment optical coherence tomography.

    PubMed

    Nongpiur, Monisha E; Haaland, Benjamin A; Perera, Shamira A; Friedman, David S; He, Mingguang; Sakata, Lisandro M; Baskaran, Mani; Aung, Tin

    2014-01-01

    To develop a score along with an estimated probability of disease for detecting angle closure based on anterior segment optical coherence tomography (AS OCT) imaging. Cross-sectional study. A total of 2047 subjects 50 years of age and older were recruited from a community polyclinic in Singapore. All subjects underwent standardized ocular examination including gonioscopy and imaging by AS OCT (Carl Zeiss Meditec). Customized software (Zhongshan Angle Assessment Program) was used to measure AS OCT parameters. Complete data were available for 1368 subjects. Data from the right eyes were used for analysis. A stepwise logistic regression model with the Akaike information criterion was used to generate a score that was then converted to an estimated probability of the presence of gonioscopic angle closure, defined as the inability to visualize the posterior trabecular meshwork for at least 180 degrees on nonindentation gonioscopy. Of the 1368 subjects, 295 (21.6%) had gonioscopic angle closure. The angle closure score was calculated from the shifted linear combination of the AS OCT parameters. The score can be converted to an estimated probability of having angle closure using the relationship: estimated probability = e^score/(1 + e^score), where e is the natural exponential. The score performed well in a second independent sample of 178 angle-closure subjects and 301 normal controls, with an area under the receiver operating characteristic curve of 0.94. A score derived from a single AS OCT image, coupled with an estimated probability, provides an objective platform for the detection of angle closure. Copyright © 2014 Elsevier Inc. All rights reserved.
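
    The score-to-probability conversion quoted above is the standard logistic transform; the sketch below applies it to a few hypothetical score values (the regression coefficients that produce the score from the AS OCT parameters are given in the paper, not here).

```python
from math import exp

def probability_from_score(score):
    """Convert an angle-closure score to an estimated probability via the
    logistic relationship quoted in the abstract: p = e^score / (1 + e^score)."""
    return exp(score) / (1.0 + exp(score))

# Hypothetical score values for illustration only.
for s in (-2.0, 0.0, 1.5):
    print(f"score = {s:+.1f}  ->  estimated probability = {probability_from_score(s):.3f}")
```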

  14. Memory disorders in probable Alzheimer's disease: the role of hippocampal atrophy as shown with MRI.

    PubMed Central

    Deweer, B; Lehéricy, S; Pillon, B; Baulac, M; Chiras, J; Marsault, C; Agid, Y; Dubois, B

    1995-01-01

    Magnetic resonance based volumetric measures of hippocampal formation, amygdala (A), caudate nucleus (CN), normalised for total intracranial volume (TIV), were analysed in relation to measures of cognitive deterioration and specific features of memory functions in 18 patients with probable Alzheimer's disease. Neuropsychological examination included the mini mental state examination (MMSE), the Mattis dementia rating scale (DRS), tests of executive functions, assessment of language abilities and praxis, the Wechsler memory scale (WMS), the California verbal learning test (CVLT) and the Grober and Buschke test. The volume of the hippocampal formation (HF/TIV) was correlated with specific memory variables: memory quotient and paired associates of the WMS; intrusions and discriminability at recognition for the Grober and Buschke test. By contrast, except for intrusions, no correlations were found between memory variables and the volume of amygdala (A/TIV). No correlations were found between the volume of caudate nuclei (CN/TIV) and any neuropsychological score. The volume of the hippocampal formation was therefore selectively related to quantitative and qualitative aspects of memory performance in patients with probable Alzheimer's disease. PMID:7745409

  15. Flood probability quantification for road infrastructure: Data-driven spatial-statistical approach and case study applications.

    PubMed

    Kalantari, Zahra; Cavalli, Marco; Cantone, Carolina; Crema, Stefano; Destouni, Georgia

    2017-03-01

    Climate-driven increase in the frequency of extreme hydrological events is expected to impose greater strain on the built environment and major transport infrastructure, such as roads and railways. This study develops a data-driven spatial-statistical approach to quantifying and mapping the probability of flooding at critical road-stream intersection locations, where water flow and sediment transport may accumulate and cause serious road damage. The approach is based on novel integration of key watershed and road characteristics, including also measures of sediment connectivity. The approach is concretely applied to and quantified for two specific study case examples in southwest Sweden, with documented road flooding effects of recorded extreme rainfall. The novel contributions of this study in combining a sediment connectivity account with that of soil type, land use, spatial precipitation-runoff variability and road drainage in catchments, and in extending the connectivity measure use for different types of catchments, improve the accuracy of model results for road flood probability. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. 76 FR 770 - Proposed Information Collection; Comment Request; Monthly Wholesale Trade Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... reduces the time and cost of preparing mailout packages that contain unique variable data, while improving... developing productivity measurements. Estimates produced from the MWTS are based on a probability sample and..., excluding manufacturers' sales branches and offices. Estimated Number of Respondents: 4,500. Estimated Time...

  17. LIMITATIONS ON THE USES OF MULTIMEDIA EXPOSURE MEASUREMENTS FOR MULTIPATHWAY EXPOSURE ASSESSMENT - PART I: HANDLING OBSERVATIONS BELOW DETECTION LIMITS

    EPA Science Inventory

    Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...

  18. Economic Observations on the Decision to Attend Law School

    ERIC Educational Resources Information Center

    Ahart, Alan M.

    1975-01-01

    On the premise that the expected benefits of a legal education can be measured in dollar terms, the author develops a formula for determining whether or not to matriculate based on expected earnings, educational costs, and probability of employment (graduation, class rank, passing bar exam, and supply/demand conditions). (JT)
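
    As a rough illustration of the kind of expected-value comparison described, with all figures invented for the example rather than taken from the paper:

        # Hedged sketch: matriculate if the expected net benefit of the law degree is positive.
        p_success = 0.7            # probability of graduating, passing the bar, and finding legal work
        pv_lawyer = 900_000        # present value of lifetime earnings with a legal career (hypothetical)
        pv_alternative = 600_000   # present value of earnings without the law degree (hypothetical)
        cost_of_education = 120_000

        expected_earnings = p_success * pv_lawyer + (1 - p_success) * pv_alternative
        net_benefit = expected_earnings - cost_of_education - pv_alternative
        print("matriculate" if net_benefit > 0 else "do not matriculate", net_benefit)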

  19. Characteristic Functional of a Probability Measure Absolutely Continuous with Respect to a Gaussian Radon Measure

    DTIC Science & Technology

    1984-08-01

    Hiroshi Sato; technical report, August 1984. Abstract (partially recovered from the report documentation page): Let μ and μ1 be probability measures on a locally convex Hausdorff real topological linear space E. C.R. Baker [1] posed the ...

  20. Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances

    NASA Astrophysics Data System (ADS)

    Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng

    2016-04-01

    Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.

  1. Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances.

    PubMed

    Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng

    2016-04-22

    Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.

  2. Testing the monogamy relations via rank-2 mixtures

    NASA Astrophysics Data System (ADS)

    Jung, Eylee; Park, DaeKil

    2016-10-01

    We introduce two tangle-based four-party entanglement measures t1 and t2, and two negativity-based measures n1 and n2, which are derived from the monogamy relations. These measures are computed for three four-qubit maximally entangled and W states explicitly. We also compute these measures for the rank-2 mixture ρ4 = p |GHZ4⟩⟨GHZ4| + (1 - p) |W4⟩⟨W4| by finding the corresponding optimal decompositions. It turns out that t1(ρ4) is trivial and the corresponding optimal decomposition is equal to the spectral decomposition. Probably, this triviality is a sign of the fact that the corresponding monogamy inequality is not sufficiently tight. We fail to compute t2(ρ4) due to the difficulty in the calculation of the residual entanglement. The negativity-based measures n1(ρ4) and n2(ρ4) are explicitly computed and the corresponding optimal decompositions are also derived explicitly.

  3. Interleaved Training and Training-Based Transmission Design for Hybrid Massive Antenna Downlink

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Jing, Yindi; Huang, Yongming; Yang, Luxi

    2018-06-01

    In this paper, we study the beam-based training design jointly with the transmission design for hybrid massive antenna single-user (SU) and multiple-user (MU) systems where outage probability is adopted as the performance measure. For SU systems, we propose an interleaved training design to concatenate the feedback and training procedures, thus making the training length adaptive to the channel realization. Exact analytical expressions are derived for the average training length and the outage probability of the proposed interleaved training. For MU systems, we propose a joint design for the beam-based interleaved training, beam assignment, and MU data transmissions. Two solutions for the beam assignment are provided with different complexity-performance tradeoffs. Analytical results and simulations show that for both SU and MU systems, the proposed joint training and transmission designs achieve the same outage performance as the traditional full-training scheme but with significant saving in the training overhead.

  4. Extraction of decision rules via imprecise probabilities

    NASA Astrophysics Data System (ADS)

    Abellán, Joaquín; López, Griselda; Garach, Laura; Castellano, Javier G.

    2017-05-01

    Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other based on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.

  5. Probability of cavitation for single ultrasound pulses applied to tissues and tissue-mimicking materials

    PubMed Central

    Maxwell, Adam D.; Cain, Charles A.; Hall, Timothy L.; Fowlkes, J. Brian; Xu, Zhen

    2012-01-01

    In this article, the negative pressure values at which inertial cavitation consistently occurs in response to a single, 2-cycle, focused ultrasound pulse were measured in several media relevant to cavitation-based ultrasound therapy. The pulse was focused into a chamber containing one of the media, which included liquids, tissue-mimicking materials, and ex-vivo canine tissue. Focal waveforms were measured by two separate techniques using a fiber-optic hydrophone. Inertial cavitation was identified by high-speed photography in optically transparent media and an acoustic passive cavitation detector. The probability of cavitation (Pcav) for a single pulse as a function of peak negative pressure (p−) followed a sigmoid curve, with the probability approaching 1 when the pressure amplitude was sufficient. The statistical threshold (defined as Pcav = 0.5) was between p− = 26.0–30.0 MPa in all samples with a high water content, but varied between p− = 13.7 to > 36 MPa for other media. A model for radial cavitation bubble dynamics was employed to evaluate the behavior of cavitation nuclei at these pressure levels. A single bubble nucleus with an inertial cavitation threshold of p− = 28.2 MPa was estimated to have a 2.5 nm radius in distilled water. These data may be valuable for cavitation-based ultrasound therapy to predict the likelihood of cavitation at different pressure levels and dimensions of cavitation-induced lesions in tissue. PMID:23380152
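
    The reported sigmoid dependence of cavitation probability on peak negative pressure can be fitted with a logistic curve. A sketch on synthetic data (the pressure levels, observed fractions, and starting values below are assumptions, not the measured results):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(p_neg, p50, k):
            """Logistic model: probability of cavitation vs. peak negative pressure (MPa)."""
            return 1.0 / (1.0 + np.exp(-(p_neg - p50) / k))

        # Synthetic observations: fraction of pulses that produced inertial cavitation at each level
        pressures = np.array([20.0, 24.0, 26.0, 28.0, 30.0, 34.0])
        observed  = np.array([0.00, 0.10, 0.35, 0.70, 0.95, 1.00])

        (p50, k), _ = curve_fit(sigmoid, pressures, observed, p0=[28.0, 1.0])
        print(f"fitted threshold (Pcav = 0.5): {p50:.1f} MPa, slope parameter: {k:.2f} MPa")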

  6. Interpretations of Probability in Quantum Mechanics: A Case of "Experimental Metaphysics"

    NASA Astrophysics Data System (ADS)

    Hellman, Geoffrey

    After reviewing paradigmatic cases of "experimental metaphysics" basing inferences against local realism and determinism on experimental tests of Bell's theorem (and successors), we concentrate on clarifying the meaning and status of "objective probability" in quantum mechanics. The terms "objective" and "subjective" are found ambiguous and inadequate, masking crucial differences turning on the question of what the numerical values of probability functions measure vs. the question of the nature of the "events" on which such functions are defined. This leads naturally to a 2×2 matrix of types of interpretations, which are then illustrated with salient examples. (Of independent interest are the splitting of "Copenhagen interpretation" into "objective" and "subjective" varieties in one of the dimensions and the splitting of Bohmian hidden variables from (other) modal interpretations along that same dimension.) It is then explained why Everett interpretations are difficult to categorize in these terms. Finally, we argue that Bohmian mechanics does not seriously threaten the experimental-metaphysical case for ultimate randomness and purely physical probabilities.

  7. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence the regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is considerably critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging, and determined various risk categories of contamination potentials based on estimated vulnerability indexes. Categories and ratings of six parameters in the probability-based DRASTIC model were probabilistically characterized according to the parameter classification methods of selecting a maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave us an excellent insight into propagating the uncertainty of parameters due to limited observation data. To examine the prediction capacity of pollutants for the developed probability-based DRASTIC model, medium, high, and very high risk categories of contamination potentials were compared with observed nitrate-N exceeding 0.5 mg/L indicating the anthropogenic groundwater pollution. The analyzed results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing the parameter uncertainty via the probability estimation processes.

  8. Boolean approach to dichotomic quantum measurement theories

    NASA Astrophysics Data System (ADS)

    Nagata, K.; Nakamura, T.; Batle, J.; Abdalla, S.; Farouk, A.

    2017-02-01

    Recently, a new measurement theory based on truth values was proposed by Nagata and Nakamura [Int. J. Theor. Phys. 55, 3616 (2016)], that is, a theory where the results of measurements are either 0 or 1. The standard measurement theory accepts a hidden variable model for a single Pauli observable. Hence, we can introduce a classical probability space for the measurement theory in this particular case. Additionally, we discuss in the present contribution the fact that projective measurement theories (the results of which are either +1 or -1) imply the Bell, Kochen, and Specker (BKS) paradox for a single Pauli observable. To justify our assertion, we present the BKS theorem in almost all the two-dimensional states by using a projective measurement theory. As an example, we present the BKS theorem in two dimensions with white noise. Our discussion provides new insight into the quantum measurement problem by using this measurement theory based on the truth values.

  9. Dangerous "spin": the probability myth of evidence-based prescribing - a Merleau-Pontyian approach.

    PubMed

    Morstyn, Ron

    2011-08-01

    The aim of this study was to examine logical positivist statistical probability statements used to support and justify "evidence-based" prescribing rules in psychiatry when viewed from the major philosophical theories of probability, and to propose "phenomenological probability" based on Maurice Merleau-Ponty's philosophy of "phenomenological positivism" as a better clinical and ethical basis for psychiatric prescribing. The logical positivist statistical probability statements which are currently used to support "evidence-based" prescribing rules in psychiatry have little clinical or ethical justification when subjected to critical analysis from any of the major theories of probability and represent dangerous "spin" because they necessarily exclude the individual, intersubjective and ambiguous meaning of mental illness. A concept of "phenomenological probability" founded on Merleau-Ponty's philosophy of "phenomenological positivism" overcomes the clinically destructive "objectivist" and "subjectivist" consequences of logical positivist statistical probability and allows psychopharmacological treatments to be appropriately integrated into psychiatric treatment.

  10. THE SEMIGROUP OF METRIC MEASURE SPACES AND ITS INFINITELY DIVISIBLE PROBABILITY MEASURES

    PubMed Central

    EVANS, STEVEN N.; MOLCHANOV, ILYA

    2015-01-01

    A metric measure space is a complete, separable metric space equipped with a probability measure that has full support. Two such spaces are equivalent if they are isometric as metric spaces via an isometry that maps the probability measure on the first space to the probability measure on the second. The resulting set of equivalence classes can be metrized with the Gromov–Prohorov metric of Greven, Pfaffelhuber and Winter. We consider the natural binary operation ⊞ on this space that takes two metric measure spaces and forms their Cartesian product equipped with the sum of the two metrics and the product of the two probability measures. We show that the metric measure spaces equipped with this operation form a cancellative, commutative, Polish semigroup with a translation invariant metric. There is an explicit family of continuous semicharacters that is extremely useful for, inter alia, establishing that there are no infinitely divisible elements and that each element has a unique factorization into prime elements. We investigate the interaction between the semigroup structure and the natural action of the positive real numbers on this space that arises from scaling the metric. For example, we show that for any given positive real numbers a, b, c the trivial space is the only space X that satisfies aX ⊞ bX = cX. We establish that there is no analogue of the law of large numbers: if X1, X2, … is an identically distributed independent sequence of random spaces, then no subsequence of (1/n) ⊞_{k=1}^n Xk converges in distribution unless each Xk is almost surely equal to the trivial space. We characterize the infinitely divisible probability measures and the Lévy processes on this semigroup, characterize the stable probability measures and establish a counterpart of the LePage representation for the latter class. PMID:28065980

  11. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure and the physics-based model considered here is the state-of-the-art ADCIRC model although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter and the effect on model predictions is analyzed.

  12. MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping

    2015-02-01

    In the research of complicated electrical engineering, the emergence of phasor measurement units (PMU) is a landmark event. The establishment and application of wide area measurement system (WAMS) in power system has made widespread and profound influence on the safe and stable operation of complicated power system. In this paper, taking full advantage of wide area synchronous phasor measurement information provided by PMUs, we have carried out precise fault localization based on the principles of maximum posteriori probability (MAP). Large numbers of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even if there are interferences from white Gaussian stochastic noise, the results from MAP classification are also identical to the actual real situation.

  13. Characterization of a measurement-based noiseless linear amplifier and its applications

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Haw, Jing Yan; Symul, Thomas; Lam, Ping Koy; Assad, Syed M.

    2017-07-01

    A noiseless linear amplifier (NLA) adds no noise to the signals it processes, which works only in a probabilistic way. It can be realized approximately with either a physical implementation that truncates the working space of the NLA on a photon-number basis or a measurement-based implementation that realizes the truncation virtually by a bounded postselection filter. To examine the relationship between these two approximate NLAs, we characterize in detail the measurement-based NLA and compare it with its physical counterpart in terms of their abilities to preserve the state Gaussianity and their probability of success. The link between these amplifiers is further clarified by integrating them into a measure-and-prepare setup. We stress the equivalence between the physical and the measurement-based approaches holds only when the effective parameters, the amplification gain, the cutoff, and the amplitude of the input state, are taken into account. Finally, we construct a 1-to-infinity cloner using the two amplifiers and show that a fidelity surpassing the no-cloning limit is achievable with the measurement-based NLA.

  14. A Bayesian approach to infer nitrogen loading rates from crop and land-use types surrounding private wells in the Central Valley, California

    NASA Astrophysics Data System (ADS)

    Ransom, Katherine M.; Bell, Andrew M.; Barber, Quinn E.; Kourakos, George; Harter, Thomas

    2018-05-01

    This study is focused on nitrogen loading from a wide variety of crop and land-use types in the Central Valley, California, USA, an intensively farmed region with high agricultural crop diversity. Nitrogen loading rates for several crop types have been measured based on field-scale experiments, and recent research has calculated nitrogen loading rates for crops throughout the Central Valley based on a mass balance approach. However, research is lacking to infer nitrogen loading rates for the broad diversity of crop and land-use types directly from groundwater nitrate measurements. Relating groundwater nitrate measurements to specific crops must account for the uncertainty about and multiplicity in contributing crops (and other land uses) to individual well measurements, and for the variability of nitrogen loading within farms and from farm to farm for the same crop type. In this study, we developed a Bayesian regression model that allowed us to estimate land-use-specific groundwater nitrogen loading rate probability distributions for 15 crop and land-use groups based on a database of recent nitrate measurements from 2149 private wells in the Central Valley. The water and natural, rice, and alfalfa and pasture groups had the lowest median estimated nitrogen loading rates, each with a median estimate below 5 kg N ha-1 yr-1. Confined animal feeding operations (dairies) and citrus and subtropical crops had the greatest median estimated nitrogen loading rates at approximately 269 and 65 kg N ha-1 yr-1, respectively. In general, our probability-based estimates compare favorably with previous direct measurements and with mass-balance-based estimates of nitrogen loading. Nitrogen mass-balance-based estimates are larger than our groundwater nitrate derived estimates for manured and nonmanured forage, nuts, cotton, tree fruit, and rice crops. These discrepancies are thought to be due to groundwater age mixing, dilution from infiltrating river water, or denitrification between the time when nitrogen leaves the root zone (point of reference for mass-balance-derived loading) and the time and location of groundwater measurement.
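
    One way to read the approach is as a Bayesian regression in which each well's nitrate level reflects a mixture of the loading rates of the land uses surrounding it. A deliberately simplified sketch with synthetic data and a random-walk Metropolis sampler (the study's actual likelihood, priors, groupings, and unit conversions are more involved and are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)
        n_wells, n_landuse = 200, 3

        # Synthetic land-use fractions around each well and "true" loading rates (kg N per ha per yr)
        fractions = rng.dirichlet(np.ones(n_landuse), size=n_wells)
        true_rates = np.array([5.0, 65.0, 270.0])
        nitrate = fractions @ true_rates + rng.normal(0, 10.0, n_wells)  # noisy well observations

        def log_post(log_rates):
            rates = np.exp(log_rates)
            resid = nitrate - fractions @ rates
            loglik = -0.5 * np.sum((resid / 10.0) ** 2)
            logprior = -0.5 * np.sum(((log_rates - np.log(50.0)) / 2.0) ** 2)  # vague lognormal prior
            return loglik + logprior

        # Random-walk Metropolis on the log loading rates
        cur = np.log(np.full(n_landuse, 50.0))
        cur_lp = log_post(cur)
        samples = []
        for _ in range(20000):
            prop = cur + rng.normal(0, 0.05, n_landuse)
            prop_lp = log_post(prop)
            if np.log(rng.uniform()) < prop_lp - cur_lp:
                cur, cur_lp = prop, prop_lp
            samples.append(np.exp(cur))

        posterior = np.array(samples[5000:])
        print("posterior median loading rates:", np.median(posterior, axis=0).round(1))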

  15. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

    Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)

  16. Mars Exploration Rovers Landing Dispersion Analysis

    NASA Technical Reports Server (NTRS)

    Knocke, Philip C.; Wawrzyniak, Geoffrey G.; Kennedy, Brian M.; Desai, Prasun N.; Parker, Timothy J.; Golombek, Matthew P.; Duxbury, Thomas C.; Kass, David M.

    2004-01-01

    Landing dispersion estimates for the Mars Exploration Rover missions were key elements in the site targeting process and in the evaluation of landing risk. This paper addresses the process and results of the landing dispersion analyses performed for both Spirit and Opportunity. The several contributors to landing dispersions (navigation and atmospheric uncertainties, spacecraft modeling, winds, and margins) are discussed, as are the analysis tools used. JPL's MarsLS program, a MATLAB-based landing dispersion visualization and statistical analysis tool, was used to calculate the probability of landing within hazardous areas. By convolving this with the probability of landing within flight system limits (in-spec landing) for each hazard area, a single overall measure of landing risk was calculated for each landing ellipse. In-spec probability contours were also generated, allowing a more synoptic view of site risks, illustrating the sensitivity to changes in landing location, and quantifying the possible consequences of anomalies such as incomplete maneuvers. Data and products required to support these analyses are described, including the landing footprints calculated by NASA Langley's POST program and JPL's AEPL program, cartographically registered base maps and hazard maps, and flight system estimates of in-spec landing probabilities for each hazard terrain type. Various factors encountered during operations, including evolving navigation estimates and changing atmospheric models, are discussed and final landing points are compared with approach estimates.

  17. Probabilistic mapping of descriptive health status responses onto health state utilities using Bayesian networks: an empirical analysis converting SF-12 into EQ-5D utility index in a national US sample.

    PubMed

    Le, Quang A; Doctor, Jason N

    2011-05-01

    As quality-adjusted life years have become the standard metric in health economic evaluations, mapping health-profile or disease-specific measures onto preference-based measures to obtain quality-adjusted life years has become a solution when health utilities are not directly available. However, current mapping methods are limited due to their predictive validity, reliability, and/or other methodological issues. We employ probability theory together with a graphical model, called a Bayesian network, to convert health-profile measures into preference-based measures and to compare the results to those estimated with current mapping methods. A sample of 19,678 adults who completed both the 12-item Short Form Health Survey (SF-12v2) and EuroQoL 5D (EQ-5D) questionnaires from the 2003 Medical Expenditure Panel Survey was split into training and validation sets. Bayesian networks were constructed to explore the probabilistic relationships between each EQ-5D domain and 12 items of the SF-12v2. The EQ-5D utility scores were estimated on the basis of the predicted probability of each response level of the 5 EQ-5D domains obtained from the Bayesian inference process using the following methods: Monte Carlo simulation, expected utility, and most-likely probability. Results were then compared with current mapping methods including multinomial logistic regression, ordinary least squares, and censored least absolute deviations. The Bayesian networks consistently outperformed other mapping models in the overall sample (mean absolute error=0.077, mean square error=0.013, and R overall=0.802), in different age groups, number of chronic conditions, and ranges of the EQ-5D index. Bayesian networks provide a new robust and natural approach to map health status responses into health utility measures for health economic evaluations.
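
    Of the three estimation methods mentioned, the expected-utility and most-likely-probability variants are straightforward once per-domain response probabilities are available. A sketch with invented probabilities and illustrative additive utility decrements (not the published US value set, which is more detailed):

        # Predicted probabilities for the 3 response levels of each EQ-5D domain (hypothetical)
        predicted = {
            "mobility":       [0.70, 0.25, 0.05],
            "self_care":      [0.85, 0.12, 0.03],
            "usual_activity": [0.60, 0.30, 0.10],
            "pain":           [0.50, 0.40, 0.10],
            "anxiety":        [0.65, 0.30, 0.05],
        }
        # Illustrative utility decrements for levels 1..3 (level 1 = no problems)
        decrement = {
            "mobility":       [0.0, 0.07, 0.31],
            "self_care":      [0.0, 0.05, 0.22],
            "usual_activity": [0.0, 0.04, 0.09],
            "pain":           [0.0, 0.06, 0.39],
            "anxiety":        [0.0, 0.07, 0.24],
        }

        # Expected-utility method: weight each level's decrement by its predicted probability
        eu = 1.0 - sum(sum(p * d for p, d in zip(predicted[k], decrement[k])) for k in predicted)

        # Most-likely-probability method: take the modal level in each domain
        ml = 1.0 - sum(decrement[k][max(range(3), key=lambda i: predicted[k][i])] for k in predicted)

        print(f"expected-utility estimate: {eu:.3f}, most-likely-level estimate: {ml:.3f}")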

  18. Interaction Forces and Aggregation Rates of Colloidal Latex Particles in the Presence of Monovalent Counterions.

    PubMed

    Montes Ruiz-Cabello, F Javier; Trefalt, Gregor; Oncsik, Tamas; Szilagyi, Istvan; Maroni, Plinio; Borkovec, Michal

    2015-06-25

    Force profiles and aggregation rates involving positively and negatively charged polystyrene latex particles are investigated in monovalent electrolyte solutions, whereby the counterions are varied within the Hofmeister series. The force measurements are carried out with the colloidal probe technique, which is based on the atomic force microscope (AFM), while the aggregation rates are measured with time-resolved multiangle light scattering. The interaction force profiles cannot be described by classical DLVO theory, but an additional attractive short-ranged force must be included. An exponential force profile with a decay length of about 0.5 nm is consistent with the measured forces. Furthermore, the Hamaker constants extracted from the measured force profiles are substantially smaller than the theoretical values calculated from dielectric spectra. The small surface roughness of the latex particles (below 1 nm) is probably responsible for this deviation. Based on the measured force profiles, the aggregation rates can be predicted without adjustable parameters. The measured absolute aggregation rates in the fast regime are somewhat lower than the calculated ones. The critical coagulation concentration (CCC) agrees well with the experiment, including the respective shifts of the CCC within the Hofmeister series. These shifts are particularly pronounced for the positively charged particles. However, the consideration of the additional attractive short-ranged force is essential to quantify these shifts correctly. In the slow regime, the calculated rates are substantially smaller than the experimental ones. This disagreement is probably related to surface charge heterogeneities.

  19. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF.

    PubMed

    Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul

    2006-09-01

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and more important, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose of the tumor surrounding tissue can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method highly depend on if a reproducible PDF exists and is measurable. By correlating the dosimetric error and PDF error together, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.
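
    Converting a measured motion trajectory into the positional PDF used for planning amounts to a normalized histogram of tumor positions. A minimal sketch with synthetic one-dimensional motion (the clinical workflow operates on imaging data, not on a sine model, and the paper's reproducibility metric may differ from the simple overlap used here):

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(0, 300.0, 0.1)                 # 300 s of motion sampled at 10 Hz
        position = 8.0 * np.abs(np.sin(np.pi * t / 4.0)) + rng.normal(0, 0.5, t.size)  # mm, synthetic

        # PDF of tumor position: normalized histogram over 1 mm bins
        bins = np.arange(-2.0, 14.0, 1.0)
        pdf, edges = np.histogram(position, bins=bins, density=True)

        def overlap(pdf_a, pdf_b, bin_width=1.0):
            """One simple way to compare two PDFs: the shared area of the normalized histograms."""
            return np.sum(np.minimum(pdf_a, pdf_b)) * bin_width

        print("PDF bin probabilities:", (pdf * 1.0).round(3))
        print("self-overlap (should be 1.0):", overlap(pdf, pdf))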

  20. A computer simulated phantom study of tomotherapy dose optimization based on probability density functions (PDF) and potential errors caused by low reproducibility of PDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke; Cai Jing; Brookeman, James

    2006-09-15

    Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and more important, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose of the tumor surrounding tissue can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method highly depend on if a reproducible PDF exists and is measurable. By correlating the dosimetric error and PDF error together, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.

  1. Quantifying the benefit of wellbore leakage potential estimates for prioritizing long-term MVA well sampling at a CO2 storage site.

    PubMed

    Azzolina, Nicholas A; Small, Mitchell J; Nakles, David V; Glazewski, Kyle A; Peck, Wesley D; Gorecki, Charles D; Bromhal, Grant S; Dilmore, Robert M

    2015-01-20

    This work uses probabilistic methods to simulate a hypothetical geologic CO2 storage site in a depleted oil and gas field, where the large number of legacy wells would make it cost-prohibitive to sample all wells for all measurements as part of the postinjection site care. Deep well leakage potential scores were assigned to the wells using a random subsample of 100 wells from a detailed study of 826 legacy wells that penetrate the basal Cambrian formation on the U.S. side of the U.S./Canadian border. Analytical solutions and Monte Carlo simulations were used to quantify the statistical power of selecting a leaking well. Power curves were developed as a function of (1) the number of leaking wells within the Area of Review; (2) the sampling design (random or judgmental, choosing first the wells with the highest deep leakage potential scores); (3) the number of wells included in the monitoring sampling plan; and (4) the relationship between a well’s leakage potential score and its relative probability of leakage. Cases where the deep well leakage potential scores are fully or partially informative of the relative leakage probability are compared to a noninformative base case in which leakage is equiprobable across all wells in the Area of Review. The results show that accurate prior knowledge about the probability of well leakage adds measurable value to the ability to detect a leaking well during the monitoring program, and that the loss in detection ability due to imperfect knowledge of the leakage probability can be quantified. This work underscores the importance of a data-driven, risk-based monitoring program that incorporates uncertainty quantification into long-term monitoring sampling plans at geologic CO2 storage sites.
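
    The core Monte Carlo question, how likely a sampling plan is to include at least one leaking well, can be sketched in a few lines. Well counts, scores, and the score-to-probability relationship below are placeholders, not the study's values:

        import numpy as np

        rng = np.random.default_rng(2)
        n_wells, n_leaking, n_sampled, n_trials = 100, 3, 20, 20000

        # Hypothetical leakage-potential scores; higher score = higher assumed leakage probability
        scores = rng.uniform(0, 1, n_wells)
        leak_weights = scores / scores.sum()      # "informative" case: probability tracks the score

        def detection_power(judgmental_sampling):
            hits = 0
            for _ in range(n_trials):
                leaking = rng.choice(n_wells, size=n_leaking, replace=False, p=leak_weights)
                if judgmental_sampling:
                    sampled = np.argsort(scores)[-n_sampled:]   # judgmental: highest-scored wells
                else:
                    sampled = rng.choice(n_wells, size=n_sampled, replace=False)  # random design
                hits += len(np.intersect1d(leaking, sampled)) > 0
            return hits / n_trials

        print("random sampling power:    ", detection_power(False))
        print("judgmental sampling power:", detection_power(True))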

  2. On evaluating clustering procedures for use in classification

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Moritz, T. E.; Register, D. T.; Yao, S. S.; Eppler, W. G. (Principal Investigator)

    1979-01-01

    The problem of evaluating clustering algorithms and their respective computer programs for use in a preprocessing step for classification is addressed. In clustering for classification the probability of correct classification is suggested as the ultimate measure of accuracy on training data. A means of implementing this criterion and a measure of cluster purity are discussed. Examples are given. A procedure for cluster labeling that is based on cluster purity and sample size is presented.
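
    A common cluster-purity measure of the kind discussed, the fraction of each cluster accounted for by its majority class weighted by cluster size, can be computed as follows (a generic sketch, not necessarily the paper's exact definition):

        from collections import Counter

        def weighted_purity(cluster_labels, class_labels):
            """Size-weighted purity of a clustering against known training classes."""
            clusters = {}
            for c, y in zip(cluster_labels, class_labels):
                clusters.setdefault(c, []).append(y)
            total = len(class_labels)
            return sum(Counter(members).most_common(1)[0][1] for members in clusters.values()) / total

        # Toy example: 2 clusters over 8 labeled training pixels
        print(weighted_purity([0, 0, 0, 0, 1, 1, 1, 1],
                              ["wheat", "wheat", "wheat", "corn", "corn", "corn", "corn", "wheat"]))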

  3. Filtering data from the collaborative initial glaucoma treatment study for improved identification of glaucoma progression.

    PubMed

    Schell, Greggory J; Lavieri, Mariel S; Stein, Joshua D; Musch, David C

    2013-12-21

    Open-angle glaucoma (OAG) is a prevalent, degenerative ocular disease which can lead to blindness without proper clinical management. The tests used to assess disease progression are susceptible to process and measurement noise. The aim of this study was to develop a methodology which accounts for the inherent noise in the data and improves significant disease progression identification. Longitudinal observations from the Collaborative Initial Glaucoma Treatment Study (CIGTS) were used to parameterize and validate a Kalman filter model and logistic regression function. The Kalman filter estimates the true value of biomarkers associated with OAG and forecasts future values of these variables. We develop two logistic regression models via generalized estimating equations (GEE) for calculating the probability of experiencing significant OAG progression: one model based on the raw measurements from CIGTS and another model based on the Kalman filter estimates of the CIGTS data. Receiver operating characteristic (ROC) curves and associated area under the ROC curve (AUC) estimates are calculated using cross-fold validation. The logistic regression model developed using Kalman filter estimates as data input achieves higher sensitivity and specificity than the model developed using raw measurements. The mean AUC for the Kalman filter-based model is 0.961 while the mean AUC for the raw measurements model is 0.889. Hence, using the probability function generated via Kalman filter estimates and GEE for logistic regression, we are able to more accurately classify patients and instances as experiencing significant OAG progression. A Kalman filter approach for estimating the true value of OAG biomarkers resulted in data input which improved the accuracy of a logistic regression classification model compared to a model using raw measurements as input. This methodology accounts for process and measurement noise to enable improved discrimination between progression and nonprogression in chronic diseases.
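
    The filtering step can be illustrated with a one-dimensional random-walk Kalman filter applied to a noisy biomarker series. The process and measurement variances below are placeholders; the study estimates its model from CIGTS data and feeds the filtered values into the GEE logistic model:

        import numpy as np

        def kalman_filter_series(measurements, process_var=0.5, meas_var=4.0, x0=None, p0=10.0):
            """1-D Kalman filter for a random-walk state observed with noise."""
            x = measurements[0] if x0 is None else x0
            p = p0
            estimates = []
            for z in measurements:
                # Predict: random-walk model, state unchanged, uncertainty grows
                p = p + process_var
                # Update with the new measurement
                k = p / (p + meas_var)      # Kalman gain
                x = x + k * (z - x)
                p = (1 - k) * p
                estimates.append(x)
            return np.array(estimates)

        # Example: noisy visual-field-like measurements (dB), synthetic values
        raw = np.array([-2.1, -3.5, -1.8, -4.2, -3.0, -5.1, -4.4, -6.0])
        print(kalman_filter_series(raw).round(2))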

  4. Coherent attack on oblivious transfer based on single-qubit rotations

    NASA Astrophysics Data System (ADS)

    He, Guang Ping

    2018-04-01

    Recently a bit-string quantum oblivious transfer (OT) protocol based on single-qubit rotations was proposed (Rodrigues et al 2017 J. Phys. A: Math. Theor. 50 205301) and proven secure against few-qubit measurements. However, it was left as an open question whether the protocol remains secure against general attacks. Here, we close the gap by showing that if the receiver Bob can perform collective measurements on all qubits, then he can learn Alice’s secret message with a probability close to one. Thus the protocol fails to meet the security criterion of OT.

  5. Tolerancing aspheres based on manufacturing knowledge

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Kokot, S.; Fuchs, U.

    2017-10-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, these analyses all rely on statistics, which usually means several hundred or several thousand systems are required for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed aiming for a robust optical tolerancing process.

  6. Tolerancing aspheres based on manufacturing statistics

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Möhl, A.; Fuchs, U.

    2017-11-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions can be assumed, these analyses all rely on statistics, which usually means several hundred or several thousand systems are required for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The huge database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed aiming for a robust optical tolerancing process.

  7. Study of Heavy-ion Induced Fission for Heavy Element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2014-05-01

    Fission fragment mass distributions were measured in heavy-ion induced fission of 238U. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model for the reactions of 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  8. Landslide Hazard from Coupled Inherent and Dynamic Probabilities

    NASA Astrophysics Data System (ADS)

    Strauch, R. L.; Istanbulluoglu, E.; Nudurupati, S. S.

    2015-12-01

    Landslide hazard research has typically been conducted independently from hydroclimate research. We sought to unify these two lines of research to provide regional scale landslide hazard information for risk assessments and resource management decision-making. Our approach couples an empirical inherent landslide probability, based on a frequency ratio analysis, with a numerical dynamic probability, generated by combining subsurface water recharge and surface runoff from the Variable Infiltration Capacity (VIC) macro-scale land surface hydrologic model with a finer resolution probabilistic slope stability model. Landslide hazard mapping is advanced by combining static and dynamic models of stability into a probabilistic measure of geohazard prediction in both space and time. This work will aid resource management decision-making in current and future landscape and climatic conditions. The approach is applied as a case study in North Cascade National Park Complex in northern Washington State.
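
    The inherent component rests on a frequency-ratio analysis, which compares how often landslides occur in a map class with how common that class is overall. A compact sketch of one common form of the frequency ratio, applied to an invented categorical raster:

        import numpy as np

        def frequency_ratio(class_raster, landslide_mask):
            """Frequency ratio per class: (share of landslide cells in class) / (share of all cells in class)."""
            ratios = {}
            total_cells = class_raster.size
            total_slides = landslide_mask.sum()
            for cls in np.unique(class_raster):
                in_class = class_raster == cls
                ratios[int(cls)] = ((landslide_mask & in_class).sum() / total_slides) / (in_class.sum() / total_cells)
            return ratios

        # Toy 1-D "raster" of slope classes and observed landslide cells
        classes = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
        slides  = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1, 1], dtype=bool)
        print(frequency_ratio(classes, slides))   # ratio > 1 marks classes prone to landsliding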

  9. Seismic fragility assessment of low-rise stone masonry buildings

    NASA Astrophysics Data System (ADS)

    Abo-El-Ezz, Ahmad; Nollet, Marie-José; Nastev, Miroslav

    2013-03-01

    Many historic buildings in old urban centers in Eastern Canada are made of stone masonry reputed to be highly vulnerable to seismic loads. Seismic risk assessment of stone masonry buildings is therefore the first step in the risk mitigation process to provide adequate planning for retrofit and preservation of historical urban centers. This paper focuses on development of analytical displacement-based fragility curves reflecting the characteristics of existing stone masonry buildings in Eastern Canada. The old historic center of Quebec City has been selected as a typical study area. The standard fragility analysis combines the inelastic spectral displacement, a structure-dependent earthquake intensity measure, and the building damage state correlated to the induced building displacement. The proposed procedure consists of a three-step development process: (1) mechanics-based capacity model, (2) displacement-based damage model and (3) seismic demand model. The damage estimation for a uniform hazard scenario of 2% in 50 years probability of exceedance indicates that slight to moderate damage is the most probable damage experienced by these stone masonry buildings. Comparison is also made with fragility curves implicit in the seismic risk assessment tools Hazus and ELER. Hazus shows the highest probability of the occurrence of no to slight damage, whereas the highest probability of extensive and complete damage is predicted with ELER. This comparison shows the importance of the development of fragility curves specific to the generic construction characteristics in the study area and emphasizes the need for critical use of regional risk assessment tools and generated results.
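
    Fragility curves of this kind are typically expressed as lognormal probabilities of reaching or exceeding a damage state given the spectral displacement demand. A generic sketch with assumed median capacities and dispersions (not the values derived for the Quebec City building stock):

        import math

        def p_exceed(sd_demand, sd_median, beta):
            """Lognormal fragility: P(damage state reached or exceeded | spectral displacement demand)."""
            z = math.log(sd_demand / sd_median) / beta
            return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        # Assumed median capacities (cm) and dispersions for four damage states of a masonry type
        damage_states = {"slight": (0.5, 0.6), "moderate": (1.1, 0.6),
                         "extensive": (2.6, 0.7), "complete": (6.0, 0.7)}
        demand = 1.5  # cm, inelastic spectral displacement for the scenario
        for name, (median, beta) in damage_states.items():
            print(f"P(>= {name}): {p_exceed(demand, median, beta):.2f}")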

  10. A Decision Support System for effective use of probability forecasts

    NASA Astrophysics Data System (ADS)

    De Kleermaeker, Simone; Verkade, Jan

    2013-04-01

    Often, water management decisions are based on hydrological forecasts. These forecasts, however, are affected by inherent uncertainties. It is increasingly common for forecasting agencies to make explicit estimates of these uncertainties and thus produce probabilistic forecasts. Associated benefits include the decision makers' increased awareness of forecasting uncertainties and the potential for risk-based decision-making. Also, a stricter separation of responsibilities between forecasters and decision makers can be made. However, simply having probabilistic forecasts available is not sufficient to realise the associated benefits. Additional effort is required in areas such as forecast visualisation and communication, decision making under uncertainty and forecast verification. Also, revised separation of responsibilities requires a shift in institutional arrangements and responsibilities. A recent study identified a number of additional issues related to the effective use of probability forecasts. When moving from deterministic to probability forecasting, a dimension is added to an already multi-dimensional problem; this makes it increasingly difficult for forecast users to extract relevant information from a forecast. A second issue is that while probability forecasts provide a necessary ingredient for risk-based decision making, other ingredients may not be present. For example, in many cases no estimates of flood damage, of costs of management measures and of damage reduction are available. This paper presents the results of the study, including some suggestions for resolving these issues and the integration of those solutions in a prototype decision support system (DSS). A pathway for further development of the DSS is outlined.

  11. Rényi and Tsallis formulations of separability conditions in finite dimensions

    NASA Astrophysics Data System (ADS)

    Rastegin, Alexey E.

    2017-12-01

    Separability conditions for a bipartite quantum system of finite-dimensional subsystems are formulated in terms of Rényi and Tsallis entropies. Entropic uncertainty relations often lead to entanglement criteria. We propose a new approach based on the convolution of discrete probability distributions. Measurements on a total system are constructed from local ones according to the convolution scheme. Separability conditions are derived on the basis of uncertainty relations of the Maassen-Uffink type as well as majorization relations. On each of the subsystems, we use a pair of sets of subnormalized vectors that form rank-one POVMs. We also obtain entropic separability conditions for local measurements with a special structure, such as mutually unbiased bases and symmetric informationally complete measurements. The relevance of the derived separability conditions is demonstrated with several examples.

  12. Work Measurement as a Generalized Quantum Measurement

    NASA Astrophysics Data System (ADS)

    Roncaglia, Augusto J.; Cerisola, Federico; Paz, Juan Pablo

    2014-12-01

    We present a new method to measure the work w performed on a driven quantum system and to sample its probability distribution P(w). The method is based on a simple fact that remained unnoticed until now: Work on a quantum system can be measured by performing a generalized quantum measurement at a single time. Such a measurement, which technically speaking is a positive operator valued measure, reduces to an ordinary projective measurement on an enlarged system. This observation not only demystifies work measurement but also suggests a new quantum algorithm to efficiently sample the distribution P(w). This can be used, in combination with fluctuation theorems, to estimate free energies of quantum states on a quantum computer.

  13. A possible loophole in the theorem of Bell.

    PubMed

    Hess, K; Philipp, W

    2001-12-04

    The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.

  14. Comparative Risk Analysis of Two Culicoides-Borne Diseases in Horses: Equine Encephalosis More Likely to Enter France than African Horse Sickness.

    PubMed

    Faverjon, C; Leblond, A; Lecollinet, S; Bødker, R; de Koeijer, A A; Fischer, E A J

    2017-12-01

    African horse sickness (AHS) and equine encephalosis (EE) are Culicoides-borne viral diseases that could have the potential to spread across Europe if introduced, thus being potential threats for the European equine industry. Both share similar epidemiology, transmission patterns and geographical distribution. Using stochastic spatiotemporal models of virus entry, we assessed and compared the probabilities of both viruses entering France via two pathways: importation of live-infected animals or importation of infected vectors. Analyses were performed for three consecutive years (2010-2012). Seasonal and regional differences in virus entry probabilities were the same for both diseases. However, the probability of EE entry was much higher than the probability of AHS entry. Interestingly, the most likely entry route differed between AHS and EE: AHS has a higher probability to enter through an infected vector and EE has a higher probability to enter through an infectious host. Consequently, different effective protective measures were identified by 'what-if' scenarios for the two diseases. The implementation of vector protection on all animals (equine and bovine) coming from low-risk regions before their importation was the most effective in reducing the probability of AHS entry. On the other hand, the most significant reduction in the probability of EE entry was obtained by the implementation of quarantine before import for horses coming from both EU and non-EU countries. The developed models can be useful to implement risk-based surveillance. © 2016 Blackwell Verlag GmbH.

  15. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    NASA Astrophysics Data System (ADS)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

    This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties at the in-cylinder temperature estimation. The model only has one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions by using a fuel balance and a lambda sensor and differences below 1 % were found.
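
    The knock-probability idea, a deterministic autoignition criterion driven by in-cylinder temperature plus an exogenous noise term on that temperature, can be sketched with an Arrhenius-based knock integral and Monte Carlo propagation. The rate constants, noise level, and synthetic traces below are placeholders, not the calibrated model from the paper:

        import numpy as np

        rng = np.random.default_rng(3)

        def knock_probability(T_trace, p_trace, dt, sigma_T=15.0, n_mc=5000, A=0.02, n=-1.7, B=3800.0):
            """Fraction of Monte Carlo draws in which the knock integral reaches 1 by the end of the trace."""
            knocks = 0
            for _ in range(n_mc):
                T_noisy = T_trace + rng.normal(0.0, sigma_T)       # exogenous temperature error (K)
                tau = A * p_trace**n * np.exp(B / T_noisy)         # Arrhenius-like ignition delay (s)
                knocks += np.sum(dt / tau) >= 1.0                  # Livengood-Wu-style criterion
            return knocks / n_mc

        # Synthetic end-gas temperature (K) and pressure (bar) traces over 2 ms of compression/combustion
        t = np.linspace(0, 2e-3, 200)
        T = 750 + 150e3 * t
        p = 30 + 20e3 * t
        print("estimated knock probability:", knock_probability(T, p, dt=t[1] - t[0]))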

  16. Estimating soil moisture exceedance probability from antecedent rainfall

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.

    2016-12-01

    The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
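
    As a rough illustration of the kind of output described here, the sketch below estimates the probability that VWC exceeds a threshold as a function of antecedent rainfall from paired observations, with a simple bootstrap band standing in for the uncertainty that the paper derives from its stochastic imputation of missing gauge data. The data, threshold, and bin widths are synthetic assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for co-located observations (the real inputs are gauge/VWC records).
antecedent = rng.uniform(0, 600, 1000)                        # seasonal rain to date [mm]
vwc = 0.15 + 0.0005 * antecedent + rng.normal(0, 0.05, 1000)  # volumetric water content
threshold = 0.35                                              # assumed VWC threshold

bins = np.arange(0, 650, 50)
centers = 0.5 * (bins[:-1] + bins[1:])

def exceedance_curve(rain, wc):
    """P(VWC > threshold) within each antecedent-rainfall bin."""
    idx = np.digitize(rain, bins) - 1
    return np.array([np.mean(wc[idx == k] > threshold) if np.any(idx == k) else np.nan
                     for k in range(len(bins) - 1)])

curve = exceedance_curve(antecedent, vwc)

# Bootstrap resampling of the paired records gives a rough uncertainty band on the curve.
boot = []
for _ in range(200):
    idx = rng.integers(0, len(vwc), len(vwc))
    boot.append(exceedance_curve(antecedent[idx], vwc[idx]))
lo, hi = np.nanpercentile(np.array(boot), [5, 95], axis=0)

for c, p, a, b in zip(centers, curve, lo, hi):
    print(f"rain ~{c:4.0f} mm: P(exceed) = {p:.2f}  [{a:.2f}, {b:.2f}]")
```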

  17. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b-a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
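
    For orientation only, the sketch below evaluates the entropy line integral along the straight-line path between two probability vectors by a Riemann sum. This is not the paper's variational (Euler-Lagrange) solution or its elementary distance; it is merely an upper bound on the minimal-entropy path distance, shown next to the Euclidean distance ||b-a||_2 for comparison. The vectors a and b are made-up base frequencies.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) of a probability vector (natural log)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def straight_line_entropy_length(a, b, n=1000):
    """Riemann-sum approximation of the line integral of H(p) along p(t) = (1-t)a + t b.

    A crude stand-in for (and upper bound on) the minimal-entropy path distance,
    which the paper obtains from a constrained variational problem.
    """
    ts = np.linspace(0.0, 1.0, n + 1)
    path = (1 - ts)[:, None] * a + ts[:, None] * b
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)   # arc-length elements
    mid = 0.5 * (path[:-1] + path[1:])                    # segment midpoints
    return float(np.sum(np.array([entropy(p) for p in mid]) * seg))

a = np.array([0.1, 0.2, 0.3, 0.4])   # e.g. base frequencies A, C, G, T of one profile
b = np.array([0.4, 0.3, 0.2, 0.1])
print("straight-line entropy length:", straight_line_entropy_length(a, b))
print("Euclidean distance ||b-a||_2:", np.linalg.norm(b - a))
```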

  18. New Splitting Criteria for Decision Trees in Stationary Data Streams.

    PubMed

    Jaworski, Maciej; Duda, Piotr; Rutkowski, Leszek

    2018-06-01

    The most popular tools for stream data mining are based on decision trees. In the previous 15 years, all designed methods, headed by the very fast decision tree algorithm, relied on Hoeffding's inequality, and hundreds of researchers followed this scheme. Recently, we have demonstrated that although the Hoeffding decision trees are an effective tool for dealing with stream data, they are a purely heuristic procedure; for example, classical decision trees such as ID3 or CART cannot be adapted to data stream mining using Hoeffding's inequality. Therefore, there is an urgent need to develop new algorithms, which are both mathematically justified and characterized by good performance. In this paper, we address this problem by developing a family of new splitting criteria for classification in stationary data streams and investigating their probabilistic properties. The new criteria, derived using appropriate statistical tools, are based on the misclassification error and the Gini index impurity measures. The general division of splitting criteria into two types is proposed. Attributes chosen based on type- splitting criteria guarantee, with high probability, the highest expected value of split measure. Type- criteria ensure that the chosen attribute is the same, with high probability, as it would be chosen based on the whole infinite data stream. Moreover, in this paper, two hybrid splitting criteria are proposed, which are the combinations of single criteria based on the misclassification error and Gini index.
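
    The sketch below shows only the two impurity measures named in the abstract (misclassification error and Gini index) and the plain impurity-decrease split measure built from them; it does not reproduce the paper's stream-specific criteria or their probabilistic guarantees. The toy labels and attribute values are made up.

```python
import numpy as np

def gini(labels):
    """Gini index impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def misclassification_error(labels):
    """Misclassification-error impurity: 1 minus the majority-class frequency."""
    _, counts = np.unique(labels, return_counts=True)
    return 1.0 - counts.max() / counts.sum()

def split_measure(labels, attribute_values, impurity=gini):
    """Impurity decrease obtained by splitting on a discrete attribute."""
    n = len(labels)
    weighted = sum((np.sum(attribute_values == v) / n) * impurity(labels[attribute_values == v])
                   for v in np.unique(attribute_values))
    return impurity(labels) - weighted

y = np.array([0, 0, 1, 1, 1, 0, 1, 1])
x = np.array(['a', 'a', 'b', 'b', 'b', 'a', 'a', 'b'])
print("Gini-based split measure:             ", split_measure(y, x, gini))
print("Misclassification-based split measure:", split_measure(y, x, misclassification_error))
```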

  19. Economic Choices Reveal Probability Distortion in Macaque Monkeys

    PubMed Central

    Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-01-01

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. PMID:25698750
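
    The abstract reports inverted-S weighting functions without giving their parametric form; the sketch below uses the common one-parameter Tversky-Kahneman (1992) form purely to illustrate what "overweight low, underweight high" looks like numerically. The value gamma = 0.6 is an assumption, not a fitted parameter from this study.

```python
import numpy as np

def weight(p, gamma=0.6):
    """One-parameter inverted-S probability weighting function (Tversky & Kahneman, 1992)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma)

# Low probabilities are overweighted (w(p) > p), high probabilities underweighted (w(p) < p).
for p in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"p = {p:.2f} -> w(p) = {weight(p):.3f}")
```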

  20. Economic choices reveal probability distortion in macaque monkeys.

    PubMed

    Stauffer, William R; Lak, Armin; Bossaerts, Peter; Schultz, Wolfram

    2015-02-18

    Economic choices are largely determined by two principal elements, reward value (utility) and probability. Although nonlinear utility functions have been acknowledged for centuries, nonlinear probability weighting (probability distortion) was only recently recognized as a ubiquitous aspect of real-world choice behavior. Even when outcome probabilities are known and acknowledged, human decision makers often overweight low probability outcomes and underweight high probability outcomes. Whereas recent studies measured utility functions and their corresponding neural correlates in monkeys, it is not known whether monkeys distort probability in a manner similar to humans. Therefore, we investigated economic choices in macaque monkeys for evidence of probability distortion. We trained two monkeys to predict reward from probabilistic gambles with constant outcome values (0.5 ml or nothing). The probability of winning was conveyed using explicit visual cues (sector stimuli). Choices between the gambles revealed that the monkeys used the explicit probability information to make meaningful decisions. Using these cues, we measured probability distortion from choices between the gambles and safe rewards. Parametric modeling of the choices revealed classic probability weighting functions with inverted-S shape. Therefore, the animals overweighted low probability rewards and underweighted high probability rewards. Empirical investigation of the behavior verified that the choices were best explained by a combination of nonlinear value and nonlinear probability distortion. Together, these results suggest that probability distortion may reflect evolutionarily preserved neuronal processing. Copyright © 2015 Stauffer et al.

  1. Lost in folding space? Comparing four variants of the thermodynamic model for RNA secondary structure prediction.

    PubMed

    Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert

    2011-11-03

    Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single, "optimal" structure by free energy minimization, they enumerate near-optimal structures, they compute base pair probabilities and dot plots, representative structures of different abstract shapes, or Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie within the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similar enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. On the side, we observe that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high probability shapes. This calls for an investigation of new strategies to extract high probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to be modified. Based on our study, future work on thermodynamic RNA folding may make a choice of model based on our empirical data. It can take our implementations as a starting point for further program development.

  2. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap method for deriving vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the expected acoustic, random vibration and shock environments seen on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels based upon the amount of data available, and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The results of the two methods are compared via an example utilizing actual spacecraft vibroacoustic data.
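
    As a minimal sketch of the resampling idea (not the paper's actual procedure or data), the code below draws bootstrap replicates of a 95th-percentile level from a small set of measured levels and uses the median of the replicates as a P95/50-style estimate. The sample values are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for measured maximum spectral levels from a handful of flights [dB].
data = rng.normal(130.0, 3.0, size=12)

def bootstrap_replicates(x, stat=lambda s: np.percentile(s, 95), n_boot=10_000):
    """Bootstrap replicates of a statistic computed from resampled data."""
    return np.array([stat(x[rng.integers(0, len(x), len(x))]) for _ in range(n_boot)])

reps = bootstrap_replicates(data)
# A P95/50-style level: median (50% confidence) of the replicates of the 95th percentile.
print("P95/50 estimate: %.1f dB" % np.median(reps))
# The spread of the replicates indicates the confidence attached to that level.
print("90%% interval of replicates: %.1f - %.1f dB" % tuple(np.percentile(reps, [5, 95])))
```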

  3. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. To this end, the pollutant concentration was considered an integral quantity which may be accepted if one properly chooses the unit of measurement (in this case μg m-3) and if account is taken of the limited accuracy of measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles, cumulative probabilities of threshold concentrations being exceeded, and so on, at the grid points of a network covering the area of interest. This requires only accurate estimates of the means and variances of the concentration series, which can readily be obtained through routine air pollution dispersion modelling.
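
    Assuming the common identification of the Eggenberger-Polya law with a negative binomial distribution, the sketch below parameterizes it by a modelled mean and variance (method of moments) and reads off a quantile and a threshold-exceedance probability, as described for the grid points. The mean, variance and threshold are placeholder numbers.

```python
from scipy.stats import nbinom

# Modelled mean and variance of the hourly SO2 concentration at one grid point [ug/m3].
mean, var = 40.0, 400.0           # placeholder values from a dispersion model

# Method-of-moments parameters of the negative binomial (Eggenberger-Polya) law.
p = mean / var                    # requires var > mean (overdispersion)
n = mean ** 2 / (var - mean)

dist = nbinom(n, p)
print("98th-percentile concentration:", dist.ppf(0.98))
print("P(concentration > 125 ug/m3) :", dist.sf(125))
```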

  4. Random Walks on Homeo(S1)

    NASA Astrophysics Data System (ADS)

    Malicet, Dominique

    2017-12-01

    In this paper, we study random walks g_n = f_{n-1} ∘ … ∘ f_0 on the group Homeo(S1) of homeomorphisms of the circle, where the homeomorphisms f_k are chosen randomly and independently with respect to the same probability measure ν. We prove that, under the sole condition that there is no probability measure invariant under ν-almost every homeomorphism, the random walk almost surely contracts small intervals. This generalizes what has been known on the subject until now, since various conditions on ν had previously been imposed in order to obtain the contraction phenomenon. Moreover, we obtain the surprising fact that the rate of contraction is exponential, even in the absence of smoothness assumptions on the f_k. We deduce various dynamical consequences for the random walk (g_n): finiteness of ergodic stationary measures, distribution of the trajectories, asymptotic law of the evaluations, etc. The proof of the main result is based on a modification of the Ávila-Viana invariance principle, working for continuous cocycles on a space fibred in circles.

  5. When Using the Mean is Meaningless: Examples from Probability Theory and Cardiology.

    ERIC Educational Resources Information Center

    Liebovitch, Larry S.; Todorov, Angelo T.; Wood, Mark A.; Ellenbogen, Kenneth A.

    This chapter describes how the mean of fractal processes does not exist and is therefore not a meaningful measure of such data. It discusses the importance of remaining open to the possibility that the assumptions on which the mean is based are sometimes not satisfied by analytic techniques. (KHR)

  6. Height and Weight of Children: United States.

    ERIC Educational Resources Information Center

    Hamill, Peter V. V.; And Others

    This report contains national estimates based on findings from the Health Examination Survey in 1963-65 on height and weight measurements of children 6- to 11-years-old. A nationwide probability sample of 7,119 children was selected to represent the noninstitutionalized children (about 24 million) in this age group. Height was obtained in stocking…

  7. Fundamental Studies of Molecular Secondary Ion Mass Spectrometry Ionization Probability Measured With Femtosecond, Infrared Laser Post-Ionization

    NASA Astrophysics Data System (ADS)

    Popczun, Nicholas James

    The work presented in this dissertation is focused on increasing the fundamental understanding of molecular secondary ion mass spectrometry (SIMS) ionization probability by measuring neutral molecule behavior with femtosecond, mid-infrared laser post-ionization (LPI). To accomplish this, a model system was designed with a homogeneous organic film comprised of coronene, a polycyclic hydrocarbon which provides substantial LPI signal. Careful consideration was given to signal lost to photofragmentation and undersampling of the sputtered plume that is contained within the extraction volume of the mass spectrometer. This study provided the first ionization probability for an organic compound measured directly by the relative secondary ions and sputtered neutral molecules using a strong-field ionization (SFI) ionization method. The measured value of ~10^-3 is near the upper limit of previous estimations of ionization probability for organic molecules. The measurement method was refined, and then applied to a homogeneous guanine film, which produces protonated secondary ions. This measurement found the probability of protonation to occur to be on the order of 10^-3, although with less uncertainty than that of the coronene. Finally, molecular depth profiles were obtained for SIMS and LPI signals as a function of primary ion fluence to determine the effect of ionization probability on the depth resolution of chemical interfaces. The interfaces chosen were organic/inorganic interfaces to limit chemical mixing. It is shown that approaching the inorganic chemical interface can enhance or suppress the ionization probability for the organic molecule, which can lead to artificially sharpened or broadened depths, respectively. Overall, the research described in this dissertation provides new methods for measuring ionization efficiency in SIMS in both absolute and relative terms, and will inform both innovation in the technique, as well as increase understanding of depth-dependent experiments.

  8. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.

  9. ERMiT: Estimating Post-Fire Erosion in Probabilistic Terms

    NASA Astrophysics Data System (ADS)

    Pierson, F. B.; Robichaud, P. R.; Elliot, W. J.; Hall, D. E.; Moffet, C. A.

    2006-12-01

    Mitigating the impact of post-wildfire runoff and erosion on life, property, and natural resources has cost the United States government tens of millions of dollars over the past decade. The decision of where, when, and how to apply the most effective mitigation treatments requires land managers to assess the risk of damaging runoff and erosion events occurring after a fire. The Erosion Risk Management Tool (ERMiT) is a web-based application that estimates erosion in probabilistic terms on burned and recovering forest, range, and chaparral lands. Unlike most erosion prediction models, ERMiT does not provide 'average annual erosion rates'; rather, it provides a distribution of erosion rates with the likelihood of their occurrence. ERMiT combines rain event variability with spatial and temporal variabilities of hillslope burn severity, soil properties, and ground cover to estimate Water Erosion Prediction Project (WEPP) model input parameter values. Based on 20 to 40 individual WEPP runs, ERMiT produces a distribution of rain event erosion rates with a probability of occurrence for each of five post-fire years. Over the 5 years of modeled recovery, the occurrence probability of the less erodible soil parameters is increased and the occurrence probability of the more erodible soil parameters is decreased. In addition, the occurrence probabilities and the four spatial arrangements of burn severity (arrangements of overland flow elements, OFEs) are shifted toward lower burn severity with each year of recovery. These yearly adjustments are based on field measurements made through post-fire recovery periods. ERMiT also provides rain event erosion rate distributions for hillslopes that have been treated with seeding, straw mulch, straw wattles and contour-felled log erosion barriers. Such output can help managers make erosion mitigation treatment decisions based on the probability of high sediment yields occurring, the value of resources at risk for damage, cost, and other management considerations.

  10. What is preexisting strength? Predicting free association probabilities, similarity ratings, and cued recall probabilities.

    PubMed

    Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B

    2005-08-01

    Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength.

  11. An intelligent system based on fuzzy probabilities for medical diagnosis - a study in aphasia diagnosis

    PubMed Central

    Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi

    2009-01-01

    BACKGROUND: Aphasia diagnosis is particularly challenging due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, large number of measurements with imprecision, natural diversity and subjectivity in test objects as well as in opinions of experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed that constructs input membership functions as well as determines an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distribution of testing/training sets, a statistical t-test of significance is applied to compare fuzzy approach results with NN results as well as author's earlier work using fuzzy logic. The proposed fuzzy probability estimator approach clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second type of fuzzy probability classifiers, i.e. spontaneous speech and comprehensive model, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for diagnosis of four Aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer Aphasia features. PMID:21772867

  12. Measuring attention using the Posner cuing paradigm: the role of across and within trial target probabilities

    PubMed Central

    Hayward, Dana A.; Ristic, Jelena

    2013-01-01

    Numerous studies conducted within the recent decades have utilized the Posner cuing paradigm for eliciting, measuring, and theoretically characterizing attentional orienting. However, the data from recent studies suggest that the Posner cuing task might not provide an unambiguous measure of attention, as reflexive spatial orienting has been found to interact with extraneous processes engaged by the task's typical structure, i.e., the probability of target presence across trials, which affects tonic alertness, and the probability of target presence within trials, which affects voluntary temporal preparation. To understand the contribution of each of these two processes to the measurement of attentional orienting we assessed their individual and combined effects on reflexive attention elicited by a spatially nonpredictive peripheral cue. Our results revealed that the magnitude of spatial orienting was modulated by joint changes in the global probability of target presence across trials and the local probability of target presence within trials, while the time course of spatial orienting was susceptible to changes in the probability of target presence across trials. These data thus raise important questions about the choice of task parameters within the Posner cuing paradigm and their role in both the measurement and theoretical attributions of the observed attentional effects. PMID:23730280

  13. Emergency Assessment of Debris-Flow Hazards from Basins Burned by the Piru, Simi, and Verdale Fires of 2003, Southern California

    USGS Publications Warehouse

    Cannon, Susan H.; Gartner, Joseph E.; Rupert, Michael G.; Michael, John A.

    2003-01-01

    These maps present preliminary assessments of the probability of debris-flow activity and estimates of peak discharges that can potentially be generated by debris-flows issuing from basins burned by the Piru, Simi and Verdale Fires of October 2003 in southern California in response to the 25-year, 10-year, and 2-year 1-hour rain storms. The probability maps are based on the application of a logistic multiple regression model that describes the percent chance of debris-flow production from an individual basin as a function of burned extent, soil properties, basin gradients and storm rainfall. The peak discharge maps are based on application of a multiple-regression model that can be used to estimate debris-flow peak discharge at a basin outlet as a function of basin gradient, burn extent, and storm rainfall. Probabilities of debris-flow occurrence for the Piru Fire range between 2 and 94% and estimates of debris flow peak discharges range between 1,200 and 6,640 ft3/s (34 to 188 m3/s). Basins burned by the Simi Fire show probabilities for debris-flow occurrence between 1 and 98%, and peak discharge estimates between 1,130 and 6,180 ft3/s (32 and 175 m3/s). The probabilities for debris-flow activity calculated for the Verdale Fire range from negligible values to 13%. Peak discharges were not estimated for this fire because of these low probabilities. These maps are intended to identify those basins that are most prone to the largest debris-flow events and provide information for the preliminary design of mitigation measures and for the planning of evacuation timing and routes.
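
    The maps are built from a logistic multiple regression giving the percent chance of debris-flow production per basin. A minimal sketch of that functional form is given below; the coefficients and predictor values are entirely hypothetical, not the fitted USGS model.

```python
import numpy as np

def debris_flow_probability(x, beta):
    """Logistic-regression percent chance of debris-flow production from a basin.

    x    : basin predictors (e.g. burned extent, soil property, gradient, storm rainfall)
    beta : intercept followed by coefficients; the values below are hypothetical.
    """
    eta = beta[0] + np.dot(beta[1:], x)
    return 1.0 / (1.0 + np.exp(-eta))

beta = np.array([-4.0, 3.5, 1.2, 2.0, 0.08])   # hypothetical coefficients
basin = np.array([0.7, 0.4, 0.5, 25.0])        # e.g. fractions, gradient, rainfall [mm/h]
print("P(debris flow) = %.0f%%" % (100 * debris_flow_probability(basin, beta)))
```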

  14. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
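
    The sketch below illustrates the general idea on synthetic data only: a propensity score (predicted probability of verification given the test result) is estimated with sklearn's LogisticRegression as a stand-in, verified subjects are grouped into propensity strata, and a bias-corrected sensitivity is assembled from stratum-level estimates. It is not the authors' estimator; variable names, the data-generating model and the number of strata are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5000

# Synthetic data: covariate x, true disease d, test result t, verification indicator v.
x = rng.normal(size=n)
d = rng.binomial(1, 1 / (1 + np.exp(-(x - 0.5))))
t = np.where(d == 1, rng.binomial(1, 0.85, n), rng.binomial(1, 0.10, n))
# Verification depends on the test result and the covariate (MAR), so naive
# estimates based on verified subjects only are biased.
v = rng.binomial(1, 1 / (1 + np.exp(-(2 * t + x - 1))))
d_obs = np.where(v == 1, d, -1)            # disease status observed only if verified

def corrected_joint(t_val, n_strata=5):
    """P(D=1, T=t_val) estimated via propensity-score stratification."""
    sel = t == t_val
    ps = LogisticRegression().fit(x[sel, None], v[sel]).predict_proba(x[sel, None])[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.digitize(ps, edges[1:-1]), 0, n_strata - 1)
    total = 0.0
    for s in range(n_strata):
        in_s = strata == s
        ver = in_s & (v[sel] == 1)
        if ver.any():
            # disease rate among verified subjects in the stratum, weighted by stratum size
            total += np.mean(d_obs[sel][ver]) * in_s.mean()
    return total * sel.mean()

p11, p01 = corrected_joint(1), corrected_joint(0)
sens = p11 / (p11 + p01)
naive_sens = np.sum((t == 1) & (d_obs == 1)) / np.sum(d_obs == 1)
print("corrected sensitivity: %.3f   naive (verified-only): %.3f   true: %.3f"
      % (sens, naive_sens, np.mean(t[d == 1])))
```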

  15. Fitness prospects: effects of age, sex and recruitment age on reproductive value in a long-lived seabird.

    PubMed

    Zhang, He; Rebke, Maren; Becker, Peter H; Bouwhuis, Sandra

    2015-01-01

    Reproductive value is an integrated measure of survival and reproduction fundamental to understanding life-history evolution and population dynamics, but little is known about intraspecific variation in reproductive value and factors explaining such variation, if any. By applying generalized additive mixed models to longitudinal individual-based data of the common tern Sterna hirundo, we estimated age-specific annual survival probability, breeding probability and reproductive performance, based on which we calculated age-specific reproductive values. We investigated effects of sex and recruitment age (RA) on each trait. We found age effects on all traits, with survival and breeding probability declining with age, while reproductive performance first improved with age before levelling off. We only found a very small, marginally significant, sex effect on survival probability, but evidence for decreasing age-specific breeding probability and reproductive performance with RA. As a result, males had slightly lower age-specific reproductive values than females, while birds of both sexes that recruited at the earliest ages of 2 and 3 years (i.e. 54% of the tern population) had somewhat higher fitness prospects than birds recruiting at later ages. While the RA effects on breeding probability and reproductive performance were statistically significant, these effects were not large enough to translate to significant effects on reproductive value. Age-specific reproductive values provided evidence for senescence, which came with fitness costs in a range of 17-21% for the sex-RA groups. Our study suggests that intraspecific variation in reproductive value may exist, but that, in the common tern, the differences are small. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.

  16. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability.

    PubMed

    Dawson, Michael R W; Gupta, Maya

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent's environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned.
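
    A minimal sketch of the cues-in-isolation setting is given below: a single logistic output unit is trained with a delta-rule update on binary rewards, and its activation for each cue settles near that cue's reward probability. The reward probabilities, learning rate and trial count are assumptions for illustration, not the simulations reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Each of four cues, presented in isolation, signals its own reward probability.
reward_p = np.array([0.2, 0.4, 0.6, 0.8])
w, b, lr = np.zeros(4), 0.0, 0.05

for _ in range(200_000):
    cue = rng.integers(4)
    x = np.zeros(4)
    x[cue] = 1.0
    y = float(rng.random() < reward_p[cue])         # reward delivered probabilistically
    out = 1.0 / (1.0 + np.exp(-(w @ x + b)))        # logistic output unit
    err = y - out                                   # delta-rule (cross-entropy gradient)
    w += lr * err * x
    b += lr * err

activations = 1.0 / (1.0 + np.exp(-(w + b)))        # output when each cue is shown alone
print("trained activations :", np.round(activations, 3))
print("reward probabilities:", reward_p)
```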

  17. Probability matching in perceptrons: Effects of conditional dependence and linear nonseparability

    PubMed Central

    2017-01-01

    Probability matching occurs when the behavior of an agent matches the likelihood of occurrence of events in the agent’s environment. For instance, when artificial neural networks match probability, the activity in their output unit equals the past probability of reward in the presence of a stimulus. Our previous research demonstrated that simple artificial neural networks (perceptrons, which consist of a set of input units directly connected to a single output unit) learn to match probability when presented different cues in isolation. The current paper extends this research by showing that perceptrons can match probabilities when presented simultaneous cues, with each cue signaling different reward likelihoods. In our first simulation, we presented up to four different cues simultaneously; the likelihood of reward signaled by the presence of one cue was independent of the likelihood of reward signaled by other cues. Perceptrons learned to match reward probabilities by treating each cue as an independent source of information about the likelihood of reward. In a second simulation, we violated the independence between cues by making some reward probabilities depend upon cue interactions. We did so by basing reward probabilities on a logical combination (AND or XOR) of two of the four possible cues. We also varied the size of the reward associated with the logical combination. We discovered that this latter manipulation was a much better predictor of perceptron performance than was the logical structure of the interaction between cues. This indicates that when perceptrons learn to match probabilities, they do so by assuming that each signal of a reward is independent of any other; the best predictor of perceptron performance is a quantitative measure of the independence of these input signals, and not the logical structure of the problem being learned. PMID:28212422

  18. Classical Physics and the Bounds of Quantum Correlations.

    PubMed

    Frustaglia, Diego; Baltanás, José P; Velázquez-Ahumada, María C; Fernández-Prieto, Armando; Lujambio, Aintzane; Losada, Vicente; Freire, Manuel J; Cabello, Adán

    2016-06-24

    A unifying principle explaining the numerical bounds of quantum correlations remains elusive, despite the efforts devoted to identifying it. Here, we show that these bounds are indeed not exclusive to quantum theory: for any abstract correlation scenario with compatible measurements, models based on classical waves produce probability distributions indistinguishable from those of quantum theory and, therefore, share the same bounds. We demonstrate this finding by implementing classical microwaves that propagate along meter-size transmission-line circuits and reproduce the probabilities of three emblematic quantum experiments. Our results show that the "quantum" bounds would also occur in a classical universe without quanta. The implications of this observation are discussed.

  19. Assessment of accident severity in the construction industry using the Bayesian theorem.

    PubMed

    Alizadeh, Seyed Shamseddin; Mortazavi, Seyed Bagher; Mehdi Sepehri, Mohammad

    2015-01-01

    Construction is a major source of employment in many countries. In construction, workers perform a great diversity of activities, each one with a specific associated risk. The aim of this paper is to identify workers who are at risk of accidents with severe consequences and classify these workers to determine appropriate control measures. We defined 48 groups of workers and used the Bayesian theorem to estimate posterior probabilities about the severity of accidents at the level of individuals in construction sector. First, the posterior probabilities of injuries based on four variables were provided. Then the probabilities of injury for 48 groups of workers were determined. With regard to marginal frequency of injury, slight injury (0.856), fatal injury (0.086) and severe injury (0.058) had the highest probability of occurrence. It was observed that workers with <1 year's work experience (0.168) had the highest probability of injury occurrence. The first group of workers, who were extensively exposed to risk of severe and fatal accidents, involved workers ≥ 50 years old, married, with 1-5 years' work experience, who had no past accident experience. The findings provide a direction for more effective safety strategies and occupational accident prevention and emergency programmes.
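
    To make the use of Bayes' theorem concrete, the sketch below computes posterior severity probabilities for two worker groups from a table of accident counts. The counts and the two-group split are hypothetical stand-ins; the study itself uses 48 groups defined by four variables.

```python
import numpy as np

# Hypothetical accident counts by worker group and severity (slight, severe, fatal).
counts = {
    "experience < 1 yr":  np.array([420, 40, 45]),
    "experience >= 1 yr": np.array([1200, 55, 70]),
}
severities = ["slight", "severe", "fatal"]

total = sum(counts.values())          # counts per severity over all groups
prior = total / total.sum()           # marginal (prior) probability of each severity

for group, c in counts.items():
    # Bayes' theorem: P(severity | group) proportional to P(group | severity) * P(severity)
    p_group_given_sev = c / total
    posterior = p_group_given_sev * prior
    posterior /= posterior.sum()
    print(group, dict(zip(severities, np.round(posterior, 3))))
```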

  20. CPROB: A COMPUTATIONAL TOOL FOR CONDUCTING CONDITIONAL PROBABILITY ANALYSIS

    EPA Science Inventory

    Conditional probability analysis measures the probability of observing one event given that another event has occurred. In an environmental context, conditional probability analysis helps assess the association between an environmental contaminant (i.e. the stressor) and the ec...

  1. Atmospheric Visibility Monitoring for planetary optical communications

    NASA Technical Reports Server (NTRS)

    Cowles, Kelly

    1991-01-01

    The Atmospheric Visibility Monitoring project endeavors to improve current atmospheric models and generate visibility statistics relevant to prospective earth-satellite optical communications systems. Three autonomous observatories are being used to measure atmospheric conditions on the basis of observed starlight; these data will yield clear-sky and transmission statistics for three sites with high clear-sky probabilities. Ground-based data will be compared with satellite imagery to determine the correlation between satellite data and ground-based observations.

  2. Evaluation of the 1077 keV γ-ray emission probability from 68Ga decay

    NASA Astrophysics Data System (ADS)

    Huang, Xiao-Long; Jiang, Li-Yang; Chen, Xiong-Jun; Chen, Guo-Chang

    2014-04-01

    68Ga decays to the excited states of 68Zn through the electron capture decay mode. The recommended values for the emission probability of the 1077 keV γ-ray given by the ENSDF and DDEP databases all use data from absolute measurements. In 2011, JIANG Li-Yang deduced a new value for the 1077 keV γ-ray emission probability by measuring the 69Ga(n,2n)68Ga reaction cross section. The new value is about 20% lower than values obtained from previous absolute measurements and evaluations. In this paper, the discrepancies among the measurements and evaluations are analyzed carefully and new recommended values are derived. Our recommended value for the emission probability of the 1077 keV γ-ray is (2.72±0.16)%.

  3. Quantum-Bayesian coherence

    NASA Astrophysics Data System (ADS)

    Fuchs, Christopher A.; Schack, Rüdiger

    2013-10-01

    In the quantum-Bayesian interpretation of quantum theory (or QBism), the Born rule cannot be interpreted as a rule for setting measurement-outcome probabilities from an objective quantum state. But if not, what is the role of the rule? In this paper, the argument is given that it should be seen as an empirical addition to Bayesian reasoning itself. Particularly, it is shown how to view the Born rule as a normative rule in addition to usual Dutch-book coherence. It is a rule that takes into account how one should assign probabilities to the consequences of various intended measurements on a physical system, but explicitly in terms of prior probabilities for and conditional probabilities consequent upon the imagined outcomes of a special counterfactual reference measurement. This interpretation is exemplified by representing quantum states in terms of probabilities for the outcomes of a fixed, fiducial symmetric informationally complete measurement. The extent to which the general form of the new normative rule implies the full state-space structure of quantum mechanics is explored.

  4. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its reliability, resolution, and uncertainty components for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
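
    The abstract does not spell out the score definitions, so the sketch below gives only the basic (undecomposed) forms: the Brier score, the Ignorance score in bits, and the standard kernel form of the CRPS for a single ensemble forecast. The toy forecasts, outcomes and ensemble members are made up.

```python
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probability p and binary outcome o."""
    return np.mean((p - o) ** 2)

def ignorance_score(p, o, eps=1e-12):
    """Mean negative log-likelihood (bits) assigned to the observed binary outcome."""
    prob_of_outcome = np.clip(np.where(o == 1, p, 1 - p), eps, 1.0)
    return -np.mean(np.log2(prob_of_outcome))

def crps_ensemble(ensemble, obs):
    """CRPS for one ensemble forecast via the kernel form E|X - y| - 0.5 E|X - X'|."""
    x = np.asarray(ensemble, dtype=float)
    return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

p = np.array([0.9, 0.7, 0.2, 0.1])   # forecast probabilities of precipitation occurrence
o = np.array([1, 0, 0, 1])           # observed occurrence
print("Brier score:     ", brier_score(p, o))
print("Ignorance score: ", ignorance_score(p, o))
print("CRPS (ensemble): ", crps_ensemble([1.2, 2.5, 0.8, 3.0, 1.9], obs=2.1))
```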

  5. Extended target recognition in cognitive radar networks.

    PubMed

    Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin

    2010-01-01

    We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar lines of sight (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar under amplitude fluctuations. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.

  6. Cost-effective solutions to maintaining smart grid reliability

    NASA Astrophysics Data System (ADS)

    Qin, Qiu

    As aging power systems increasingly operate closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. The computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous state simulation and discrete event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulted mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.

  7. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  8. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model

    PubMed Central

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul SF

    2015-01-01

    Background Traditional offline assessment of suicide probability is time consuming and difficult in convincing at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and potential to reach out to hidden individuals, yet little research has been focused on this specific field. Objective The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying high suicide possibility microblog users in China through profile and linguistic features extracted from Internet-based data. Methods There were nine hundred and nine Chinese microblog users that completed an Internet survey, and those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as one SD above the mean in each of the four subscale scores in the participant sample were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into two machine learning algorithms (SLR and RF) to train the model that aims to identify high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross validation; in which both training set and test set were generated under the stratified random sampling rule from the whole sample. There were three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric “Screening Efficiency” that were adopted to evaluate model effectiveness. Results Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%. Conclusions Individuals in China with high suicide probability are recognizable by profile and text-based information from microblogs. Although there is still much space to improve the performance of classification models in the future, this study may shed light on preliminary screening of risky individuals via machine learning algorithms, which can work side-by-side with expert scrutiny to increase efficiency in large-scale-surveillance of suicide probability from online social media. PMID:26543921
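
    As a minimal sketch of the evaluation pipeline described here, the code below runs logistic regression (standing in for the study's SLR classifier) and a random forest under stratified 5-fold cross-validation and reports precision, recall and F1. The features and the high-risk labels are synthetic; the study's "Screening Efficiency" metric is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(6)

# Synthetic stand-ins: profile/linguistic features and a "high suicide probability"
# label defined, as in the study, as scoring one SD above the sample mean.
X = rng.normal(size=(909, 20))
risk = X[:, :3].sum(axis=1) + rng.normal(0, 1.5, 909)
y = (risk > risk.mean() + risk.std()).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in [("SLR (stand-in)", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=cv)
    print(f"{name}: precision={precision_score(y, pred):.2f} "
          f"recall={recall_score(y, pred):.2f} F1={f1_score(y, pred):.2f}")
```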

  9. Identifying Chinese Microblog Users With High Suicide Probability Using Internet-Based Profile and Linguistic Features: Classification Model.

    PubMed

    Guan, Li; Hao, Bibo; Cheng, Qijin; Yip, Paul Sf; Zhu, Tingshao

    2015-01-01

    Traditional offline assessment of suicide probability is time consuming, and it is difficult to convince at-risk individuals to participate. Identifying individuals with high suicide probability through online social media has an advantage in its efficiency and potential to reach out to hidden individuals, yet little research has focused on this specific field. The objective of this study was to apply two classification models, Simple Logistic Regression (SLR) and Random Forest (RF), to examine the feasibility and effectiveness of identifying microblog users in China with high suicide probability through profile and linguistic features extracted from Internet-based data. Nine hundred and nine Chinese microblog users completed an Internet survey; those scoring one SD above the mean of the total Suicide Probability Scale (SPS) score, as well as those scoring one SD above the mean on each of the four subscales, were labeled as high-risk individuals, respectively. Profile and linguistic features were fed into the two machine learning algorithms (SLR and RF) to train models aimed at identifying high-risk individuals in general suicide probability and in its four dimensions. Models were trained and then tested by 5-fold cross-validation, in which both the training and test sets were generated from the whole sample by stratified random sampling. Three classic performance metrics (Precision, Recall, F1 measure) and a specifically defined metric, "Screening Efficiency", were adopted to evaluate model effectiveness. Classification performance was generally matched between SLR and RF. Given the best performance of the classification models, we were able to retrieve over 70% of the labeled high-risk individuals in overall suicide probability as well as in the four dimensions. Screening Efficiency of most models varied from 1/4 to 1/2. Precision of the models was generally below 30%. Individuals in China with high suicide probability are recognizable by profile and text-based information from microblogs. Although there is still much space to improve the performance of the classification models in the future, this study may shed light on preliminary screening of at-risk individuals via machine learning algorithms, which can work side-by-side with expert scrutiny to increase efficiency in large-scale surveillance of suicide probability from online social media.

  10. Staged decision making based on probabilistic forecasting

    NASA Astrophysics Data System (ADS)

    Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris

    2016-04-01

    Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared with deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One decision-support technique is the cost-loss approach, a risk-based method that defines whether or not to issue a warning or implement mitigation measures. Under the cost-loss method, a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. The cost-loss method is not widely used, because it is motivated by economic values only and is relatively static (no reasoning, a yes/no decision). Nevertheless, it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making probability-based decision making with the cost-loss method more applicable in practice. The exploration began by identifying other situations in which decisions are taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty, from known uncertainty to deep uncertainty. Based on the types of uncertainty, concepts for dealing with such situations and responses were analysed and the applicable concepts were chosen. From this analysis, the concepts of flexibility and robustness appeared to fit the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, and the final decision to implement is made based on the economic costs of the decisions and measures and the reduced effect of flooding. The more lead time there is in flood event management, the more damage can be reduced. With decisions based on probabilistic forecasts, partial decisions can be made earlier in time (at a lower probability) and can be scaled up or down later, when there is more certainty about whether the event will take place or not. Partial decisions are often cheaper, or shorten the final mitigation time at the moment when there is more certainty. The proposed method is tested on Stonehaven, on the Carron River in Scotland. Decisions to implement demountable defences in the town are currently made on a very short lead time because of the absence of certainty. Application showed that staged decision making is possible and gives the decision maker more time to respond to a situation. The decision maker is able to take a lower-regret decision under higher uncertainty, with fewer related negative consequences. Although it is not possible to quantify intangible effects, reducing these effects is part of the analysis. Above all, the proposed approach has been shown to be a possible improvement in economic terms and opens up possibilities for more flexible and robust decision making.
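
    The basic cost-loss rule described here ("warn when the cost-to-loss ratio does not exceed the forecast probability") is simple enough to show directly; the cost, loss and probability figures below are hypothetical, not values from the Stonehaven case.

```python
def issue_warning(cost, loss, flood_probability):
    """Risk-based (cost-loss) rule: act when cost/loss <= forecast probability of the event."""
    return cost / loss <= flood_probability

# Hypothetical figures: deploying demountable defences costs 20 k, avoidable damage is 200 k.
print(issue_warning(cost=20_000, loss=200_000, flood_probability=0.15))  # True  (0.10 <= 0.15)
print(issue_warning(cost=20_000, loss=200_000, flood_probability=0.05))  # False (0.10 >  0.05)
```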

  11. Patients' estimates of their sleep times: reliability and impact on diagnosis of obstructive sleep apnoea.

    PubMed

    Khor, Y H; Tolson, J; Churchward, T; Rochford, P; Worsnop, C

    2015-08-01

    Home polysomnography (PSG) is an alternative method for diagnosis of obstructive sleep apnoea (OSA). Some type 3 and 4 PSG devices do not monitor sleep and so rely on patients' estimates of total sleep time (TST). The aim was to compare patients' subjective estimates of sleep duration with objective measures in patients who underwent type 2 PSG for probable OSA. A prospective clinical audit was conducted of 536 consecutive patients of one of the authors between 2006 and 2013. A standard questionnaire was completed by the patients the morning after the home PSG to record the time the lights were turned off and the estimated times of sleep onset and offset. PSG was scored based on the guidelines of the American Academy of Sleep Medicine. Median estimated sleep latency (SL) was 20 min compared with 10 min for measured SL (P < 0.0001). There was also a significant difference between the estimated and measured sleep offset time (median difference = -1 min, P = 0.01). Estimated TST was significantly shorter than the measured TST (median difference = -18.5 min, P = 0.002). No factors were identified that affected patients' accuracy of sleep perception. Only 2% of patients had a change in their diagnosis of OSA based on the calculated apnoea-hypopnoea index. Overall, estimated TST in the patients with probable OSA was significantly shorter than measured TST, with significant individual variability. Collectively, inaccurate sleep time estimation did not result in a significant difference in the diagnosis of OSA. © 2015 Royal Australasian College of Physicians.

  12. Estimation of the limit of detection using information theory measures.

    PubMed

    Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago

    2014-01-31

    Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
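
    As a hedged illustration of the information-theoretic viewpoint (not the authors' estimator), the mutual information between the true state (blank vs. analyte) and the detector decision can be computed for a binary detection channel with a given prior and false positive/negative rates; note how identical error rates yield different amounts of extractable information for different priors, which is the point the abstract makes.

      import numpy as np

      def mutual_information_bits(p_analyte: float, p_false_pos: float, p_false_neg: float) -> float:
          """Mutual information I(X;Y) in bits for a binary detection channel."""
          # Joint distribution: rows = true state (blank, analyte), cols = decision (negative, positive).
          p = np.array([[(1 - p_analyte) * (1 - p_false_pos), (1 - p_analyte) * p_false_pos],
                        [p_analyte * p_false_neg,             p_analyte * (1 - p_false_neg)]])
          px = p.sum(axis=1, keepdims=True)   # prior over blank/analyte
          py = p.sum(axis=0, keepdims=True)   # marginal over detector decisions
          nz = p > 0
          return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

      # Same error rates, different priors -> different amounts of information on the source.
      print(mutual_information_bits(0.5, 0.05, 0.05))
      print(mutual_information_bits(0.01, 0.05, 0.05))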

  13. Deterministic versus evidence-based attitude towards clinical diagnosis.

    PubMed

    Soltani, Akbar; Moayyeri, Alireza

    2007-08-01

    Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating probabilities for that event to be related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances in medical decision making. While 'probabilistic or evidence-based' reasoning seems at first glance to involve more mathematical formulas, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and utilization of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests for refining probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
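
    A minimal sketch of the likelihood-ratio updating implied above, using Bayes' rule on the odds scale; the pretest probability and LR values are hypothetical.

      def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
          """Update a pretest probability with a test's likelihood ratio via Bayes' rule on odds."""
          pretest_odds = pretest_prob / (1 - pretest_prob)
          posttest_odds = pretest_odds * likelihood_ratio
          return posttest_odds / (1 + posttest_odds)

      # Hypothetical numbers: 30% pretest probability, positive test with LR+ = 8.
      print(round(posttest_probability(0.30, 8.0), 2))   # ~0.77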

  14. Relevance of the correlation between precipitation and the 0 °C isothermal altitude for extreme flood estimation

    NASA Astrophysics Data System (ADS)

    Zeimetz, Fraenz; Schaefli, Bettina; Artigue, Guillaume; García Hernández, Javier; Schleiss, Anton J.

    2017-08-01

    Extreme floods are commonly estimated with the help of design storms and hydrological models. In this paper, we propose a new method that takes into account the relationship between precipitation intensity (P) and air temperature (T) to account for potential snow accumulation and melt processes during the elaboration of design storms. The proposed method is based on a detailed analysis of this P-T relationship in the Swiss Alps. For this region, no upper precipitation intensity limit is detectable for increasing temperature. However, a relationship between the highest temperature measured before a precipitation event and the duration of the subsequent event could be identified. An explanation for this relationship is proposed here based on the temperature gradient measured before the precipitation events. The relevance of these results is discussed for an example of Probable Maximum Precipitation-Probable Maximum Flood (PMP-PMF) estimation for the high mountainous Mattmark dam catchment in the Swiss Alps. The proposed method to associate a critical air temperature with a PMP is easily transposable to similar alpine settings where meteorological soundings as well as ground temperature and precipitation measurements are available. In the future, the analyses presented here might be further refined by distinguishing between precipitation event types (frontal versus orographic).

  15. Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study

    PubMed Central

    Le Strat, Yann

    2017-01-01

    The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
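
    A minimal sketch of the week-level performance quantities listed above, computed from confusion-matrix counts; note that in the study the probability of detection is defined per simulated outbreak rather than per week, so the sensitivity below is only an approximation of that measure, and the counts are invented.

      def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
          """Weekly alarms vs. simulated outbreak weeks summarised as the usual rates."""
          sensitivity = tp / (tp + fn)          # week-level analogue of probability of detection
          specificity = tn / (tn + fp)
          fpr = fp / (fp + tn)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
          return {"FPR": fpr, "sensitivity": sensitivity, "specificity": specificity,
                  "PPV": ppv, "NPV": npv, "F1": f1}

      print(detection_metrics(tp=80, fp=40, tn=860, fn=20))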

  16. A Gibbs sampler for Bayesian analysis of site-occupancy data

    USGS Publications Warehouse

    Dorazio, Robert M.; Rodriguez, Daniel Taylor

    2012-01-01

    1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
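
    A hedged sketch of a Gibbs sampler for a probit site-occupancy model in the spirit described above (occurrence z_i ~ Bernoulli(Phi(x_i'beta)), detection y_ij | z_i = 1 ~ Bernoulli(Phi(w_ij'alpha))), using Albert-Chib latent-variable augmentation. The data are simulated, the normal priors are arbitrary, and this is not the authors' code.

      import numpy as np
      from scipy.stats import norm, truncnorm

      rng = np.random.default_rng(1)
      n_sites, n_surveys = 100, 3
      X = np.column_stack([np.ones(n_sites), rng.normal(size=n_sites)])   # site covariates
      W = rng.normal(size=(n_sites, n_surveys, 2))                        # survey covariates
      W[..., 0] = 1.0
      beta_true, alpha_true = np.array([0.2, 1.0]), np.array([0.0, 0.7])
      z_true = rng.random(n_sites) < norm.cdf(X @ beta_true)
      y = (rng.random((n_sites, n_surveys)) < norm.cdf(W @ alpha_true)) & z_true[:, None]

      def sample_coef(design, latent, prior_prec=0.01):
          """Draw regression coefficients given probit latent variables (normal prior)."""
          V = np.linalg.inv(design.T @ design + prior_prec * np.eye(design.shape[1]))
          return rng.multivariate_normal(V @ design.T @ latent, V)

      def trunc_normal(mean, positive):
          """Latent probit variables, truncated positive where the binary outcome is 1."""
          a = np.where(positive, -mean, -np.inf)
          b = np.where(positive, np.inf, -mean)
          return truncnorm.rvs(a, b, loc=mean, scale=1.0, random_state=rng)

      beta, alpha = np.zeros(2), np.zeros(2)
      z = y.any(axis=1).astype(float)          # sites with detections are certainly occupied
      for _ in range(2000):
          # 1. occupancy state at sites with no detections
          psi = norm.cdf(X @ beta)
          p_no_det = np.prod(1 - norm.cdf(W @ alpha), axis=1)
          post = psi * p_no_det / (psi * p_no_det + 1 - psi)
          undetected = ~y.any(axis=1)
          z[undetected] = (rng.random(undetected.sum()) < post[undetected]).astype(float)
          # 2. occurrence coefficients via latent probit variables
          u = trunc_normal(X @ beta, z.astype(bool))
          beta = sample_coef(X, u)
          # 3. detection coefficients, using surveys at occupied sites only
          occ = z.astype(bool)
          Wocc, yocc = W[occ].reshape(-1, 2), y[occ].reshape(-1)
          v = trunc_normal(Wocc @ alpha, yocc)
          alpha = sample_coef(Wocc, v)
      print(beta, alpha)   # last draws; in practice, average draws after burn-in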

  17. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker

    PubMed Central

    Iasonos, Alexia; Chapman, Paul B.; Satagopan, Jaya M.

    2016-01-01

    There is an increased interest in finding predictive biomarkers that can guide treatment options for both mutation carriers and non-carriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time to event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a Proportional Hazards regression model is commonly used as a measure of variation in treatment benefit. While this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus non-carriers. We illustrate the use and interpretation of the proposed measures using data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. PMID:27141007
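
    As a purely numerical illustration (the survival probabilities below are made up, and the paper's exact estimands may be defined differently), variation in treatment benefit on the probability scale can be read off as follows.

      # Hypothetical survival probabilities at a fixed time point (e.g. 3 years).
      surv = {("carrier", "treated"): 0.80, ("carrier", "control"): 0.60,
              ("noncarrier", "treated"): 0.65, ("noncarrier", "control"): 0.62}

      def absolute_benefit(group: str) -> float:
          """Treatment benefit within a biomarker group as a difference in survival probability."""
          return surv[(group, "treated")] - surv[(group, "control")]

      tb_carrier = absolute_benefit("carrier")             # 0.20
      tb_noncarrier = absolute_benefit("noncarrier")       # 0.03
      excess_absolute_risk = tb_carrier - tb_noncarrier    # differential benefit: 0.17
      relative_risk_carrier = (1 - surv[("carrier", "treated")]) / (1 - surv[("carrier", "control")])
      print(tb_carrier, tb_noncarrier, excess_absolute_risk, round(relative_risk_carrier, 2))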

  18. Inhomogeneous point-process entropy: An instantaneous measure of complexity in discrete systems

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2014-05-01

    Measures of entropy have been widely used to characterize complexity, particularly in physiological dynamical systems modeled in discrete time. Current approaches associate these measures to finite single values within an observation window, thus not being able to characterize the system evolution at each moment in time. Here, we propose a new definition of approximate and sample entropy based on the inhomogeneous point-process theory. The discrete time series is modeled through probability density functions, which characterize and predict the time until the next event occurs as a function of the past history. Laguerre expansions of the Wiener-Volterra autoregressive terms account for the long-term nonlinear information. As the proposed measures of entropy are instantaneously defined through probability functions, the novel indices are able to provide instantaneous tracking of the system complexity. The new measures are tested on synthetic data, as well as on real data gathered from heartbeat dynamics of healthy subjects and patients with heart failure and gait recordings from short walks of young and elderly subjects. Results show that instantaneous complexity is able to effectively track the system dynamics and is not affected by statistical noise properties.

  19. Analysis of the trajectory of Drosophila melanogaster in a circular open field arena.

    PubMed

    Valente, Dan; Golani, Ilan; Mitra, Partha P

    2007-10-24

    Obtaining a complete phenotypic characterization of a freely moving organism is a difficult task, yet such a description is desired in many neuroethological studies. Many metrics currently used in the literature to describe locomotor and exploratory behavior are typically based on average quantities or subjectively chosen spatial and temporal thresholds. All of these measures are relatively coarse-grained in the time domain. It is advantageous, however, to employ metrics based on the entire trajectory that an organism takes while exploring its environment. To characterize the locomotor behavior of Drosophila melanogaster, we used a video tracking system to record the trajectory of a single fly walking in a circular open field arena. The fly was tracked for two hours. Here, we present techniques with which to analyze the motion of the fly in this paradigm, and we discuss the methods of calculation. The measures we introduce are based on spatial and temporal probability distributions and utilize the entire time-series trajectory of the fly, thus emphasizing the dynamic nature of locomotor behavior. Marginal and joint probability distributions of speed, position, segment duration, path curvature, and reorientation angle are examined and related to the observed behavior. The measures discussed in this paper provide a detailed profile of the behavior of a single fly and highlight the interaction of the fly with the environment. Such measures may serve as useful tools in any behavioral study in which the movement of a fly is an important variable and can be incorporated easily into many setups, facilitating high-throughput phenotypic characterization.
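
    A hedged sketch (not the authors' tracking pipeline) of how marginal distributions of speed and reorientation angle can be computed from an (x, y) trajectory sampled at a fixed rate; the 30 Hz rate and random-walk trajectory are stand-ins for real tracking data.

      import numpy as np

      def trajectory_distributions(x, y, dt):
          """Marginal (histogram) distributions of speed and reorientation angle."""
          dx, dy = np.diff(x), np.diff(y)
          speed = np.hypot(dx, dy) / dt
          heading = np.arctan2(dy, dx)
          # Reorientation angle between successive steps, wrapped to (-pi, pi].
          reorient = np.angle(np.exp(1j * np.diff(heading)))
          speed_hist = np.histogram(speed, bins=50, density=True)
          angle_hist = np.histogram(reorient, bins=36, range=(-np.pi, np.pi), density=True)
          return speed_hist, angle_hist

      rng = np.random.default_rng(2)
      t = np.arange(0, 60.0, 1 / 30)                       # 60 s of hypothetical 30 Hz tracking
      x = np.cumsum(rng.normal(size=t.size))
      y = np.cumsum(rng.normal(size=t.size))
      (speed_density, _), (angle_density, _) = trajectory_distributions(x, y, dt=1 / 30)
      print(speed_density.shape, angle_density.shape)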

  20. The impact of differences in EQ-5D and SF-6D utility scores on the acceptability of cost-utility ratios: results across five trial-based cost-utility studies.

    PubMed

    Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies

    2010-01-01

    This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have an impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular and pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, while in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as a measure of effectiveness in economic evaluations lies in the comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of the use of incremental cost-utility ratios for decision-making.
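
    As an illustration of how the probability that an incremental cost-utility ratio is acceptable at a threshold can be obtained, the sketch below bootstraps synthetic patient-level incremental costs and QALYs and evaluates the net monetary benefit; the numbers and the 50,000-per-QALY threshold are arbitrary and do not come from the five trials.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 150
      # Synthetic patient-level incremental data (treatment minus control).
      delta_cost = rng.normal(2000, 4000, n)           # currency units
      delta_qaly_eq5d = rng.normal(0.04, 0.15, n)      # QALYs from EQ-5D utilities
      delta_qaly_sf6d = rng.normal(0.03, 0.10, n)      # QALYs from SF-6D utilities

      def prob_acceptable(d_cost, d_qaly, threshold, n_boot=5000):
          """P(mean net monetary benefit > 0) at a willingness-to-pay threshold, via bootstrap."""
          idx = rng.integers(0, len(d_cost), size=(n_boot, len(d_cost)))
          nmb = threshold * d_qaly[idx].mean(axis=1) - d_cost[idx].mean(axis=1)
          return (nmb > 0).mean()

      for label, dq in [("EQ-5D", delta_qaly_eq5d), ("SF-6D", delta_qaly_sf6d)]:
          print(label, prob_acceptable(delta_cost, dq, threshold=50_000))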

  1. [Impact of water pollution risk in water transfer project based on fault tree analysis].

    PubMed

    Liu, Jian-Chang; Zhang, Wei; Wang, Li-Min; Li, Dai-Qing; Fan, Xiu-Ying; Deng, Hong-Bing

    2009-09-15

    The methods to assess water pollution risk for medium water transfer are gradually being explored. The event-nature-proportion method was developed to evaluate the probability of a single event. Fault tree analysis, building on the single-event calculations, was employed to evaluate the extent of the overall water pollution risk for the channel water body. The result indicates that the risk to the channel water body from pollutants originating in towns and villages along the line of the water transfer project is high, with a probability of 0.373, which would increase pollution of the channel water body by 64.53 mg/L COD, 4.57 mg/L NH4(+)-N and 0.066 mg/L volatile phenol, respectively. The measurement of fault probability on the basis of the proportion method proved useful in assessing water pollution risk under considerable uncertainty.
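
    A generic, hedged sketch of how independent basic-event probabilities combine through OR and AND gates in a fault tree; apart from the 0.373 figure quoted above, the probabilities are invented and the tree structure is not the one used in the study.

      import math

      def or_gate(probs):
          """Top event occurs if any independent basic event occurs."""
          return 1 - math.prod(1 - p for p in probs)

      def and_gate(probs):
          """Top event occurs only if all independent basic events occur."""
          return math.prod(probs)

      # Hypothetical branch probabilities for illustration only.
      point_sources = or_gate([0.20, 0.15])             # e.g. two independent discharge events
      overall_risk = or_gate([point_sources, 0.373])    # combined with the 0.373 branch from the abstract
      print(round(point_sources, 3), round(overall_risk, 3))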

  2. High probability neurotransmitter release sites represent an energy efficient design

    PubMed Central

    Lu, Zhongmin; Chouhan, Amit K.; Borycz, Jolanta A.; Lu, Zhiyuan; Rossano, Adam J; Brain, Keith L.; Zhou, You; Meinertzhagen, Ian A.; Macleod, Gregory T.

    2016-01-01

    Nerve terminals contain multiple sites specialized for the release of neurotransmitters. Release usually occurs with low probability, a design thought to confer many advantages. High probability release sites are not uncommon but their advantages are not well understood. Here we test the hypothesis that high probability release sites represent an energy efficient design. We examined release site probabilities and energy efficiency at the terminals of two glutamatergic motor neurons synapsing on the same muscle fiber in Drosophila larvae. Through electrophysiological and ultrastructural measurements we calculated release site probabilities to differ considerably between terminals (0.33 vs. 0.11). We estimated the energy required to release and recycle glutamate from the same measurements. The energy required to remove calcium and sodium ions subsequent to nerve excitation was estimated through microfluorimetric and morphological measurements. We calculated energy efficiency as the number of glutamate molecules released per ATP molecule hydrolyzed, and high probability release site terminals were found to be more efficient (0.13 vs. 0.06). Our analytical model indicates that energy efficiency is optimal (~0.15) at high release site probabilities (~0.76). As limitations in energy supply constrain neural function, high probability release sites might ameliorate such constraints by demanding less energy. Energy efficiency can be viewed as one aspect of nerve terminal function, in balance with others, because high efficiency terminals depress significantly during episodic bursts of activity. PMID:27593375

  3. A statistical method for assessing peptide identification confidence in accurate mass and time tag proteomics

    PubMed Central

    Stanley, Jeffrey R.; Adkins, Joshua N.; Slysz, Gordon W.; Monroe, Matthew E.; Purvine, Samuel O.; Karpievitch, Yuliya V.; Anderson, Gordon A.; Smith, Richard D.; Dabney, Alan R.

    2011-01-01

    Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, as this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT tag Confidence (STAC). STAC additionally provides a Uniqueness Probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download as both a command line and a Windows graphical application. PMID:21692516

  4. Investigating the role of Fuzzy as confirmatory tool for service quality assessment (Case study: Comparison of Fuzzy SERVQUAL and SERVQUAL in hotel service evaluation)

    NASA Astrophysics Data System (ADS)

    Wahyudi, R. D.

    2017-11-01

    The problem arose because some indicators that are assessed qualitatively had been discussed in the engineering field. Qualitative assessment is now used on certain occasions, including in engineering, for instance in the assessment of service satisfaction. The understanding of the definition of satisfaction probably causes bias if customers have their own definitions of a satisfactory level of service. Therefore, the use of fuzzy logic in SERVQUAL as a service satisfaction measurement tool would probably be useful. This paper aimed to investigate the role of fuzzy logic in SERVQUAL by comparing the measurement results of SERVQUAL and fuzzy SERVQUAL for a case study of hotel service evaluation. Based on the data processing, initial results showed that there was no significant difference between them. Thus, either the implementation of fuzzy SERVQUAL in a different case or a study of the role of fuzzy logic in SERVQUAL would be an interesting topic for further discussion.

  5. Robust Bayesian decision theory applied to optimal dosage.

    PubMed

    Abraham, Christophe; Daurès, Jean-Pierre

    2004-04-15

    We give a model for constructing a utility function u(theta,d) in a dose prescription problem, where theta and d denote the patient's state of health and the dose, respectively. The construction of u is based on the conditional probabilities of several variables. These probabilities are described by logistic models. Obviously, u is only an approximation of the true utility function, and that is why we investigate the sensitivity of the final decision with respect to the utility function. We construct a class of utility functions from u and approximate the set of all Bayes actions associated with that class. Then, we measure the sensitivity as the greatest difference between the expected utilities of two Bayes actions. Finally, we apply these results to weighing up a chemotherapy treatment for lung cancer. This application emphasizes the importance of measuring robustness through the utility of decisions rather than through the decisions themselves. Copyright 2004 John Wiley & Sons, Ltd.

  6. Fusion-fission Study at JAEA for Heavy-element Synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  7. In-beam fission study at JAEA for heavy element synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Ackermann, D.; Aritomo, Y.; Comas, V. F.; Düllmann, Ch. E.; Heinz, S.; Heredia, J. A.; Heßberger, F. P.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, M.; Mann, R.; Mitsuoka, S.; Nishinaka, I.; Ohtsuki, T.; Saro, S.; Schädel, M.; Popeko, A. G.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A.

    2013-04-01

    Fission fragment mass distributions were measured in heavy-ion induced fission using a 238U target nucleus. The mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis. Evaporation residue cross sections were calculated with a statistical model in the reactions of 30Si+238U and 34S+238U using the obtained fusion probability in the entrance channel. The results agree with the measured cross sections of 263,264Sg and 267,268Hs, produced by 30Si+238U and 34S+238U, respectively. It is also suggested that sub-barrier energies can be used for heavy element synthesis.

  8. Measuring change in health status of older adults at the population level: the transition probability model.

    PubMed

    Moineddin, Rahim; Nie, Jason X; Wang, Li; Tracy, C Shawn; Upshur, Ross E G

    2010-11-09

    The current demographic transition will lead to increasing demands on health services. However, debate exists as to the role age plays relative to co-morbidity in health services utilization. While age has been identified as a critical factor in health services utilization, utilization is not simply an outcome of ill health, nor is it an inevitable outcome of aging. Most health service utilization studies assess utilization at one point in time and do not examine transitions in health service utilization. We sought to measure health services utilization and to investigate patterns in the transition between levels of utilization and the outcomes associated with different levels of utilization. We conducted a population-based retrospective cohort study of all Ontario residents aged 65+ eligible for public healthcare coverage from January 1998 to December 2006. The main outcome measure was the total number of utilization events, computed by summing, on a per annum basis, the number of family physician visits, specialist visits, Emergency Department visits, drug claims, lab claims, X-rays, CT scans, MRI scans, and inpatient admissions. Three categories of utilization were created: low, moderate, and high. There is heterogeneity in health services utilization across the late lifespan. Utilization increased consistently over the 9-year study period. The probability of remaining in the high utilization category when the person was in the high category the previous year was more than 0.70 for both males and females and for all age groups. Overall healthcare utilization increases more rapidly among the high users compared to the low users. There was negligible probability of moving from the high to the low utilization category. Probability of death increased exponentially with age. Older adults in the low utilization category had the lowest probability of death. The number of male nonagenarians increased more rapidly than that of female nonagenarians. There are measurable and identifiable differences in the patterns of health services utilization among older adults. These data will permit clinicians and policy makers to tailor interventions appropriate to the risk class of patients.
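
    A minimal sketch (with synthetic data, not the Ontario cohort) of how year-to-year transition probabilities between low/moderate/high utilization categories can be estimated by counting transitions and normalizing rows.

      import numpy as np

      states = ["low", "moderate", "high"]
      rng = np.random.default_rng(5)
      # Synthetic panel: utilization category (0, 1, 2) per person per year.
      panel = rng.integers(0, 3, size=(1000, 9))

      counts = np.zeros((3, 3))
      for person in panel:
          for a, b in zip(person[:-1], person[1:]):
              counts[a, b] += 1
      transition = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
      print(np.round(transition, 3))   # e.g. transition[2, 2] ~ P(stay high | high last year)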

  9. Risk profiles associated with postnatal depressive symptoms among women in a public sector hospital in Mexico: the role of sociodemographic and psychosocial factors.

    PubMed

    de Castro, Filipa; Place, Jean Marie S; Billings, Deborah L; Rivera, Leonor; Frongillo, Edward A

    2015-06-01

    This study examined the association between postnatal depressive symptoms and a set of demographic and psychosocial factors among 604 women attending a public hospital for postnatal care in Mexico City. Specific profiles of women that would indicate an increased probability for developing postnatal depression (PND) based on discrete combinations of risk and protective factors were generated. In a logistic model, followed by the estimation of predicted probabilities, we examined the association between depressive symptomatology and psychosocial factors: low social support, unplanned pregnancies, history of depression, and exposure to moderate or severe intimate partner violence (IPV) during pregnancy. Postnatal depressive symptomatology was reported by 10.6 % of the women, as measured by scores at 12 or above on the Edinburgh Postnatal Depression Scale. The cumulative probability of presenting PND in the simultaneous presence of the psychosocial factors was 67.0 %; however, this could be reduced to 5.5 % through preventive measures that work to eliminate low social support, unplanned pregnancy, and exposure to severe IPV during pregnancy. Early identification of psychosocial risk factors, specifically low social support, unplanned pregnancies, history of depression, and exposure to violence during pregnancy, is recommended.

  10. Comment on "Measurements without probabilities in the final state proposal"

    NASA Astrophysics Data System (ADS)

    Cohen, Eliahu; Nowakowski, Marcin

    2018-04-01

    The final state proposal [G. T. Horowitz and J. M. Maldacena, J. High Energy Phys. 04 (2004) 008, 10.1088/1126-6708/2004/04/008] is an attempt to relax the apparent tension between string theory and semiclassical arguments regarding the unitarity of black hole evaporation. Authors Bousso and Stanford [Phys. Rev. D 89, 044038 (2014), 10.1103/PhysRevD.89.044038] analyze thought experiments where an infalling observer first verifies the entanglement between early and late Hawking modes and then verifies the interior purification of the same Hawking particle. They claim that "probabilities for outcomes of these measurements are not defined" and therefore suggest that "the final state proposal does not offer a consistent alternative to the firewall hypothesis." We show, in contrast, that one may define all the relevant probabilities based on the so-called ABL rule [Y. Aharonov, P. G. Bergmann, and J. L. Lebowitz, Phys. Rev. 134, B1410 (1964), 10.1103/PhysRev.134.B1410], which is better suited for this task than the decoherence functional. We thus assert that the analysis of Bousso and Stanford cannot yet rule out the final state proposal.
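
    For reference, the Aharonov-Bergmann-Lebowitz (ABL) rule invoked here assigns probabilities to the outcomes of an intermediate measurement given both the pre-selected state |psi_i> and the post-selected state |psi_f>; with projectors Pi_j onto the outcomes it reads (standard form, not specific to the black-hole setting):

      P(a_j \mid \psi_i, \psi_f) = \frac{\left|\langle \psi_f | \Pi_j | \psi_i \rangle\right|^{2}}{\sum_k \left|\langle \psi_f | \Pi_k | \psi_i \rangle\right|^{2}}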

  11. Probabilistic liquefaction hazard analysis at liquefied sites of 1956 Dunaharaszti earthquake, in Hungary

    NASA Astrophysics Data System (ADS)

    Győri, Erzsébet; Gráczer, Zoltán; Tóth, László; Bán, Zoltán; Horváth, Tibor

    2017-04-01

    Liquefaction potential evaluations are generally made to assess the hazard from specific scenario earthquakes. These evaluations may estimate the potential in a binary fashion (yes/no), define a factor of safety or predict the probability of liquefaction given a scenario event. Usually the level of ground shaking is obtained from the results of PSHA. Although it is determined probabilistically, a single level of ground shaking is selected and used within the liquefaction potential evaluation. In contrast, fully probabilistic liquefaction potential assessment methods provide a complete picture of liquefaction hazard, namely taking into account the joint probability distribution of PGA and magnitude of earthquake scenarios, both of which are key inputs in the stress-based simplified methods. Kramer and Mayfield (2007) developed a fully probabilistic liquefaction potential evaluation method using a performance-based earthquake engineering (PBEE) framework. The results of the procedure are a direct estimate of the return period of liquefaction and liquefaction hazard curves as a function of depth. The method combines the disaggregation matrices computed for different exceedance frequencies during probabilistic seismic hazard analysis with one of the recent models for the conditional probability of liquefaction. We have developed software for the assessment of performance-based liquefaction triggering on the basis of the Kramer and Mayfield method. Originally, the SPT-based probabilistic method of Cetin et al. (2004) was built into the procedure of Kramer and Mayfield to compute the conditional probability; however, there is no professional consensus about its applicability. Therefore we have included not only Cetin's method but also the SPT-based procedure of Idriss and Boulanger (2012) and the CPT-based procedure of Boulanger and Idriss (2014) in our computer program. In 1956, a damaging earthquake of magnitude 5.6 occurred in Dunaharaszti, Hungary. Its epicenter was located about 5 km from the southern boundary of Budapest. The quake caused serious damage in the epicentral area and in the southern districts of the capital. The epicentral area of the earthquake is located along the Danube River. Sand boils were observed in some locations, indicating the occurrence of liquefaction. Because their exact locations were recorded at the time of the earthquake, in situ geotechnical measurements (CPT and SPT) could be performed at two sites (Dunaharaszti and Taksony). The different types of measurements enabled probabilistic liquefaction hazard computations at the two studied sites. We have compared the return periods of liquefaction computed using the different built-in simplified stress-based methods.

  12. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
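
    A hedged sketch of the Monte-Carlo idea described above: perturb a measured index with random error, reclassify each draw, and report the fraction that falls outside the class of the unperturbed value; the Gaussian error model and class boundaries below are hypothetical, not the study's calibrated ones.

      import numpy as np

      def misclassification_probability(index_value, sd_error, boundaries, n_draws=100_000, seed=6):
          """Fraction of error-perturbed index values classified outside the 'true' class."""
          rng = np.random.default_rng(seed)
          noisy_class = np.digitize(index_value + rng.normal(0.0, sd_error, n_draws), boundaries)
          true_class = np.digitize(index_value, boundaries)
          return (noisy_class != true_class).mean()

      # Hypothetical 0-1 index with class boundaries at 0.2/0.4/0.6/0.8 (five status classes).
      print(misclassification_probability(0.63, sd_error=0.05, boundaries=[0.2, 0.4, 0.6, 0.8]))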

  13. Provable classically intractable sampling with measurement-based computation in constant time

    NASA Astrophysics Data System (ADS)

    Sanders, Stephen; Miller, Jacob; Miyake, Akimasa

    We present a constant-time measurement-based quantum computation (MQC) protocol to perform a classically intractable sampling problem. We sample from the output probability distribution of a subclass of the instantaneous quantum polynomial time circuits introduced by Bremner, Montanaro and Shepherd. In contrast with the usual circuit model, our MQC implementation includes additional randomness due to byproduct operators associated with the computation. Despite this additional randomness we show that our sampling task cannot be efficiently simulated by a classical computer. We extend previous results to verify the quantum supremacy of our sampling protocol efficiently using only single-qubit Pauli measurements.

  14. Slider--maximum use of probability information for alignment of short sequence reads and SNP detection.

    PubMed

    Malhis, Nawar; Butterfield, Yaron S N; Ester, Martin; Jones, Steven J M

    2009-01-01

    A plethora of alignment tools have been created that are designed to best fit different types of alignment conditions. While some of these are made for aligning Illumina Sequence Analyzer reads, none of these are fully utilizing its probability (prb) output. In this article, we will introduce a new alignment approach (Slider) that reduces the alignment problem space by utilizing each read base's probabilities given in the prb files. Compared with other aligners, Slider has higher alignment accuracy and efficiency. In addition, given that Slider matches bases with probabilities other than the most probable, it significantly reduces the percentage of base mismatches. The result is that its SNP predictions are more accurate than other SNP prediction approaches used today that start from the most probable sequence, including those using base quality.

  15. A Learning-Based Approach for IP Geolocation

    NASA Astrophysics Data System (ADS)

    Eriksson, Brian; Barford, Paul; Sommers, Joel; Nowak, Robert

    The ability to pinpoint the geographic location of IP hosts is compelling for applications such as on-line advertising and network attack diagnosis. While prior methods can accurately identify the location of hosts in some regions of the Internet, they produce erroneous results when the delay or topology measurement on which they are based is limited. The hypothesis of our work is that the accuracy of IP geolocation can be improved through the creation of a flexible analytic framework that accommodates different types of geolocation information. In this paper, we describe a new framework for IP geolocation that reduces to a machine-learning classification problem. Our methodology considers a set of lightweight measurements from a set of known monitors to a target, and then classifies the location of that target based on the most probable geographic region given probability densities learned from a training set. For this study, we employ a Naive Bayes framework that has low computational complexity and enables additional environmental information to be easily added to enhance the classification process. To demonstrate the feasibility and accuracy of our approach, we test IP geolocation on over 16,000 routers given ping measurements from 78 monitors with known geographic placement. Our results show that the simple application of our method improves geolocation accuracy for over 96% of the nodes identified in our data set, with on average accuracy 70 miles closer to the true geographic location versus prior constraint-based geolocation. These results highlight the promise of our method and indicate how future expansion of the classifier can lead to further improvements in geolocation accuracy.
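
    A minimal sketch of the Naive Bayes classification step, under the assumption of Gaussian per-region delay densities; the paper learns its densities from training measurements, and the monitors, regions, and numbers below are invented.

      import numpy as np
      from scipy.stats import norm

      # Hypothetical training summaries: per region, a prior and per-monitor mean/sd of delay (ms).
      regions = {"region_A": {"prior": 0.5, "mean": np.array([20.0, 80.0, 150.0]), "sd": np.array([5.0, 10.0, 20.0])},
                 "region_B": {"prior": 0.5, "mean": np.array([90.0, 30.0, 140.0]), "sd": np.array([15.0, 5.0, 25.0])}}

      def classify(delays):
          """Return the region with the highest naive-Bayes log posterior."""
          scores = {}
          for name, r in regions.items():
              log_lik = norm.logpdf(delays, loc=r["mean"], scale=r["sd"]).sum()  # independence assumption
              scores[name] = np.log(r["prior"]) + log_lik
          return max(scores, key=scores.get), scores

      target_delays = np.array([22.0, 75.0, 160.0])   # hypothetical ping measurements from 3 monitors
      print(classify(target_delays)[0])               # -> "region_A"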

  16. PROCOS: computational analysis of protein-protein complexes.

    PubMed

    Fink, Florian; Hochrein, Jochen; Wolowski, Vincent; Merkl, Rainer; Gronwald, Wolfram

    2011-09-01

    One of the main challenges in protein-protein docking is a meaningful evaluation of the many putative solutions. Here we present a program (PROCOS) that calculates a probability-like measure to be native for a given complex. In contrast to scores often used for analyzing complex structures, the calculated probabilities offer the advantage of providing a fixed range of expected values. This will allow, in principle, the comparison of models corresponding to different targets that were solved with the same algorithm. Judgments are based on distributions of properties derived from a large database of native and false complexes. For complex analysis PROCOS uses these property distributions of native and false complexes together with a support vector machine (SVM). PROCOS was compared to the established scoring schemes of ZRANK and DFIRE. Employing a set of experimentally solved native complexes, high probability values above 50% were obtained for 90% of these structures. Next, the performance of PROCOS was tested on the 40 binary targets of the Dockground decoy set, on 14 targets of the RosettaDock decoy set and on 9 targets that participated in the CAPRI scoring evaluation. Again the advantage of using a probability-based scoring system becomes apparent and a reasonable number of near native complexes was found within the top ranked complexes. In conclusion, a novel fully automated method is presented that allows the reliable evaluation of protein-protein complexes. Copyright © 2011 Wiley Periodicals, Inc.

  17. A country-wide probability sample of public attitudes toward stuttering in Portugal.

    PubMed

    Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T

    2017-06-01

    Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. Published research is unavailable on public attitudes toward stuttering in Portugal as well as a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Probabilistic Risk Analysis of Run-up and Inundation in Hawaii due to Distant Tsunamis

    NASA Astrophysics Data System (ADS)

    Gica, E.; Teng, M. H.; Liu, P. L.

    2004-12-01

    Risk assessment of natural hazards usually includes two aspects, namely, the probability of the natural hazard occurrence and the degree of damage caused by the natural hazard. Our current study is focused on the first aspect, i.e., the development and evaluation of a methodology that can predict the probability of coastal inundation due to distant tsunamis in the Pacific Basin. The calculation of the probability of tsunami inundation could be a simple statistical problem if a sufficiently long record of field data on inundation was available. Unfortunately, such field data are very limited in the Pacific Basin due to the reason that field measurement of inundation requires the physical presence of surveyors on site. In some areas, no field measurements were ever conducted in the past. Fortunately, there are more complete and reliable historical data on earthquakes in the Pacific Basin partly because earthquakes can be measured remotely. There are also numerical simulation models such as the Cornell COMCOT model that can predict tsunami generation by an earthquake, propagation in the open ocean, and inundation onto a coastal land. Our objective is to develop a methodology that can link the probability of earthquakes in the Pacific Basin with the inundation probability in a coastal area. The probabilistic methodology applied here involves the following steps: first, the Pacific Rim is divided into blocks of potential earthquake sources based on the past earthquake record and fault information. Then the COMCOT model is used to predict the inundation at a distant coastal area due to a tsunami generated by an earthquake of a particular magnitude in each source block. This simulation generates a response relationship between the coastal inundation and an earthquake of a particular magnitude and location. Since the earthquake statistics is known for each block, by summing the probability of all earthquakes in the Pacific Rim, the probability of the inundation in a coastal area can be determined through the response relationship. Although the idea of the statistical methodology applied here is not new, this study is the first to apply it to study the probability of inundation caused by earthquake-generated distant tsunamis in the Pacific Basin. As a case study, the methodology is applied to predict the tsunami inundation risk in Hilo Bay in Hawaii. Since relatively more field data on tsunami inundation are available for Hilo Bay, this case study can help to evaluate the applicability of the methodology for predicting tsunami inundation risk in the Pacific Basin. Detailed results will be presented at the AGU meeting.

  19. Description of the lower jaws of Baculites from the Upper Cretaceous U.S. Western Interior

    NASA Astrophysics Data System (ADS)

    Larson, Neal L.; Landman, Neil H.

    2017-03-01

    We report the discovery of lower jaws of Baculites (Ammonoidea) from the Upper Cretaceous U.S. Western Interior. In the lower Campanian Smoky Hill Chalk Member of the Niobrara Chalk of Kansas, most of the jaws occur as isolated elements. Based on their age, they probably belong to Baculites sp. (smooth). They conform to the description of rugaptychus, and are ornamented with coarse rugae on their ventral side. One specimen is preserved inside a small fecal pellet that was probably produced by a fish. Another specimen occurs inside in a crushed body chamber near the aperture and is probably in situ. Three small structures are present immediately behind the jaw and may represent the remains of the gills. In the lower Maastrichtian Pierre Shale of Wyoming, two specimens of Baculites grandis contain lower jaws inside their body chambers, and are probably in situ. In both specimens, the jaws are oriented at an acute angle to the long axis of the shell, with their anterior ends pointing toward the dorsum. One of the jaws is folded into a U-shape, which probably approximates the shape of the jaw during life. Based on the measurements of the jaws and the shape of the shell, the jaws could not have touched the sides of the shell even if they were splayed out, implying that they could not have effectively served as opercula. Instead, in combination with the upper jaws and radula, they constituted the buccal apparatus that collected and conveyed food to the esophagus.

  20. Could our pretest probabilities become evidence based? A prospective survey of hospital practice.

    PubMed

    Richardson, W Scott; Polashenski, Walter A; Robbins, Brett W

    2003-03-01

    We sought to measure the proportion of patients on our clinical service who presented with clinical problems for which research evidence was available to inform estimates of pretest probability. We also aimed to discern whether any of this evidence was of sufficient quality that we would want to use it for clinical decision making. Prospective, consecutive case series and literature survey. Inpatient medical service of a university-affiliated Veterans' Affairs hospital in south Texas. Patients admitted during the 3 study months for diagnostic evaluation. Patients' active clinical problems were identified prospectively and recorded at the time of discharge, transfer, or death. We electronically searched medline and hand-searched bibliographies to find citations that reported research evidence about the frequency of underlying diseases that cause these clinical problems. We critically appraised selected citations and ranked them on a hierarchy of evidence. We admitted 122 patients for diagnostic evaluation, in whom we identified 45 different principal clinical problems. For 35 of the 45 problems (78%; 95% confidence interval [95% CI], 66% to 90%), we found citations that qualified as disease probability evidence. Thus, 111 of our 122 patients (91%; 95% CI, 86% to 96%) had clinical problems for which evidence was available in the medical literature. During 3 months on our hospital medicine service, almost all of the patients admitted for diagnostic evaluation had clinical problems for which evidence is available to guide our estimates of pretest probability. If confirmed by others, these data suggest that clinicians' pretest probabilities could become evidence based.

  1. Using a Betabinomial distribution to estimate the prevalence of adherence to physical activity guidelines among children and youth.

    PubMed

    Garriguet, Didier

    2016-04-01

    Estimates of the prevalence of adherence to physical activity guidelines in the population are generally the result of averaging individual probability of adherence based on the number of days people meet the guidelines and the number of days they are assessed. Given this number of active and inactive days (days assessed minus days active), the conditional probability of meeting the guidelines that has been used in the past is a Beta(1 + active days, 1 + inactive days) distribution, assuming the probability p of a day being active is bounded by 0 and 1 and averages 50%. A change in the assumption about the distribution of p is required to better match the discrete nature of the data and to better assess the probability of adherence when the percentage of active days in the population differs from 50%. Using accelerometry data from the Canadian Health Measures Survey, the probability of adherence to physical activity guidelines is estimated using a conditional probability, given the number of active and inactive days, distributed as a Betabinomial(n, α + active days, β + inactive days), assuming that p is randomly distributed as Beta(α, β) where the parameters α and β are estimated by maximum likelihood. The resulting Betabinomial distribution is discrete. For children aged 6 or older, the probability of meeting physical activity guidelines 7 out of 7 days is similar to published estimates. For pre-schoolers, the Betabinomial distribution yields higher estimates of adherence to the guidelines than the Beta distribution, in line with the probability of being active on any given day. In estimating the probability of adherence to physical activity guidelines, the Betabinomial distribution has several advantages over the previously used Beta distribution. It is a discrete distribution and maximizes the richness of accelerometer data.
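
    A hedged numerical sketch of the calculation described: the posterior-predictive probability of being active on 7 of 7 days under the Beta-binomial, compared with a Beta(1 + active, 1 + inactive) based estimate; the prior parameters and day counts below are illustrative, not the maximum-likelihood values fitted in the paper.

      from scipy.stats import betabinom, beta as beta_dist

      # Illustrative prior parameters (in the paper, alpha and beta are fit by maximum likelihood).
      a, b = 3.0, 2.0
      active, inactive = 4, 2        # e.g. 4 active days out of 6 valid wear days

      # Posterior-predictive probability of meeting the guideline on all 7 of 7 days.
      p_7of7_betabinom = betabinom(7, a + active, b + inactive).pmf(7)

      # A Beta(1 + active, 1 + inactive) based estimate for comparison, read as E[p^7]
      # (one plausible reading of the earlier approach; the paper may define it differently).
      p_7of7_beta = beta_dist(1 + active, 1 + inactive).moment(7)
      print(round(float(p_7of7_betabinom), 3), round(float(p_7of7_beta), 3))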

  2. Measurement of absolute gamma emission probabilities

    NASA Astrophysics Data System (ADS)

    Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.

    2003-06-01

    The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.

  3. Journal: A Review of Some Tracer-Test Design Equations for ...

    EPA Pesticide Factsheets

    Determination of necessary tracer mass, initial sample-collection time, and subsequent sample-collection frequency are the three most difficult aspects to estimate for a proposed tracer test prior to conducting the tracer test. To facilitate tracer-mass estimation, 33 mass-estimation equations are reviewed here, 32 of which were evaluated using previously published tracer-test design examination parameters. Comparison of the results produced a wide range of estimated tracer mass, but no means is available by which one equation may be reasonably selected over the others. Each equation produces a simple approximation for tracer mass. Most of the equations are based primarily on estimates or measurements of discharge, transport distance, and suspected transport times. Although the basic field parameters commonly employed are appropriate for estimating tracer mass, the 33 equations are problematic in that they were all probably based on the original developers' experience in a particular field area and not necessarily on measured hydraulic parameters or solute-transport theory. Suggested sampling frequencies are typically based primarily on probable transport distance, but with little regard to expected travel times. This too is problematic in that it tends to result in false negatives or data aliasing. Simulations from the recently developed efficient hydrologic tracer-test design methodology (EHTD) were compared with those obtained from 32 of the 33 published tracer-

  4. Radiative lifetimes in B I using ultraviolet and vacuum-ultraviolet laser-induced fluorescence

    NASA Technical Reports Server (NTRS)

    O'Brian, T. R.; Lawler, J. E.

    1992-01-01

    Radiative lifetimes of the eight lowest even parity levels in the doublet system of B I are measured using time-resolved laser-induced fluorescence in the UV and VUV on an atomic beam of boron. The accurate lifetimes provide a base for improved determination of absolute transition probabilities in B I. The techniques described are broadly applicable to measurement of lifetimes of levels with transitions in the visible, UV, and VUV in almost any element.

  5. Similarity analysis between quantum images

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is essential in quantum image processing, as it provides a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and the measurement result has a high probability of being 1 when the two images are different.

  6. Study of heavy-ion induced fission for heavy-element synthesis

    NASA Astrophysics Data System (ADS)

    Nishio, K.; Ikezoe, H.; Hofmann, S.; Heßberger, F. P.; Ackermann, D.; Antalic, S.; Aritomo, Y.; Comas, V. F.; Düllman, Ch. E.; Gorshkov, A.; Graeger, R.; Heinz, S.; Heredia, J. A.; Hirose, K.; Khuyagbaatar, J.; Kindler, B.; Kojouharov, I.; Lommel, B.; Makii, H.; Mann, R.; Mitsuoka, S.; Nagame, Y.; Nishinaka, I.; Ohtsuki, T.; Popeko, A. G.; Saro, S.; Schädel, M.; Türler, A.; Wakabayashi, Y.; Watanabe, Y.; Yakushev, A.; Yeremin, A. V.

    2014-03-01

    Fission fragment mass distributions were measured in heavy-ion induced fissions using a 238U target nucleus. The measured mass distributions changed drastically with incident energy. The results are explained by a change of the ratio between fusion and quasifission with nuclear orientation. A calculation based on a fluctuation dissipation model reproduced the mass distributions and their incident energy dependence. Fusion probability was determined in the analysis, and the values were consistent with those determined from the evaporation residue cross sections.

  7. Volume Changes After Stereotactic LINAC Radiotherapy in Vestibular Schwannoma: Control Rate and Growth Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langenberg, Rick van de, E-mail: rickvandelangenberg@hotmail.com; Dohmen, Amy J.C.; Bondt, Bert J. de

    2012-10-01

    Purpose: The purpose of this study was to evaluate the control rate of vestibular schwannomas (VS) after treatment with linear accelerator (LINAC)-based stereotactic radiosurgery (SRS) or radiotherapy (SRT) by using a validated volumetric measuring tool. Volume-based studies on prognosis after LINAC-based SRS or SRT for VS are reported scarcely. In addition, growth patterns and risk factors predicting treatment failure were analyzed. Materials and Methods: Retrospectively, 37 VS patients treated with LINAC-based SRS or SRT were analyzed. Baseline and follow-up magnetic resonance imaging scans were analyzed with volume measurements on contrast-enhanced T1-weighted magnetic resonance imaging. Absence of intervention after radiotherapy was defined as the 'no additional intervention' group; absence of radiological growth was defined as the 'radiological control' group. Significant growth was defined as a volume change of 19.7% or more, as calculated in a previous study. Results: The cumulative 4-year probability of no additional intervention was 96.4% ± 0.03; the 4-year radiological control probability was 85.4% ± 0.1. The median follow-up was 40 months. Overall, shrinkage was seen in 65%, stable VS in 22%, and growth in 13%. In 54% of all patients, transient swelling was observed. No prognostic factors were found regarding VS growth. Previous treatment and SRS were significantly associated with transient swelling. Conclusions: Good control rates are reported for LINAC-based SRS or SRT in VS, in which the lower rate of radiological growth control is attributed to the use of the more sensitive volume measurements. Transient swelling after radiosurgery is a common phenomenon and should not be mistaken for treatment failure. Previous treatment and SRS were significantly associated with transient swelling.

  8. Measures of School Integration: Comparing Coleman's Index to Measures of Species Diversity.

    ERIC Educational Resources Information Center

    Mercil, Steven Bray; Williams, John Delane

    This study used species diversity indices developed in ecology as a measure of socioethnic diversity, and compared them to Coleman's Index of Segregation. The twelve indices were Simpson's Concentration Index ("ell"), Simpson's Index of Diversity, Hurlbert's Probability of Interspecific Encounter (PIE), Simpson's Probability of…

  9. Psychological and Social Risk Factors in Adolescent Smoking Transitions: A Population-Based Longitudinal Study

    PubMed Central

    Bricker, Jonathan B.; Rajan, K. Bharat; Zalewski, Maureen; Andersen, M. Robyn; Ramey, Madelaine; Peterson, Arthur V.

    2009-01-01

    Objective This study longitudinally investigated psychological and social risk factors consistent with the Theory of Triadic Influence (TTI) as predictors of adolescent smoking transitions. Design Among 4218 adolescents, five psychological risk factors (i.e., parent-noncompliance, friend-compliance, rebelliousness, low achievement motivation, and thrill seeking) were assessed in 9th grade (age 14), two social influence risk factors (i.e., parents’ and close friends’ smoking) were assessed in grades 3 (age 8) and 9 (age 14), respectively. Main Outcome Measures Adolescent smoking transitions occurring between the 9th and 12th (ages 14–17) grade interval. Results There was a 22–27% probability contributed by scoring high on each of these psychological risk factors to the overall probability that an adolescent would try smoking. For predicting trying smoking, the probability contributed by these psychological factors was greater than the probability contributed by each parent’s and close friend’s smoking. Parent-compliance had a higher contribution to the probability of trying smoking when an adolescent’s parent smoked (p < .05), while friend-compliance had a higher contribution to the probability of trying smoking when an adolescent’s friend smoked (p<.001). Conclusion These psychological and social factors have an important influence on adolescent smoking transitions. Implications for TTI and smoking prevention interventions are discussed. PMID:19594268

  10. AGE-SPECIFIC PROBABILITY OF LIVE-BIRTH WITH OOCYTE CRYOPRESERVATION: AN INDIVIDUAL PATIENT DATA META-ANALYSIS

    PubMed Central

    CIL, AYLIN PELIN; BANG, HEEJUNG; OKTAY, KUTLUK

    2013-01-01

    Objective To estimate age-specific probabilities of live-birth with oocyte cryopreservation in non-donor (ND) egg cycles. Design Individual patient data (IPD) meta-analysis. Setting Assisted reproduction centers. Patients Infertile patients undergoing ND mature oocyte cryopreservation. Interventions PubMed was searched for clinical studies on oocyte cryopreservation from January 1996 through July 2011. Randomized and non-randomized studies that used ND frozen-thawed mature oocytes with pregnancy outcomes were included. Authors of eligible studies were contacted to obtain IPD. Main outcome measures Live-birth probabilities based on age, cryopreservation method, and the number of oocytes thawed, injected, or embryos transferred. Results Original data from 10 studies including 2265 cycles from 1805 patients were obtained. Live-birth success rates declined with age regardless of the freezing technique. Despite this age-induced compromise, live-births continued to occur as late as ages 42 and 44 with slowly-frozen (SF) and vitrified (VF) oocytes, respectively. Estimated probabilities of live-birth for VF oocytes were higher than those for SF. Conclusions The live-birth probabilities we calculated would enable more accurate counseling and informed decision-making for infertile women who consider oocyte cryopreservation. Given the success probabilities, we suggest that policy-makers should consider oocyte freezing as an integral part of the prevention and treatment of infertility. PMID:23706339

  11. UQ for Decision Making: How (at least five) Kinds of Probability Might Come Into Play

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    In 1959 I. J. Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "Dynamic probability": a probability one expects to change without any additional empirical evidence; the probabilities assigned by a chess-playing program when it is only halfway through its analysis being an example. This case is contrasted with the case of "Mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision-making? Mature probability is suggested as a useful sixth kind; although Good would doubtless argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like contexts and climate-like contexts. In the former one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative. This subjective "probability of a big surprise" is one way to communicate the probability of model-based information holding in practice, that is, the probability that the information on which the model-based probability is conditioned holds. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support and can shape uncertainty quantification more widely.

  12. How weak values emerge in joint measurements on cloned quantum systems.

    PubMed

    Hofmann, Holger F

    2012-07-13

    A statistical analysis of optimal universal cloning shows that it is possible to identify an ideal (but nonpositive) copying process that faithfully maps all properties of the original Hilbert space onto two separate quantum systems, resulting in perfect correlations for all observables. The joint probabilities for noncommuting measurements on separate clones then correspond to the real parts of the complex joint probabilities observed in weak measurements on a single system, where the measurements on the two clones replace the corresponding sequence of weak measurement and postselection. The imaginary parts of weak measurement statistics can be obtained by replacing the cloning process with a partial swap operation. A controlled-swap operation combines both processes, making the complete weak measurement statistics accessible as a well-defined contribution to the joint probabilities of fully resolved projective measurements on the two output systems.

  13. Improving the Rank Precision of Population Health Measures for Small Areas with Longitudinal and Joint Outcome Models

    PubMed Central

    Athens, Jessica K.; Remington, Patrick L.; Gangnon, Ronald E.

    2015-01-01

    Objectives The University of Wisconsin Population Health Institute has published the County Health Rankings since 2010. These rankings use population-based data to highlight health outcomes and the multiple determinants of these outcomes and to encourage in-depth health assessment for all United States counties. A significant methodological limitation, however, is the uncertainty of rank estimates, particularly for small counties. To address this challenge, we explore the use of longitudinal and pooled outcome data in hierarchical Bayesian models to generate county ranks with greater precision. Methods In our models we used pooled outcome data for three measure groups: (1) Poor physical and poor mental health days; (2) percent of births with low birth weight and fair or poor health prevalence; and (3) age-specific mortality rates for nine age groups. We used the fixed and random effects components of these models to generate posterior samples of rates for each measure. We also used time-series data in longitudinal random effects models for age-specific mortality. Based on the posterior samples from these models, we estimate ranks and rank quartiles for each measure, as well as the probability of a county ranking in its assigned quartile. Rank quartile probabilities for univariate, joint outcome, and/or longitudinal models were compared to assess improvements in rank precision. Results The joint outcome model for poor physical and poor mental health days resulted in improved rank precision, as did the longitudinal model for age-specific mortality rates. Rank precision for low birth weight births and fair/poor health prevalence based on the univariate and joint outcome models were equivalent. Conclusion Incorporating longitudinal or pooled outcome data may improve rank certainty, depending on characteristics of the measures selected. For measures with different determinants, joint modeling neither improved nor degraded rank precision. This approach suggests a simple way to use existing information to improve the precision of small-area measures of population health. PMID:26098858

  14. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information is generally a set of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion-matrix operations. Moreover, its result can describe the sources entirely and clearly, especially those whose distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result from which a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples with and without the a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys J Int. This research is supported by the Fundamental Research Funds for Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).

  15. Command and Control Systems Requirements Analysis. Volume 2. Measuring C2 Effectiveness with Decision Probability

    DTIC Science & Technology

    1990-09-01


  16. Observation of non-classical correlations in sequential measurements of photon polarization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.

    2016-10-01

    A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.

  17. Exploration of multiphoton entangled states by using weak nonlinearities

    PubMed Central

    He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting

    2016-01-01

    We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044

  18. Profit intensity and cases of non-compliance with the law of demand/supply

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Piotrowski, Edward W.; Sładkowski, Jan; Syska, Jacek

    2017-05-01

    We consider properties of the measurement intensity ρ of a random variable whose probability density function, represented by the corresponding Wigner function, attains negative values on part of the domain. We consider a simple economic interpretation of this problem. The model is used to demonstrate the applicability of the method to the analysis of negative probabilities in markets exhibiting anomalies in the law of supply and demand (e.g., Giffen goods). It turns out that the new conditions for optimizing the intensity ρ require a new strategy. We propose such a strategy (the so-called à rebours strategy) based on the fixed-point method and explore its effectiveness.

  19. Post-test probability for neonatal hyperbilirubinemia based on umbilical cord blood bilirubin, direct antiglobulin test, and ABO compatibility results.

    PubMed

    Peeters, Bart; Geerts, Inge; Van Mullem, Mia; Micalessi, Isabel; Saegeman, Veroniek; Moerman, Jan

    2016-05-01

    Many hospitals opt for early postnatal discharge of newborns, with a potential risk of readmission for neonatal hyperbilirubinemia. Assays and algorithms that could improve the prediction of significant neonatal hyperbilirubinemia are needed to optimize screening protocols and the safe discharge of neonates. This study investigated the predictive value of umbilical cord blood (UCB) testing for significant hyperbilirubinemia. Neonatal UCB bilirubin, UCB direct antiglobulin test (DAT), and blood group were determined, as well as the maternal blood group and the red blood cell antibody status. Moreover, in newborns with clinically apparent jaundice after visual assessment, plasma total bilirubin (TB) was measured. Clinical factors positively associated with UCB bilirubin were ABO incompatibility, positive DAT, presence of maternal red cell antibodies, alarming visual assessment, and significant hyperbilirubinemia in the first 6 days of life. UCB bilirubin performed well clinically, with an area under the receiver-operating characteristic curve (AUC) of 0.82 (95% CI 0.80-0.84). The combined UCB bilirubin, DAT, and blood group analysis outperformed these parameters considered separately for detecting significant hyperbilirubinemia and correlated exponentially with the post-test probability of hyperbilirubinemia. Post-test probabilities for neonatal hyperbilirubinemia can be calculated using exponential functions defined by UCB bilirubin, DAT, and ABO compatibility results. What is Known: • The diagnostic value of the triad of umbilical cord blood bilirubin measurement, direct antiglobulin testing, and blood group analysis for neonatal hyperbilirubinemia remains unclear in the literature. • Currently no guideline recommends screening for hyperbilirubinemia using umbilical cord blood. What is New: • Post-test probability for hyperbilirubinemia correlated exponentially with umbilical cord blood bilirubin in different risk groups defined by direct antiglobulin test and ABO blood group compatibility results. • Exponential functions can be used to calculate hyperbilirubinemia probability.
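
    A minimal sketch of how such exponential post-test-probability functions could be applied; the risk-group labels and coefficients below are hypothetical placeholders, not the values fitted in the study.

        import math

        # Hypothetical exponential post-test-probability functions of the form
        # p = min(1, a * exp(b * bilirubin)), one per risk group defined by the
        # direct antiglobulin test (DAT) and ABO compatibility result.
        COEFFS = {
            ("DAT+", "ABO-incompatible"): (0.020, 0.090),
            ("DAT-", "ABO-incompatible"): (0.010, 0.080),
            ("DAT-", "ABO-compatible"):   (0.005, 0.070),
        }

        def post_test_probability(ucb_bilirubin_umol_l, dat, abo):
            """Post-test probability of significant neonatal hyperbilirubinemia."""
            a, b = COEFFS[(dat, abo)]
            return min(1.0, a * math.exp(b * ucb_bilirubin_umol_l))

        # Example: a DAT-positive, ABO-incompatible newborn with an umbilical cord
        # blood bilirubin of 35 umol/L (all numbers illustrative).
        print(round(post_test_probability(35.0, "DAT+", "ABO-incompatible"), 3))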

  20. Establishing endangered species recovery criteria using predictive simulation modeling

    USGS Publications Warehouse

    McGowan, Conor P.; Catlin, Daniel H.; Shaffer, Terry L.; Gratto-Trevor, Cheri L.; Aron, Carol

    2014-01-01

    Listing a species under the Endangered Species Act (ESA) and developing a recovery plan requires the U.S. Fish and Wildlife Service to establish specific and measurable criteria for delisting. Generally, species are listed because they face (or are perceived to face) an elevated risk of extinction due to issues such as habitat loss, invasive species, or other factors. Recovery plans identify recovery criteria that reduce extinction risk to an acceptable level. It logically follows that the recovery criteria, the defined conditions for removing a species from ESA protections, need to be closely related to extinction risk. Extinction probability is a population parameter estimated with a model that uses current demographic information to project the population into the future over a number of replicates, calculating the proportion of replicated populations that go extinct. We simulated extinction probabilities of piping plovers in the Great Plains and estimated the relationship between extinction probability and various demographic parameters. We tested the fit of regression models linking initial abundance, productivity, or population growth rate to extinction risk, and then, using the regression parameter estimates, determined the conditions required to reduce extinction probability to some pre-defined acceptable threshold. Binomial regression models with mean population growth rate and the natural log of initial abundance were the best predictors of extinction probability 50 years into the future. For example, based on our regression models, an initial abundance of approximately 2400 females with an expected mean population growth rate of 1.0 will limit extinction risk for piping plovers in the Great Plains to less than 0.048. Our method provides a straightforward way of developing specific and measurable recovery criteria linked directly to the core issue of extinction risk. Published by Elsevier Ltd.
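
    A minimal sketch of the regression step described above: given binomial (logistic) regression coefficients linking extinction probability to mean population growth rate and the natural log of initial abundance, the relationship can be inverted to find the abundance that keeps extinction risk below a chosen threshold. The coefficients are illustrative assumptions, tuned only so that the example reproduces the ballpark figures quoted in the abstract.

        import math

        # Hypothetical logistic-regression coefficients; real values would come from
        # regressing simulated 50-year extinction outcomes on mean growth rate
        # (lambda) and ln(initial abundance).
        B0, B_LAMBDA, B_LOGN = 11.35, -5.0, -1.2

        def extinction_probability(lam, n0):
            """Predicted extinction probability from the fitted regression."""
            eta = B0 + B_LAMBDA * lam + B_LOGN * math.log(n0)
            return 1.0 / (1.0 + math.exp(-eta))

        def abundance_for_risk(lam, max_risk):
            """Invert the regression: initial abundance keeping risk <= max_risk."""
            logit = math.log(max_risk / (1.0 - max_risk))
            return math.exp((logit - B0 - B_LAMBDA * lam) / B_LOGN)

        # Abundance needed so extinction risk stays below 0.048 when the expected
        # mean population growth rate is 1.0 (coefficients illustrative).
        print(round(abundance_for_risk(1.0, 0.048)))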

  1. Bacterial adhesion forces to Ag-impregnated contact lens cases and transmission to contact lenses.

    PubMed

    Qu, Wenwen; Busscher, Henk J; van der Mei, Henny C; Hooymans, Johanna M M

    2013-03-01

    To measure adhesion forces of Pseudomonas aeruginosa, Staphylococcus aureus, and Serratia marcescens to a rigid contact lens (CL), standard polypropylene, and Ag-impregnated lens cases using atomic force microscopy and determine bacterial transmission from lens case to CL. Adhesion forces of bacterial strains to Ag-impregnated and polypropylene lens cases and a rigid CL were measured using atomic force microscopy. Adhesion forces were used to calculate Weibull distributions, from which transmission probabilities from lens case to CL were derived. Transmission probabilities were compared with actual transmission of viable bacteria from a lens case to the CL in 0.9% NaCl and in an antimicrobial lens care solution. Bacterial transmission probabilities from polypropylene lens cases based on force analysis coincided well for all strains with actual transmission in 0.9% NaCl. Bacterial adhesion forces on Ag-impregnated lens cases were much smaller than that on polypropylene and CLs, yielding a high probability of transmission. Comparison with actual bacterial transmission indicated bacterial killing due to Ag ions during colony-forming unit transmission from an Ag-impregnated lens case, especially for P. aeruginosa. Transmission of viable bacteria from Ag-impregnated lens cases could be further decreased by use of an antimicrobial lens care solution instead of 0.9% NaCl. Bacterial transmission probabilities are higher from Ag-impregnated lens cases than from polypropylene lens cases because of small adhesion forces, but this is compensated for by enhanced bacterial killing due to Ag impregnation, especially when in combination with an antimicrobial lens care solution. This calls for a balanced combination of antimicrobial lens care solutions and surface properties of a lens case and CL.
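
    A rough sketch, under assumptions of my own, of how Weibull-distributed adhesion forces could be turned into a case-to-lens transmission probability: adhesion forces to the lens case and to the contact lens are each fitted with a Weibull distribution, and transmission is read as the chance that a bacterium is held more weakly by the case than by the lens. The exact probability definition used in the study may differ, and the force samples below are synthetic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Illustrative AFM adhesion-force samples (nN); real data would come from
        # atomic force microscopy on the lens case and on the contact lens.
        force_case = rng.weibull(2.0, 200) * 1.5   # weak adhesion (e.g., Ag-impregnated case)
        force_lens = rng.weibull(2.0, 200) * 4.0   # stronger adhesion to the contact lens

        # Fit two-parameter Weibull distributions (location fixed at zero).
        shape_c, _, scale_c = stats.weibull_min.fit(force_case, floc=0)
        shape_l, _, scale_l = stats.weibull_min.fit(force_lens, floc=0)

        # Monte Carlo estimate of P(force to case < force to lens), read here as the
        # probability that a bacterium leaves the case and transmits to the lens.
        f_c = stats.weibull_min.rvs(shape_c, scale=scale_c, size=100_000, random_state=1)
        f_l = stats.weibull_min.rvs(shape_l, scale=scale_l, size=100_000, random_state=2)
        print("estimated transmission probability:", float(np.mean(f_c < f_l)))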

  2. Effect of social influence on effort-allocation for monetary rewards.

    PubMed

    Gilman, Jodi M; Treadway, Michael T; Curran, Max T; Calderon, Vanessa; Evins, A Eden

    2015-01-01

    Though decades of research have shown that people are highly influenced by peers, few studies have directly assessed how the value of social conformity is weighed against other types of costs and benefits. Using an effort-based decision-making paradigm with a novel social influence manipulation, we measured how social influence affected individuals' decisions to allocate effort for monetary rewards during trials with either high or low probability of receiving a reward. We found that information about the effort-allocation of peers modulated participant choices, specifically during conditions of low probability of obtaining a reward. This suggests that peer influence affects effort-based choices to obtain rewards especially under conditions of risk. This study provides evidence that people value social conformity in addition to other costs and benefits when allocating effort, and suggests that neuroeconomic studies that assess trade-offs between effort and reward should consider social environment as a factor that can influence decision-making.

  3. A consensus-based dynamics for market volumes

    NASA Astrophysics Data System (ADS)

    Sabatelli, Lorenzo; Richmond, Peter

    2004-12-01

    We develop a model of trading orders based on opinion dynamics. The agents may be thought of as the shareholders of a major mutual fund rather than as direct traders. The balance between their buy and sell orders determines the size of the fund order (volume) and has an impact on prices and indexes. We assume agents interact simultaneously with each other through a Sznajd-like interaction. Their degree of connection is determined by the probability of changing opinion independently of what their neighbours are doing. We assume that such a probability may change randomly, after each transaction, by an amount proportional to the relative difference between the volatility then measured and a benchmark that we take to be an exponential moving average of the past volume values. We show how this simple model is compatible with some of the main statistical features observed for asset volumes in financial markets.
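
    A compact sketch of the dynamics described above, with every numerical parameter chosen arbitrarily: agents hold buy/sell opinions on a ring, a Sznajd-like pair interaction spreads consensus, each agent also flips independently with probability p, the order imbalance is read as the traded volume, and p drifts after each step by an amount proportional to the relative difference between a recent volatility measure and an exponential moving average of past volumes.

        import numpy as np

        rng = np.random.default_rng(42)

        N, T = 200, 2000          # number of agents and of transactions (arbitrary)
        alpha, kappa = 0.1, 0.05  # EMA smoothing and feedback strength (arbitrary)
        p = 0.1                   # probability of flipping opinion independently

        opinions = rng.choice([-1, 1], size=N)   # +1 = buy order, -1 = sell order
        volumes, ema = [], 0.0

        for t in range(T):
            # Sznajd-like step: two neighbouring agents sharing an opinion impose it
            # on their outer neighbours (1D ring topology for simplicity).
            i = rng.integers(N)
            j = (i + 1) % N
            if opinions[i] == opinions[j]:
                opinions[(i - 1) % N] = opinions[i]
                opinions[(j + 1) % N] = opinions[i]

            # Independent opinion changes with probability p.
            flips = rng.random(N) < p
            opinions[flips] *= -1

            # Net order imbalance is taken as the fund's order size (volume).
            volume = abs(int(opinions.sum()))
            volumes.append(volume)
            ema = alpha * volume + (1 - alpha) * ema

            # Recent volatility of volumes compared with the EMA benchmark;
            # p changes randomly by an amount proportional to the relative difference.
            vol = float(np.std(volumes[-20:])) if len(volumes) >= 20 else 0.0
            if ema > 0:
                p += kappa * rng.uniform(-1, 1) * (vol - ema) / ema
                p = min(max(p, 0.01), 0.5)

        print("mean volume:", float(np.mean(volumes)))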

  4. Gesture Recognition Based on the Probability Distribution of Arm Trajectories

    NASA Astrophysics Data System (ADS)

    Wan, Khairunizam; Sawada, Hideyuki

    The use of human motions for the interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures are used as non-verbal media by which humans communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified by using a fuzzy technique. Experimental results show that the use of features extracted from arm trajectories works effectively for the recognition of dynamic human gestures and gives good performance in classifying various gesture patterns.

  5. On fitting the Pareto Levy distribution to stock market index data: Selecting a suitable cutoff value

    NASA Astrophysics Data System (ADS)

    Coronel-Brizio, H. F.; Hernández-Montoya, A. R.

    2005-08-01

    The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
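
    A short sketch of an objective cutoff-selection procedure in this spirit; here the discrepancy measure is assumed to be the Kolmogorov-Smirnov distance between the empirical tail and a fitted Pareto tail, which may differ from the statistic the authors actually use, and the data are synthetic.

        import numpy as np

        def fit_tail_exponent(x, xmin):
            """Maximum-likelihood (Hill) estimate of the power-law exponent for x >= xmin."""
            tail = x[x >= xmin]
            return 1.0 + len(tail) / np.sum(np.log(tail / xmin)), tail

        def ks_discrepancy(tail, xmin, alpha):
            """Kolmogorov-Smirnov distance between empirical and Pareto tail CDFs."""
            tail = np.sort(tail)
            empirical = np.arange(1, len(tail) + 1) / len(tail)
            model = 1.0 - (xmin / tail) ** (alpha - 1.0)
            return float(np.max(np.abs(empirical - model)))

        def select_cutoff(x, candidates):
            """Pick the threshold (cutoff) that minimizes the discrepancy measure."""
            best_xmin, best_d = None, np.inf
            for xmin in candidates:
                alpha, tail = fit_tail_exponent(x, xmin)
                d = ks_discrepancy(tail, xmin, alpha)
                if d < best_d:
                    best_xmin, best_d = xmin, d
            return best_xmin, best_d

        # Synthetic "index variation" data with a Pareto tail (illustrative only).
        rng = np.random.default_rng(0)
        data = rng.pareto(2.5, 5000) + 1.0
        xmin, d = select_cutoff(data, np.quantile(data, [0.5, 0.7, 0.8, 0.9, 0.95]))
        print("selected cutoff:", round(float(xmin), 3), "KS distance:", round(d, 4))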

  6. Systematic design for trait introgression projects.

    PubMed

    Cameron, John N; Han, Ye; Wang, Lizhi; Beavis, William D

    2017-10-01

    Using an Operations Research approach, we demonstrate the design of optimal trait introgression projects with respect to competing objectives. We demonstrate an innovative approach for designing trait introgression (TI) projects based on optimization principles from Operations Research. If the designs of TI projects are based on clear and measurable objectives, they can be expressed as mathematical models with decision variables and constraints, which in turn can be translated into Pareto optimality plots associated with any arbitrary selection strategy. The Pareto plots can be used to make rational decisions concerning the trade-offs between maximizing the probability of success and minimizing costs and time. The systematic rigor associated with a cost, time and probability of success (CTP) framework is well suited to designing TI projects that require dynamic decision making. The CTP framework also revealed that previously identified 'best' strategies can be improved to be at least twice as effective without increasing time or expenses.

  7. Heritable DNA methylation marks associated with susceptibility to breast cancer.

    PubMed

    Joo, Jihoon E; Dowty, James G; Milne, Roger L; Wong, Ee Ming; Dugué, Pierre-Antoine; English, Dallas; Hopper, John L; Goldgar, David E; Giles, Graham G; Southey, Melissa C

    2018-02-28

    Mendelian-like inheritance of germline DNA methylation in cancer susceptibility genes has been previously reported. We aimed to scan the genome for heritable methylation marks associated with breast cancer susceptibility by studying 25 Australian multiple-case breast cancer families. Here we report genome-wide DNA methylation measured in 210 peripheral blood DNA samples provided by family members using the Infinium HumanMethylation450. We develop and apply a new statistical method to identify heritable methylation marks based on complex segregation analysis. We estimate carrier probabilities for the 1000 most heritable methylation marks based on family structure, and we use Cox proportional hazards survival analysis to identify 24 methylation marks with corresponding carrier probabilities significantly associated with breast cancer. We replicate an association with breast cancer risk for four of the 24 marks using an independent nested case-control study. Here, we report a novel approach for identifying heritable DNA methylation marks associated with breast cancer risk.

  8. Distribution of Base Pair Alternations in a Periodic DNA Chain: Application of Pólya Counting to a Physical System

    NASA Astrophysics Data System (ADS)

    Hillebrand, Malcolm; Paterson-Jones, Guy; Kalosakas, George; Skokos, Charalampos

    2018-03-01

    In modeling DNA chains, the number of alternations between Adenine-Thymine (AT) and Guanine-Cytosine (GC) base pairs can be considered as a measure of the heterogeneity of the chain, which in turn could affect its dynamics. A probability distribution function of the number of these alternations is derived for circular or periodic DNA. Since there are several symmetries to account for in the periodic chain, necklace counting methods are used. In particular, Pólya's Enumeration Theorem is extended for the case of a group action that preserves partitioned necklaces. This, along with the treatment of generating functions as formal power series, allows for the direct calculation of the number of possible necklaces with a given number of AT base pairs, GC base pairs and alternations. The theoretically obtained probability distribution functions of the number of alternations are accurately reproduced by Monte Carlo simulations and fitted by Gaussians. The effect of the number of base pairs on the characteristics of these distributions is also discussed, as well as the effect of the ratios of the numbers of AT and GC base pairs.
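
    A small sketch of the Monte Carlo check mentioned above: AT and GC labels are placed uniformly at random on a circular chain of fixed composition, and the number of alternations (adjacent unlike base pairs, including the wrap-around bond) is tallied to estimate its probability distribution. The chain length and composition are arbitrary.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(1)

        def alternations(chain):
            """Count adjacent unlike base pairs around a circular chain (0 = AT, 1 = GC)."""
            return int(np.sum(chain != np.roll(chain, 1)))

        def alternation_distribution(n_at, n_gc, trials=100_000):
            """Monte Carlo estimate of P(number of alternations) for a circular chain."""
            base = np.array([0] * n_at + [1] * n_gc)
            counts = Counter(alternations(rng.permutation(base)) for _ in range(trials))
            return {k: v / trials for k, v in sorted(counts.items())}

        # Example: a circular chain with 12 AT and 8 GC base pairs (arbitrary composition).
        for k, prob in alternation_distribution(12, 8).items():
            print(f"{k:2d} alternations: {prob:.4f}")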

  9. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of the route, and the unobserved variance, respectively. Considering the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings sampling-based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188

  10. Using risk-based analysis and geographic information systems to assess flooding problems in an urban watershed in Rhode Island.

    PubMed

    Hardmeyer, Kent; Spencer, Michael A

    2007-04-01

    This article provides an overview of the use of risk-based analysis (RBA) in flood damage assessment, and it illustrates the use of Geographic Information Systems (GIS) in identifying flood-prone areas, which can aid in flood-mitigation planning assistance. We use RBA to calculate expected annual flood damages in an urban watershed in the state of Rhode Island, USA. The method accounts for the uncertainty in the three primary relationships used in computing flood damage: (1) the probability that a given flood will produce a given amount of floodwater, (2) the probability that a given amount of floodwater will reach a certain stage or height, and (3) the probability that a certain stage of floodwater will produce a given amount of damage. A greater than 50% increase in expected annual flood damage is estimated for the future if previous development patterns continue and flood-mitigation measures are not taken. GIS is then used to create a map that shows where and how often floods might occur in the future, which can help (1) identify priority areas for flood-mitigation planning assistance and (2) disseminate information to public officials and other decision-makers.
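
    A bare-bones sketch of the expected-annual-damage calculation underlying RBA: damage is evaluated on a grid of annual exceedance probabilities by chaining the three relationships named above (probability to flow, flow to stage, stage to damage), and the resulting damage-probability curve is integrated numerically. All three curves below are placeholders, and the Monte Carlo treatment of their uncertainty is omitted.

        import numpy as np

        # Annual exceedance probabilities at which damage is evaluated (frequent to rare).
        exceedance_p = np.array([0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.002])

        def flow_from_probability(p):
            """Discharge (m^3/s) for a given annual exceedance probability (placeholder)."""
            return 50.0 * p ** -0.4

        def stage_from_flow(q):
            """Flood stage (m) as a function of discharge (placeholder rating curve)."""
            return 0.3 * q ** 0.5

        def damage_from_stage(h):
            """Damage ($) as a function of stage (placeholder depth-damage curve)."""
            return max(0.0, 2.0e5 * (h - 2.0))

        damages = np.array([damage_from_stage(stage_from_flow(flow_from_probability(p)))
                            for p in exceedance_p])

        # Expected annual damage: integrate damage over exceedance probability
        # (trapezoidal rule over the damage-probability curve).
        ead = np.trapz(damages[::-1], exceedance_p[::-1])
        print(f"expected annual damage: ${ead:,.0f}")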

  11. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Schmittner, A.; Urban, N.; Shakun, J. D.; Mahowald, N. M.; Clark, P. U.; Bartlein, P. J.; Mix, A. C.; Rosell-Melé, A.

    2011-12-01


  12. Experiment K-317: Bone resorption in rats during spaceflight

    NASA Technical Reports Server (NTRS)

    Cann, C. E.; Adachi, R. R.

    1981-01-01

    Direct measurement of bone resorption in flight and synchronous control rats is described. Continuous tracer administration techniques were used, with replacement of dietary calcium with isotopically enriched Ca40 and measurement by neutron activation analysis of the Ca48 released by the skeleton. There is no large change in bone resorption in rats. Based on the time course of changes, the measured 20-25% decrease in resorption is probably secondary to a decrease in total body calcium turnover. The excretion of sodium, potassium and zinc all increase during flight, sodium and potassium to a level 4-5 times control values.

  13. Full statistical mode reconstruction of a light field via a photon-number-resolved measurement

    NASA Astrophysics Data System (ADS)

    Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.

    2017-05-01

    We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.

  14. Base pair probability estimates improve the prediction accuracy of RNA non-canonical base pairs

    PubMed Central

    2017-01-01

    Prediction of RNA tertiary structure from sequence is an important problem, but generating accurate structure models for even short sequences remains difficult. Predictions of RNA tertiary structure tend to be least accurate in loop regions, where non-canonical pairs are important for determining the details of structure. Non-canonical pairs can be predicted using a knowledge-based model of structure that scores nucleotide cyclic motifs, or NCMs. In this work, a partition function algorithm is introduced that allows the estimation of base pairing probabilities for both canonical and non-canonical interactions. Pairs that are predicted to be probable are more likely to be found in the true structure than pairs of lower probability. Pair probability estimates can be further improved by predicting the structure conserved across multiple homologous sequences using the TurboFold algorithm. These pairing probabilities, used in concert with prior knowledge of the canonical secondary structure, allow accurate inference of non-canonical pairs, an important step towards accurate prediction of the full tertiary structure. Software to predict non-canonical base pairs and pairing probabilities is now provided as part of the RNAstructure software package. PMID:29107980

  15. TH-A-BRF-02: BEST IN PHYSICS (JOINT IMAGING-THERAPY) - Modeling Tumor Evolution for Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Lee, CG; Chan, TCY

    2014-06-15

    Purpose: To develop mathematical models of tumor geometry changes under radiotherapy that may support future adaptive paradigms. Methods: A total of 29 cervical patients were scanned using MRI, once for planning and weekly thereafter for treatment monitoring. Using the tumor volumes contoured by a radiologist, three mathematical models were investigated based on the assumption of a stochastic process of tumor evolution. The “weekly MRI” model predicts tumor geometry for the following week from the last two consecutive MRI scans, based on the voxel transition probability. The other two models use only the first pair of consecutive MRI scans, and the transition probabilities were estimated via a tumor type classified from the entire data set. The classification is based on either measuring the tumor volume (the “weekly volume” model) or implementing an auxiliary “Markov chain” model. These models were compared to a constant volume approach that represents current clinical practice, using various model parameters; e.g., the threshold probability β converts the probability map into a tumor shape (a larger threshold implies a smaller tumor). Model performance was measured using the volume conformity index (VCI), i.e., the union of the actual and modeled target volumes, squared, divided by the product of these two volumes. Results: The “weekly MRI” model outperforms the constant volume model by 26% on average, and by 103% for the worst 10% of cases in terms of VCI, under a wide range of β. The “weekly volume” and “Markov chain” models outperform the constant volume model by 20% and 16% on average, respectively. They also perform better than the “weekly MRI” model when β is large. Conclusion: It has been demonstrated that mathematical models can be developed to predict tumor geometry changes for cervical cancer undergoing radiotherapy. The models can potentially support an adaptive radiotherapy paradigm by reducing normal tissue dose. This research was supported in part by the Ontario Consortium for Adaptive Interventions in Radiation Oncology (OCAIRO) funded by the Ontario Research Fund (ORF) and the MITACS Accelerate Internship Program.
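
    A tiny sketch of the volume conformity index as defined in the abstract (union of the actual and modeled target volumes, squared, divided by the product of the two volumes), computed from boolean voxel masks; with this definition a perfect match gives VCI = 1 and larger values indicate poorer agreement. The toy geometry is made up.

        import numpy as np

        def volume_conformity_index(actual, modeled):
            """VCI = (union volume)^2 / (actual volume * modeled volume).

            `actual` and `modeled` are boolean voxel masks of the same shape.
            """
            union = np.logical_or(actual, modeled).sum()
            return float(union ** 2 / (actual.sum() * modeled.sum()))

        # Toy example: two slightly offset spherical targets on a voxel grid.
        z, y, x = np.ogrid[:40, :40, :40]
        actual = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 <= 8 ** 2
        modeled = (x - 22) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 <= 8 ** 2
        print("VCI:", round(volume_conformity_index(actual, modeled), 3))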

  16. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  17. [The percutaneous effect of a heparin-allantoin-dexpanthenol combination in a specific ointment base. Anti-allergic, anti-inflammatory effect in the PCA test in the rat].

    PubMed

    Rudolph, C; Tauschel, H D

    1984-01-01

    Local application of a heparin-allantoin-dexpanthenol (Hepathrombin-Adenylchemie) ointment to rats 15 min prior to induction of a passive cutaneous anaphylaxis (PCA) reaction inhibits the anaphylactic reaction as compared to the ointment base. Also, the Evans Blue content, as a measure of vascular permeability, and the oedema weights are reduced under the heparin-containing ointment. The antiallergic/antiinflammatory effect is probably due to heparin.

  18. Portfolios in Stochastic Local Search: Efficiently Computing Most Probable Explanations in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Roth, Dan; Wilkins, David C.

    2001-01-01

    Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, thereby enabling us to derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which again complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that expected hitting time is a rational function (i.e., a ratio of two polynomials) of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. In application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
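
    A minimal sketch contrasting the usual multiplicative utility of an explanation with the additive utility described above; the tiny set of conditional-probability-table entries is made up. With a zero CPT entry the multiplicative score collapses to zero, while the additive score still discriminates between explanations.

        import math

        # An "explanation" assigns values to all variables; its quality is scored from
        # the conditional probability table (CPT) entries it selects (values made up).
        cpt_entries_a = [0.9, 0.8, 0.0, 0.7]   # explanation A hits one zero entry
        cpt_entries_b = [0.6, 0.5, 0.4, 0.3]   # explanation B avoids zero entries

        def multiplicative_utility(entries):
            """Standard MPE score: product of the selected CPT entries."""
            return math.prod(entries)

        def additive_utility(entries):
            """Additive score, following the additive approach described above."""
            return sum(entries)

        for name, entries in [("A", cpt_entries_a), ("B", cpt_entries_b)]:
            print(name, "multiplicative:", multiplicative_utility(entries),
                  "additive:", additive_utility(entries))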

  19. Probabilistic properties of wavelets in kinetic surface roughening

    NASA Astrophysics Data System (ADS)

    Bershadskii, A.

    2001-08-01

    Using the data of a recent numerical simulation [M. Ahr and M. Biehl, Phys. Rev. E 62, 1773 (2000)] of homoepitaxial growth it is shown that the observed probability distribution of a wavelet based measure of the growing surface roughness is consistent with a stretched log-normal distribution and the corresponding branching dimension depends on the level of particle desorption.

  20. Confidence Intervals for Proportion Estimates in Complex Samples. Research Report. ETS RR-06-21

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    Confidence intervals are an important tool to indicate uncertainty of estimates and to give an idea of probable values of an estimate if a different sample from the population was drawn or a different sample of measures was used. Standard symmetric confidence intervals for proportion estimates based on a normal approximation can yield bounds…
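
    A short sketch of the issue at stake: the standard symmetric (Wald) interval based on the normal approximation, shown next to the Wilson score interval, which always stays inside [0, 1]. The proportion and sample size are arbitrary, and the report's complex-sample adjustments are not reproduced here.

        import math

        def wald_interval(p_hat, n, z=1.96):
            """Symmetric normal-approximation interval; bounds can fall outside [0, 1]."""
            half = z * math.sqrt(p_hat * (1 - p_hat) / n)
            return p_hat - half, p_hat + half

        def wilson_interval(p_hat, n, z=1.96):
            """Wilson score interval; bounds always stay within [0, 1]."""
            denom = 1 + z ** 2 / n
            centre = (p_hat + z ** 2 / (2 * n)) / denom
            half = z * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2)) / denom
            return centre - half, centre + half

        # Example: a small proportion estimated from a small sample (arbitrary numbers).
        print("Wald:  ", wald_interval(0.03, 40))   # lower bound is negative
        print("Wilson:", wilson_interval(0.03, 40))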

  1. Substance Use and Dependency Disorders in Adolescent Girls in Group Living Programs: Prevalence and Associations with Milieu Factors

    ERIC Educational Resources Information Center

    Baker, Amy J. L.; Ashare, Caryn; Charvat, Benjamin J.

    2009-01-01

    Fifty-three adolescent girls residing in community-based group-living child welfare programs were administered a standardized measure (SASS-2) in order to assess probability of a substance use/dependency disorder in this highly vulnerable population. Findings revealed that one third of the sample, and one half of the nonpregnant/parenting girls,…

  2. Measuring Forest Area Loss Over Time Using FIA Plots and Satellite Imagery

    Treesearch

    Michael L. Hoppus; Andrew J. Lister

    2005-01-01

    How accurately can FIA plots, scattered at 1 per 6,000 acres, identify often rare forest land loss, estimated at less than 1 percent per year in the Northeast? Here we explore this question mathematically, empirically, and by comparing FIA plot estimates of forest change with satellite image based maps of forest loss. The mathematical probability of exactly estimating...

  3. SIMP J013656.5+093347 Is Likely a Planetary-mass Object in the Carina-Near Moving Group

    NASA Astrophysics Data System (ADS)

    Gagné, Jonathan; Faherty, Jacqueline K.; Burgasser, Adam J.; Artigau, Étienne; Bouchard, Sandie; Albert, Loïc; Lafrenière, David; Doyon, René; Bardalez Gagliuffi, Daniella C.

    2017-05-01

    We report on the discovery that the nearby (~6 pc) photometrically variable T2.5 dwarf SIMP J013656.5+093347 is a likely member of the ~200 Myr old Carina-Near moving group with a probability of >99.9% based on its full kinematics. Our v sin i measurement of 50.9 ± 0.8 km s^-1, combined with the known rotation period inferred from variability measurements, provides a lower limit of 1.01 ± 0.02 R_Jup on the radius of SIMP 0136+0933, an independent verification that it must be younger than ~950 Myr according to evolution models. We estimate a field interloper probability of 0.2% based on the density of field T0-T5 dwarfs. At the age of Carina-Near, SIMP 0136+0933 has an estimated mass of 12.7 ± 1.0 M_Jup and is predicted to have burned roughly half of its original deuterium. SIMP 0136+0933 is the closest known young moving group member to the Sun and is one of only a few known young T dwarfs, making it an important benchmark for understanding the atmospheres of young planetary-mass objects.
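
    A quick check of the radius lower limit quoted above: the equatorial rotation velocity is at least v sin i, so the radius must satisfy R >= P v sin i / (2π). The v sin i value is taken from the abstract; the rotation period (roughly 2.4 h) is an assumption drawn from published variability monitoring of this object, and Jupiter's mean radius is used for the conversion.

        import math

        V_SINI_KM_S = 50.9    # projected rotation velocity (km/s), from the abstract
        PERIOD_H = 2.414      # assumed photometric rotation period (h) from variability studies
        R_JUP_KM = 69911.0    # Jupiter's mean radius (km)

        # The equatorial velocity is >= v sin i, so the circumference 2*pi*R must be
        # at least PERIOD * v sin i, giving a lower limit on the radius.
        r_min_km = PERIOD_H * 3600.0 * V_SINI_KM_S / (2.0 * math.pi)
        print(f"R_min = {r_min_km:.0f} km = {r_min_km / R_JUP_KM:.2f} R_Jup")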

  4. Proposal and Evaluation of BLE Discovery Process Based on New Features of Bluetooth 5.0.

    PubMed

    Hernández-Solana, Ángela; Perez-Diaz-de-Cerio, David; Valdovinos, Antonio; Valenzuela, Jose Luis

    2017-08-30

    The device discovery process is one of the most crucial aspects in real deployments of sensor networks. Recently, several works have analyzed the topic of Bluetooth Low Energy (BLE) device discovery through analytical or simulation models limited to version 4.x. Non-connectable and non-scannable undirected advertising has been shown to be a reliable alternative for discovering a high number of devices in a relatively short time period. However, new features of Bluetooth 5.0 allow us to define a variant on the device discovery process, based on BLE scannable undirected advertising events, which results in higher discovering capacities and also lower power consumption. In order to characterize this new device discovery process, we experimentally model the real device behavior of BLE scannable undirected advertising events. Non-detection packet probability, discovery probability, and discovery latency for a varying number of devices and parameters are compared by simulations and experimental measurements. We demonstrate that our proposal outperforms previous works, diminishing the discovery time and increasing the potential user device density. A mathematical model is also developed in order to easily obtain a measure of the potential capacity in high density scenarios.

  5. Proposal and Evaluation of BLE Discovery Process Based on New Features of Bluetooth 5.0

    PubMed Central

    2017-01-01

    The device discovery process is one of the most crucial aspects in real deployments of sensor networks. Recently, several works have analyzed the topic of Bluetooth Low Energy (BLE) device discovery through analytical or simulation models limited to version 4.x. Non-connectable and non-scannable undirected advertising has been shown to be a reliable alternative for discovering a high number of devices in a relatively short time period. However, new features of Bluetooth 5.0 allow us to define a variant on the device discovery process, based on BLE scannable undirected advertising events, which results in higher discovering capacities and also lower power consumption. In order to characterize this new device discovery process, we experimentally model the real device behavior of BLE scannable undirected advertising events. Non-detection packet probability, discovery probability, and discovery latency for a varying number of devices and parameters are compared by simulations and experimental measurements. We demonstrate that our proposal outperforms previous works, diminishing the discovery time and increasing the potential user device density. A mathematical model is also developed in order to easily obtain a measure of the potential capacity in high density scenarios. PMID:28867786

  6. Anatomy-based eligibility measure for robotic-assisted bypass surgery.

    PubMed

    Escoto, Abelardo; Trejos, Ana Luisa; Patel, Rajni V; Goela, Aashish; Kiaii, Bob

    2014-01-01

    Robotic-assisted endoscopic single-vessel small thoracotomy allows clinicians to perform coronary artery bypass grafting surgery in a minimally invasive manner using the da Vinci Surgical System. Not all patients are suitable for this technique, and the lack of an appropriate method for assessing patient eligibility can prevent completion of the procedure robotically. The objective of this study was to develop a patient eligibility method based on the anatomy of the patient's chest. Preoperative computed tomography thorax scans of 110 patients were analyzed. Two-dimensional measurements taken on the axial images were used with the goal of finding a relation between the anatomy of the patient and the completion of the procedure robotically. Patients with a distance from the left anterior descending coronary artery to the anterior chest wall of less than 15 mm have a 20% probability of requiring conversion of the procedure to open surgery. This probability increases if the patient's chest is very elliptical, having an anterior-posterior dimension of less than 45% of the transverse dimension. The smaller the distance from the left anterior descending artery to the anterior chest wall, the lower the chances of completing the procedure robotically.

  7. Cost-Effectiveness and Cost-Utility of Internet-Based Computer Tailoring for Smoking Cessation

    PubMed Central

    Evers, Silvia MAA; de Vries, Hein; Hoving, Ciska

    2013-01-01

    Background Although effective smoking cessation interventions exist, information is limited about their cost-effectiveness and cost-utility. Objective To assess the cost-effectiveness and cost-utility of an Internet-based multiple computer-tailored smoking cessation program and tailored counseling by practice nurses working in Dutch general practices compared with an Internet-based multiple computer-tailored program only and care as usual. Methods The economic evaluation was embedded in a randomized controlled trial, for which 91 practice nurses recruited 414 eligible smokers. Smokers were randomized to receive multiple tailoring and counseling (n=163), multiple tailoring only (n=132), or usual care (n=119). Self-reported cost and quality of life were assessed during a 12-month follow-up period. Prolonged abstinence and 24-hour and 7-day point prevalence abstinence were assessed at 12-month follow-up. The trial-based economic evaluation was conducted from a societal perspective. Uncertainty was accounted for by bootstrapping (1000 times) and sensitivity analyses. Results No significant differences were found between the intervention arms with regard to baseline characteristics or effects on abstinence, quality of life, and addiction level. However, participants in the multiple tailoring and counseling group reported significantly more annual health care–related costs than participants in the usual care group. Cost-effectiveness analysis, using prolonged abstinence as the outcome measure, showed that the mere multiple computer-tailored program had the highest probability of being cost-effective. Compared with usual care, in this group €5100 had to be paid for each additional abstinent participant. With regard to cost-utility analyses, using quality of life as the outcome measure, usual care was probably most efficient. Conclusions To our knowledge, this was the first study to determine the cost-effectiveness and cost-utility of an Internet-based smoking cessation program with and without counseling by a practice nurse. Although the Internet-based multiple computer-tailored program seemed to be the most cost-effective treatment, the cost-utility was probably highest for care as usual. However, to ease the interpretation of cost-effectiveness results, future research should aim at identifying an acceptable cutoff point for the willingness to pay per abstinent participant. PMID:23491820
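
    The uncertainty handling mentioned above (1000 bootstrap resamples of costs and effects) can be illustrated with a minimal sketch of a bootstrapped incremental cost-effectiveness ratio (ICER). All numbers below (cost distributions, abstinence rates, sample sizes) are hypothetical placeholders, not data from the trial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant data: annual societal costs (euros) and
# prolonged-abstinence indicators for two study arms.
cost_tailoring = rng.gamma(shape=2.0, scale=900.0, size=132)
abst_tailoring = rng.binomial(1, 0.12, size=132)
cost_usual = rng.gamma(shape=2.0, scale=800.0, size=119)
abst_usual = rng.binomial(1, 0.08, size=119)

def icer(c1, a1, c0, a0):
    """Incremental cost-effectiveness ratio: extra cost per extra abstainer."""
    return (c1.mean() - c0.mean()) / (a1.mean() - a0.mean())

# Non-parametric bootstrap (1000 resamples) of the ICER, as commonly done
# in trial-based economic evaluations.
boot = []
for _ in range(1000):
    i1 = rng.integers(0, len(cost_tailoring), len(cost_tailoring))
    i0 = rng.integers(0, len(cost_usual), len(cost_usual))
    boot.append(icer(cost_tailoring[i1], abst_tailoring[i1],
                     cost_usual[i0], abst_usual[i0]))

# Drop resamples where the effect difference happened to be zero.
boot = np.array(boot)
boot = boot[np.isfinite(boot)]

print("point ICER:", icer(cost_tailoring, abst_tailoring, cost_usual, abst_usual))
print("bootstrap 2.5-97.5 percentiles:", np.percentile(boot, [2.5, 97.5]))
```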

  8. Trajectory and chirality of vortex domain walls in ferromagnetic nanowires with an asymmetric Y-branch

    NASA Astrophysics Data System (ADS)

    Brandão, J.; Mello, A.; Garcia, F.; Sampaio, L. C.

    2017-03-01

    The motion and trajectory of vortex domain walls (VDWs) driven by a magnetic field were investigated in Fe80Ni20 nanowires with an asymmetric Y-shaped branch. Using the focused magneto-optical Kerr effect, we probed the injection, pinning, and propagation of VDWs in the branch and in the wire beyond the branch entrance. Hysteresis cycles measured at these points show 3 and 4 jumps in the magnetization reversal, respectively. Micromagnetic simulations were carried out to obtain the number of jumps in the hysteresis cycles and the magnetization process involved in each jump. Based on the simulations and on the size of the jumps in the measured hysteresis cycles, we obtain a histogram of the probability of each domain wall type. While in the branch domain walls of different types are equiprobable, in the nanowire vortex domain walls with counterclockwise and clockwise chiralities and transverse-down domain walls are measured with probabilities of 65%, 25%, and 10%, respectively. These results provide an additional route to select the trajectory and chirality of VDWs in magnetic nanostructures.

  9. Thermodynamics and the structure of quantum theory

    NASA Astrophysics Data System (ADS)

    Krumm, Marius; Barnum, Howard; Barrett, Jonathan; Müller, Markus P.

    2017-04-01

    Despite its enormous empirical success, the formalism of quantum theory still raises fundamental questions: why is nature described in terms of complex Hilbert spaces, and what modifications of it could we reasonably expect to find in some regimes of physics? Here we address these questions by studying how compatibility with thermodynamics constrains the structure of quantum theory. We employ two postulates that any probabilistic theory with reasonable thermodynamic behaviour should arguably satisfy. In the framework of generalised probabilistic theories, we show that these postulates already imply important aspects of quantum theory, like self-duality and analogues of projective measurements, subspaces and eigenvalues. However, they may still admit a class of theories beyond quantum mechanics. Using a thought experiment by von Neumann, we show that these theories admit a consistent thermodynamic notion of entropy, and prove that the second law holds for projective measurements and mixing procedures. Furthermore, we study additional entropy-like quantities based on measurement probabilities and convex decomposition probabilities, and uncover a relation between one of these quantities and Sorkin’s notion of higher-order interference.

  10. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.
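
    As a minimal illustration of the first step described above, constructing an MSM from discretized simulation trajectories before any experimental refinement, the sketch below counts lagged transitions, row-normalizes them into a transition matrix, and extracts equilibrium populations. The three-state trajectory is synthetic and purely illustrative.

```python
import numpy as np

def estimate_msm(trajectory, n_states, lag=1):
    """Count lagged transitions in a discrete state trajectory and
    row-normalize to obtain a Markov state model transition matrix."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(trajectory[:-lag], trajectory[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def equilibrium_populations(T):
    """Stationary distribution: left eigenvector of T with eigenvalue 1."""
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Hypothetical 3-state trajectory (e.g., clustered MD frames).
rng = np.random.default_rng(1)
traj = rng.choice(3, size=10000, p=[0.5, 0.3, 0.2])

T = estimate_msm(traj, n_states=3)
print("transition matrix:\n", np.round(T, 3))
print("equilibrium populations:", np.round(equilibrium_populations(T), 3))
```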

  11. Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation.

    PubMed

    McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias

    2016-05-01

    Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. We investigated 69 healthy subjects polysomnographically (mean ± standard deviation 40 ± 13 years, range 18-68 years, 32 male) in this randomized, balanced, double-blind, repeated-measures laboratory study. The study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to comparable exposure nights (4 exposure nights with 80 noise events per night), 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by inter-individual differences that cannot be attributed to age, gender, or specific study design aspects. It will be important to identify those at higher risk for noise-induced sleep disturbance. Furthermore, the practice of basing noise policy and legislation on average responses should be re-assessed in light of these findings. © 2016 Associated Professional Sleep Societies, LLC.
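
    The variance decomposition reported above comes from mixed-effects models; in the simplest random-intercept case, the share of variance attributable to stable inter-individual differences is the intraclass correlation. A minimal sketch with simulated data (not the study's data), assuming a Gaussian response rather than the binary awakening outcome actually analyzed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical data: 69 subjects with repeated nightly responses, built from a
# stable subject-specific offset plus night-to-night noise.
n_subj, n_nights = 69, 8
subject = np.repeat(np.arange(n_subj), n_nights)
subject_effect = rng.normal(0, 1.0, n_subj)[subject]    # inter-individual part
night_noise = rng.normal(0, 1.2, n_subj * n_nights)      # intra-individual part
df = pd.DataFrame({"subject": subject,
                   "reaction": subject_effect + night_noise})

# Random-intercept model; the intraclass correlation is the share of total
# variance attributable to stable differences between subjects.
fit = smf.mixedlm("reaction ~ 1", df, groups=df["subject"]).fit()
var_between = float(fit.cov_re.iloc[0, 0])
var_within = fit.scale
icc = var_between / (var_between + var_within)
print(f"share of variance from inter-individual differences: {icc:.1%}")
```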

  12. Footprint radius of a cosmic-ray neutron probe for measuring soil-water content and its spatiotemporal variability in an alpine meadow ecosystem

    NASA Astrophysics Data System (ADS)

    Zhu, Xuchao; Cao, Ruixue; Shao, Mingan; Liang, Yin

    2018-03-01

    Cosmic-ray neutron probes (CRNPs) have footprint radii for measuring soil-water content (SWC). The theoretical radius is much larger at high altitude, such as the northern Tibetan Plateau, than the radius at sea level. The most probable practical radius of CRNPs for the northern Tibetan Plateau, however, is not known due to the lack of SWC data in this hostile environment. We calculated the theoretical footprint of the CRNP based on a recent simulation and analyzed the practical radius of a CRNP for the northern Tibetan Plateau by measuring SWC at 113 sampling locations on 21 measuring occasions to a depth of 30 cm in a 33.5 ha plot in an alpine meadow at 4600 m a.s.l. The temporal variability and spatial heterogeneity of SWC within the footprint were then analyzed. The theoretical footprint radius was between 360 and 420 m after accounting for the influences of air humidity, soil moisture, vegetation and air pressure. A comparison of SWCs measured by the CRNP and a neutron probe from access tubes in circles with different radii conservatively indicated that the most probable experimental footprint radius was >200 m. SWC within the CRNP footprint was moderately variable over both time and space, but the temporal variability was higher. Spatial heterogeneity was weak, but should be considered in future CRNP calibrations. This study provided theoretical and practical bases for the application and promotion of CRNPs in alpine meadows on the Tibetan Plateau.

  13. Neighborhood social capital is associated with participation in health checks of a general population: a multilevel analysis of a population-based lifestyle intervention- the Inter99 study.

    PubMed

    Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta

    2015-07-22

    Participation in population-based preventive health checks has declined over the past decades. More research is needed to determine factors enhancing participation. The objective of this study was to examine the association of two measures of neighborhood-level social capital with participation in the health check phase of a population-based lifestyle intervention. The study population comprised 12,568 residents of 73 Danish neighborhoods in the intervention group of a large population-based lifestyle intervention study, the Inter99. Two measures of social capital were applied: informal socializing and voting turnout. In a multilevel analysis adjusting only for age and sex, a higher level of neighborhood social capital was associated with a higher probability of participating in the health check. Inclusion of both individual socioeconomic position and neighborhood deprivation in the model attenuated the coefficients for informal socializing, while voting turnout became non-significant. A higher level of neighborhood social capital was associated with a higher probability of participating in the health check phase of a population-based lifestyle intervention. Most of the association between neighborhood social capital and participation in preventive health checks can be explained by differences in individual socioeconomic position and level of neighborhood deprivation. Nonetheless, there seems to be some residual association between social capital and health check participation, suggesting that activating social relations in the community may be an avenue for boosting participation rates in population-based health checks. ClinicalTrials.gov (registration no. NCT00289237).

  14. Average fidelity between random quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zyczkowski, Karol; Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Aleja Lotnikow 32/44, 02-668 Warsaw; Perimeter Institute, Waterloo, Ontario, N2L 2Y5

    2005-03-01

    We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: the Hilbert-Schmidt measure, the Bures (statistical) measure, the measure induced by the partial trace, and the natural measure on the space of pure states. In certain cases explicit probability distributions for the fidelity are derived. The results obtained may be used to gauge the quality of quantum-information-processing schemes.
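
    As a numerical companion to the quantity being averaged, the sketch below draws mixed states from the Hilbert-Schmidt measure (one of the ensembles considered above) and evaluates the Uhlmann fidelity F(ρ,σ) = (Tr √(√ρ σ √ρ))². The dimension and sample size are arbitrary choices.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)

def random_density_matrix(n):
    """Hilbert-Schmidt-distributed mixed state: rho = G G† / Tr(G G†),
    with G a complex Ginibre matrix."""
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))**2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

n = 4
samples = [fidelity(random_density_matrix(n), random_density_matrix(n))
           for _ in range(1000)]
print("mean fidelity for N=4 Hilbert-Schmidt random states:", np.mean(samples))
```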

  15. Quantum probability and quantum decision-making.

    PubMed

    Yukalov, V I; Sornette, D

    2016-01-13

    A rigorous general definition of quantum probability is given, which is valid not only for elementary events but also for composite events, for operationally testable measurements as well as for inconclusive measurements, and also for non-commuting observables in addition to commutative observables. Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical footing. Conditions are formulated for the case when quantum decision theory reduces to its classical counterpart and for the situation where the use of quantum decision theory is necessary. © 2015 The Author(s).

  16. Quantum epistemology from subquantum ontology: Quantum mechanics from theory of classical random fields

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2017-02-01

    The scientific methodology based on two descriptive levels, ontic (reality as it is) and epistemic (observational), is briefly presented. Following Schrödinger, we point to the possible gap between these two descriptions. Our main aim is to show that, although ontic entities may be inaccessible to observation, they can be useful for clarifying the physical nature of operational epistemic entities. We illustrate this thesis with a concrete example: starting with a concrete ontic model preceding quantum mechanics (the latter is treated as an epistemic model), namely prequantum classical statistical field theory (PCSFT), we propose a natural physical interpretation for the basic quantum mechanical entity, the quantum state ("wave function"). The correspondence PCSFT ↦ QM is not straightforward: it couples the covariance operators of classical (prequantum) random fields with the quantum density operators. We use this correspondence to clarify the physical meaning of the pure quantum state and the superposition principle by using the formalism of classical field correlations. In classical mechanics the phase space description can be considered as the ontic description; here states are given by points λ = (x, p) of phase space. The dynamics of the ontic state is given by the system of Hamiltonian equations. We can also consider probability distributions on the phase space (or, equivalently, random variables valued in it). We call them probabilistic ontic states. The dynamics of probabilistic ontic states is given by the Liouville equation. In classical physics we can (at least in principle) measure both the coordinate and the momentum, and hence ontic states can be treated as epistemic states as well (or, better, epistemic states can here be treated as ontic states). Probabilistic ontic states represent probabilities for outcomes of the joint measurement of position and momentum. However, this was a very special, although very important, example of the description of physical phenomena. In general there is no reason to expect that properties of ontic states are approachable through our measurements. There is a gap between ontic and epistemic descriptions, cf. also 't Hooft [49,50] and G. Groessing et al. [51]. In general the presence of such a gap also implies the unapproachability of the probabilistic ontic states, i.e., probability distributions on the space of ontic states. De Broglie [28] called such probability distributions hidden probabilities and distinguished them sharply from probability distributions of measurement outcomes, see also Lochak [29]. (The latter distributions are described by the quantum formalism.) This ontic-epistemic approach, based on the combination of two descriptive levels for natural phenomena, is closely related to the old Bild conception which originated in the works of Hertz. Later it was heavily explored by Schrödinger in the quantum domain, see, e.g., [8,11] for a detailed analysis. According to Hertz, one cannot expect to construct a complete theoretical model based explicitly on observable quantities. The complete theoretical model can contain quantities which are unapproachable to external measurement inspection. For example, Hertz, in trying to create a mechanical model for Maxwell's electromagnetism, invented hidden masses. The main distinguishing property of a theoretical model (in contrast to an observational model) is the continuity of description, i.e., the absence of gaps in the description.
From this viewpoint, the quantum mechanical description is not continuous: there is a gap between premeasurement dynamics and the measurement outcome. QM cannot say anything about what happens in the process of measurement; this is the well-known measurement problem of QM [32], cf. [52,53]. Continuity of description is closely related to causality. However, here we cannot go into more detail, see [8,11]. The important question concerns the interrelation between the two levels of description, ontic-epistemic (or theoretical-observational). In the introduction we have already cited Schrödinger, who emphasized the possible complexity of this interrelation. In particular, in general there is no reason to expect a straightforward coupling of the form, cf. [9,10]:

  17. Floods in Central Texas, September 7-14, 2010

    USGS Publications Warehouse

    Winters, Karl E.

    2012-01-01

    Severe flooding occurred near the Austin metropolitan area in central Texas September 7–14, 2010, because of heavy rainfall associated with Tropical Storm Hermine. The U.S. Geological Survey, in cooperation with the Upper Brushy Creek Water Control and Improvement District, determined rainfall amounts and annual exceedance probabilities for rainfall resulting in flooding in Bell, Williamson, and Travis counties in central Texas during September 2010. We documented peak streamflows and the annual exceedance probabilities for peak streamflows recorded at several streamflow-gaging stations in the study area. The 24-hour rainfall total exceeded 12 inches at some locations, with one report of 14.57 inches at Lake Georgetown. Rainfall probabilities were estimated using previously published depth-duration frequency maps for Texas. At 4 sites in Williamson County, the 24-hour rainfall had an annual exceedance probability of 0.002. Streamflow measurement data and flood-peak data from U.S. Geological Survey surface-water monitoring stations (streamflow and reservoir gaging stations) are presented, along with a comparison of September 2010 flood peaks to previous known maximums in the periods of record. Annual exceedance probabilities for peak streamflow were computed for 20 streamflow-gaging stations based on an analysis of streamflow-gaging station records. The annual exceedance probability was 0.03 for the September 2010 peak streamflow at the Geological Survey's streamflow-gaging stations 08104700 North Fork San Gabriel River near Georgetown, Texas, and 08154700 Bull Creek at Loop 360 near Austin, Texas. The annual exceedance probability was 0.02 for the peak streamflow for Geological Survey's streamflow-gaging station 08104500 Little River near Little River, Texas. The lack of similarity in the annual exceedance probabilities computed for precipitation and streamflow might be attributed to the small areal extent of the heaviest rainfall over these and the other gaged watersheds.

  18. The STOP-Bang Equivalent Model and Prediction of Severity of Obstructive Sleep Apnea: Relation to Polysomnographic Measurements of the Apnea/Hypopnea Index

    PubMed Central

    Farney, Robert J.; Walker, Brandon S.; Farney, Robert M.; Snow, Gregory L.; Walker, James M.

    2011-01-01

    Background: Various models and questionnaires have been developed for screening specific populations for obstructive sleep apnea (OSA) as defined by the apnea/hypopnea index (AHI); however, almost every method is based upon dichotomizing a population, and none function ideally. We evaluated the possibility of using the STOP-Bang model (SBM) to classify severity of OSA into 4 categories ranging from none to severe. Methods: Anthropometric data and the presence of snoring, tiredness/sleepiness, observed apneas, and hypertension were collected from 1426 patients who underwent diagnostic polysomnography. Questionnaire data for each patient were converted to the STOP-Bang equivalent with an ordinal rating of 0 to 8. Proportional odds logistic regression analysis was conducted to predict severity of sleep apnea based upon the AHI: none (AHI < 5/h), mild (AHI ≥ 5 to < 15/h), moderate (AHI ≥ 15 to < 30/h), and severe (AHI ≥ 30/h). Results: Linear, curvilinear, and weighted models (R2 = 0.245, 0.251, and 0.269, respectively) were developed that predicted AHI severity. The linear model showed a progressive increase in the probability of severe OSA (4.4% to 81.9%) and a progressive decrease in the probability of none (52.5% to 1.1%). The probability of mild or moderate OSA initially increased from 32.9% and 10.3% respectively (SBM score 0) to 39.3% (SBM score 2) and 31.8% (SBM score 4), after which there was a progressive decrease in probabilities as more patients fell into the severe category. Conclusions: The STOP-Bang model may be useful to categorize OSA severity, triage patients for diagnostic evaluation, or exclude patients from harm. Citation: Farney RJ; Walker BS; Farney RM; Snow GL; Walker JM. The STOP-Bang equivalent model and prediction of severity of obstructive sleep apnea: relation to polysomnographic measurements of the apnea/hypopnea index. J Clin Sleep Med 2011;7(5):459-465. PMID:22003340
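
    The proportional odds model described above yields, for every STOP-Bang score, a full set of probabilities over the four AHI categories rather than a single dichotomy. The sketch below shows how such category probabilities follow from cumulative logits; the cutpoints and slope are invented placeholders, not the coefficients fitted in the study.

```python
import numpy as np

def ordinal_logit_probs(score, cutpoints, beta):
    """Proportional-odds model: P(Y <= k | score) = logistic(c_k - beta*score).
    Category probabilities are differences of adjacent cumulative probabilities."""
    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))
    cum = np.array([logistic(c - beta * score) for c in cutpoints] + [1.0])
    return np.diff(np.concatenate(([0.0], cum)))  # P(none), P(mild), P(moderate), P(severe)

# Hypothetical coefficients (not the fitted values from the study):
cutpoints = [-0.1, 1.4, 2.8]   # thresholds between the 4 AHI categories
beta = 0.55                    # effect of one STOP-Bang point

for s in range(9):
    print(s, np.round(ordinal_logit_probs(s, cutpoints, beta), 3))
```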

  19. Effect of annealing on the laser induced damage of polished and CO2 laser-processed fused silica surfaces

    NASA Astrophysics Data System (ADS)

    Doualle, T.; Gallais, L.; Cormont, P.; Donval, T.; Lamaignère, L.; Rullier, J. L.

    2016-06-01

    We investigate the effect of different heat treatments on the laser-induced damage probabilities of fused silica samples. Isothermal annealing in a furnace is applied, with different temperatures in the range 700-1100 °C and a 12 h annealing time, to super-polished fused silica samples. The surface flatness and laser damage probabilities at 3 ns, 351 nm are measured before and after the different annealing procedures. We have found a significant improvement of the initial laser damage probabilities of the silica surface after annealing at 1050 °C for 12 h. A similar study has been conducted on CO2 laser-processed sites on the surface of the samples. Before and after annealing, we have studied the morphology of the sites, the evolution of residual stress, and the laser-induced damage threshold measured at 351 nm, 3 ns. In this case, we observe that the laser damage resistance of the laser-created craters can reach the damage level of the bare fused silica surface after the annealing process, with complete stress relief. The obtained results are then compared to the case of a local annealing process by CO2 laser irradiation during 1 s, and we find similar improvements in both cases. The different results obtained in the study are compared to numerical simulations made with a thermo-mechanical model based on the finite-element method that allows the simulation of the isothermal or the local annealing process, and of the evolution of stress and fictive temperature. The simulation results were found to be very consistent with the experimental observations for the stress evolution after annealing and for the estimation of the heat-affected area during laser processing, based on the density dependence on fictive temperature. Following this work, the temperature for local annealing should reach 1330-1470 °C for an optimized reduction of damage probability and remain below the threshold for material removal, whereas furnace annealing should be kept below the annealing point to avoid sample deformation.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doualle, T.; Gallais, L., E-mail: laurent.gallais@fresnel.fr; Cormont, P.

    We investigate the effect of different heat treatments on the laser-induced damage probabilities of fused silica samples. Isothermal annealing in a furnace is applied, with different temperatures in the range 700–1100 °C and a 12 h annealing time, to super-polished fused silica samples. The surface flatness and laser damage probabilities at 3 ns, 351 nm are measured before and after the different annealing procedures. We have found a significant improvement of the initial laser damage probabilities of the silica surface after annealing at 1050 °C for 12 h. A similar study has been conducted on CO2 laser-processed sites on the surface of the samples. Before and after annealing, we have studied the morphology of the sites, the evolution of residual stress, and the laser-induced damage threshold measured at 351 nm, 3 ns. In this case, we observe that the laser damage resistance of the laser-created craters can reach the damage level of the bare fused silica surface after the annealing process, with complete stress relief. The obtained results are then compared to the case of a local annealing process by CO2 laser irradiation during 1 s, and we find similar improvements in both cases. The different results obtained in the study are compared to numerical simulations made with a thermo-mechanical model based on the finite-element method that allows the simulation of the isothermal or the local annealing process, and of the evolution of stress and fictive temperature. The simulation results were found to be very consistent with the experimental observations for the stress evolution after annealing and for the estimation of the heat-affected area during laser processing, based on the density dependence on fictive temperature. Following this work, the temperature for local annealing should reach 1330–1470 °C for an optimized reduction of damage probability and remain below the threshold for material removal, whereas furnace annealing should be kept below the annealing point to avoid sample deformation.

  1. How enhanced molecular ions in Cold EI improve compound identification by the NIST library.

    PubMed

    Alon, Tal; Amirav, Aviv

    2015-12-15

    Library-based compound identification with electron ionization (EI) mass spectrometry (MS) is a well-established identification method which provides the names and structures of sample compounds up to the isomer level. The library (such as NIST) search algorithm compares different EI mass spectra in the library's database with the measured EI mass spectrum, assigning each of them a similarity score called 'Match' and an overall identification probability. Cold EI, electron ionization of vibrationally cold molecules in supersonic molecular beams, provides mass spectra with all the standard EI fragment ions combined with enhanced molecular ions and high-mass fragments. As a result, Cold EI mass spectra differ from those provided by standard EI and tend to yield lower matching scores. However, in most cases, library identification actually improves with Cold EI, as library identification probabilities for the correct library mass spectra increase, despite the lower matching factors. This research examined the way that enhanced molecular ion abundances affect library identification probability and the way that Cold EI mass spectra, which include enhanced molecular ions and high-mass fragment ions, typically improve library identification results. It involved several computer simulations, which incrementally modified the relative abundances of the various ions and analyzed the resulting mass spectra. The simulation results support previous measurements, showing that while enhanced molecular ion and high-mass fragment ions lower the matching factor of the correct library compound, the matching factors of the incorrect library candidates are lowered even more, resulting in a rise in the identification probability for the correct compound. This behavior, which was previously observed by analyzing Cold EI mass spectra, can be explained by the fact that high-mass ions, and especially the molecular ion, characterize a compound more than low-mass ions and therefore carry more weight in library search identification algorithms. These ions are uniquely abundant in Cold EI, which therefore enables enhanced compound characterization along with improved NIST library-based identification. Copyright © 2015 John Wiley & Sons, Ltd.
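
    The effect described above, that up-weighting the molecular ion and other high-m/z ions penalizes incorrect candidates more than the correct entry, can be illustrated with a generic weighted-cosine spectral similarity. The weighting exponents and the three spectra below are hypothetical and do not reproduce the actual NIST search algorithm.

```python
import numpy as np

def weighted_cosine(spec_a, spec_b, mz_power=1.0, int_power=0.5):
    """Generic spectral similarity: cosine of m/z-weighted, intensity-scaled peaks.
    Larger mz_power gives high-mass ions (e.g., the molecular ion) more weight;
    the exponents are illustrative, not the NIST algorithm's exact values."""
    mz = np.union1d(list(spec_a), list(spec_b))
    wa = np.array([spec_a.get(m, 0.0) ** int_power * m ** mz_power for m in mz])
    wb = np.array([spec_b.get(m, 0.0) ** int_power * m ** mz_power for m in mz])
    return float(wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb)))

# Hypothetical spectra as {m/z: relative intensity} dictionaries.
cold_ei = {43: 80, 57: 55, 71: 35, 114: 60}       # enhanced molecular ion at m/z 114
library = {43: 100, 57: 60, 71: 30, 114: 12}      # correct standard-EI library entry
wrong = {43: 95, 57: 65, 71: 25, 98: 40}          # incorrect candidate, no m/z 114

for mzp in (0.0, 2.0):
    correct_score = weighted_cosine(cold_ei, library, mz_power=mzp)
    wrong_score = weighted_cosine(cold_ei, wrong, mz_power=mzp)
    print(f"mz_power={mzp}: correct={correct_score:.3f}, wrong={wrong_score:.3f}")
# Up-weighting high m/z lowers the correct match slightly but the wrong one far more.
```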

  2. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, situated on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
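
    A condensed sketch of the fitting-and-return-period workflow described above, simplified to selection by the Kolmogorov-Smirnov statistic alone and run on synthetic annual-maximum data rather than the Sagar Island series:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical series of annual maximum daily rainfall (mm), 29 years.
mdr = rng.lognormal(mean=4.0, sigma=0.35, size=29)

# Candidate distributions (Pearson type 5 is the inverse gamma).
candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "weibull": stats.weibull_min, "pearson5": stats.invgamma}

best_name, best_frozen, best_ks = None, None, np.inf
for name, dist in candidates.items():
    params = dist.fit(mdr)
    ks = stats.kstest(mdr, dist.cdf, args=params).statistic
    if ks < best_ks:
        best_name, best_frozen, best_ks = name, dist(*params), ks

print("best fit by K-S statistic:", best_name)

# Return levels: the T-year event is the quantile with exceedance probability 1/T.
for T in (2, 5, 10, 20, 25):
    print(f"{T:>2}-yr MDR ~ {best_frozen.ppf(1 - 1 / T):.0f} mm")
```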

  3. Studying the effects of fuel treatment based on burn probability on a boreal forest landscape.

    PubMed

    Liu, Zhihua; Yang, Jian; He, Hong S

    2013-01-30

    Fuel treatment is assumed to be a primary tactic to mitigate intense and damaging wildfires. However, deciding how to place treatment units across a landscape and assessing their effectiveness are difficult problems in landscape-scale fuel management planning. In this study, we used a spatially explicit simulation model (LANDIS) to conduct wildfire risk assessments and optimize the placement of fuel treatments at the landscape scale. We first calculated a baseline burn probability map from empirical data (fuel, topography, weather, and fire ignition and size data) to assess fire risk. We then prioritized landscape-scale fuel treatment based on maps of burn probability and fuel loads (calculated from the interactions among tree composition, stand age, and disturbance history), and compared their effects on reducing fire risk. The burn probability map described the likelihood of burning at a given location; the fuel load map described the probability that a high fuel load will accumulate at a given location. Fuel treatment based on the burn probability map specified that stands with high burn probability be treated first, while fuel treatment based on the fuel load map specified that stands with high fuel loads be treated first. Our results indicated that fuel treatment based on burn probability greatly reduced the burned area and the number of fires of different intensities. Fuel treatment based on burn probability also produced more dispersed and smaller high-risk fire patches and can therefore improve the efficiency of subsequent fire suppression. The strength of our approach is that more model components (e.g., succession, fuel, and harvest) can be linked into LANDIS to map spatially explicit wildfire risk and its dynamics in response to fuel management, vegetation dynamics, and harvesting. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. A statistical method for measuring activation of gene regulatory networks.

    PubMed

    Esteves, Gustavo H; Reis, Luiz F L

    2018-06-13

    Gene expression data analysis is of great importance for modern molecular biology, given our ability to measure the expression profiles of thousands of genes and to enable studies rooted in systems biology. In this work, we propose a simple statistical model for measuring the activation of gene regulatory networks, instead of the traditional gene co-expression networks. We present the mathematical construction of a statistical procedure for testing hypotheses regarding gene regulatory network activation. The real probability distribution for the test statistic is evaluated by a permutation-based study. To illustrate the functionality of the proposed methodology, we also present a simple example based on a small hypothetical network and the activation measurement of two KEGG networks, both based on gene expression data collected from gastric and esophageal samples. The two KEGG networks were also analyzed using a public database, available through NCBI-GEO, presented as Supplementary Material. This method was implemented in an R package that is available at the BioConductor project website under the name maigesPack.

  5. How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels

    NASA Astrophysics Data System (ADS)

    Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.

    2016-12-01

    The knowledge of extreme coastal water levels is useful for coastal flooding studies or the design of coastal defences. When deriving such extremes with standard analyses using tide gauge measurements, one often has to deal with a limited effective duration of observation, which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information on past events reported in archives can reduce statistical uncertainties and relativize such outlying observations. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier, and we could have reasonably predicted the annual exceedance probability of that level beforehand (the predictive probability for 2010 based on data until the end of 2009 was of the same order of magnitude as the standard estimate using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method for integrating heterogeneous data in such analyses.

  6. Quantitative neonatal glucose-6-phosphate dehydrogenase screening: distribution, reference values, and classification by phenotype.

    PubMed

    Algur, Nurit; Avraham, Irit; Hammerman, Cathy; Kaplan, Michael

    2012-08-01

    To determine enzyme assay reference values for newborns in a Sephardic Jewish population at high risk for glucose-6-phosphate dehydrogenase (G6PD) deficiency. Quantitative G6PD testing was performed on umbilical cord blood. The reduction of nicotinamide adenine dinucleotide phosphate to nicotinamide adenine dinucleotide phosphate-oxidase, reflecting G6PD activity, was measured spectrophotometrically. Hemoglobin (Hb) was measured on the same sample. G6PD activity was recorded as U/g Hb. Males (N = 1502) were separated into 2 distinct groups: those <7 U/g Hb (n = 243 [16.2%], median 0.28 U/g Hb), designated G6PD deficient, presumably hemizygotes; and those ≥ 9 U/g Hb (n = 1256 [83.8%], 18.76 U/g Hb), designated G6PD normal, presumably hemizygotes. Female (n = 1298) values were a continuum and were categorized based on the male distribution: those <7 U/g Hb (n = 81 [6.2%], 4.84 U/g Hb), G6PD deficient, probably homozygotes; those ≥ 9.5 U/g Hb, equivalent to 50% of the male normal value (n = 1153 [88.8%], 18.36 U/g Hb), G6PD normal, probably homozygotes; and those with intermediate values (n = 64 [4.9%], 8.61 U/g Hb), probable heterozygotes. Accurate identification of the male G6PD-deficient state was possible despite high normal neonatal G6PD values. Female values were presented as a continuum, preventing accurate classification, but were classified based on the male phenotype for practical use. Copyright © 2012 Mosby, Inc. All rights reserved.
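
    The cutoffs reported above translate directly into a simple classification rule. The sketch below encodes them; the label for males falling between 7 and 9 U/g Hb (a range essentially unpopulated in the study) is an assumption added for completeness.

```python
def classify_g6pd(activity_u_per_g_hb, sex):
    """Phenotype classification of cord-blood G6PD activity (U/g Hb) using the
    cutoffs in the abstract: <7 deficient; >=9 (males) or >=9.5 (females,
    about 50% of the normal male median) normal; otherwise intermediate."""
    if activity_u_per_g_hb < 7:
        return "deficient"
    if sex == "male":
        # The 7-9 U/g Hb band was essentially empty in males; this label is
        # a placeholder rather than a category defined in the study.
        return "normal" if activity_u_per_g_hb >= 9 else "indeterminate"
    if activity_u_per_g_hb >= 9.5:
        return "normal"
    return "intermediate (probable heterozygote)"

for value, sex in [(0.3, "male"), (18.8, "male"), (4.8, "female"),
                   (8.6, "female"), (18.4, "female")]:
    print(sex, value, "->", classify_g6pd(value, sex))
```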

  7. Probability of cavitation for single ultrasound pulses applied to tissues and tissue-mimicking materials.

    PubMed

    Maxwell, Adam D; Cain, Charles A; Hall, Timothy L; Fowlkes, J Brian; Xu, Zhen

    2013-03-01

    In this study, the negative pressure values at which inertial cavitation consistently occurs in response to a single, two-cycle, focused ultrasound pulse were measured in several media relevant to cavitation-based ultrasound therapy. The pulse was focused into a chamber containing one of the media, which included liquids, tissue-mimicking materials, and ex vivo canine tissue. Focal waveforms were measured by two separate techniques using a fiber-optic hydrophone. Inertial cavitation was identified by high-speed photography in optically transparent media and an acoustic passive cavitation detector. The probability of cavitation (P(cav)) for a single pulse as a function of peak negative pressure (p(-)) followed a sigmoid curve, with the probability approaching one when the pressure amplitude was sufficient. The statistical threshold (defined as P(cav) = 0.5) was between p(-) = 26 and 30 MPa in all samples with high water content but varied between p(-) = 13.7 and >36 MPa in other media. A model for radial cavitation bubble dynamics was employed to evaluate the behavior of cavitation nuclei at these pressure levels. A single bubble nucleus with an inertial cavitation threshold of p(-) = 28.2 MPa was estimated to have a 2.5 nm radius in distilled water. These data may be valuable for cavitation-based ultrasound therapy to predict the likelihood of cavitation at various pressure levels and dimensions of cavitation-induced lesions in tissue. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
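
    The sigmoid dependence of cavitation probability on peak negative pressure described above can be fitted to recover the statistical threshold (P(cav) = 0.5). The observed fractions below are made-up illustrative values, not the reported measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(p_neg, p50, k):
    """Probability of inertial cavitation vs. peak negative pressure (MPa)."""
    return 1.0 / (1.0 + np.exp(-(p_neg - p50) / k))

# Hypothetical observed cavitation fractions at several pressure levels.
p_neg = np.array([20, 22, 24, 26, 28, 30, 32, 34], dtype=float)
p_cav = np.array([0.0, 0.05, 0.15, 0.40, 0.65, 0.90, 0.97, 1.0])

(p50, k), _ = curve_fit(sigmoid, p_neg, p_cav, p0=[27.0, 1.5])
print(f"statistical threshold P(cav) = 0.5 at p(-) ~ {p50:.1f} MPa "
      f"(slope scale {k:.2f} MPa)")
```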

  8. Beyond the swab: ecosystem sampling to understand the persistence of an amphibian pathogen.

    PubMed

    Mosher, Brittany A; Huyvaert, Kathryn P; Bailey, Larissa L

    2018-06-02

    Understanding the ecosystem-level persistence of pathogens is essential for predicting and measuring host-pathogen dynamics. However, this process is often masked, in part due to a reliance on host-based pathogen detection methods. The amphibian pathogens Batrachochytrium dendrobatidis (Bd) and B. salamandrivorans (Bsal) are pathogens of global conservation concern. Despite having free-living life stages, little is known about the distribution and persistence of these pathogens outside of their amphibian hosts. We combine historic amphibian monitoring data with contemporary host- and environment-based pathogen detection data to obtain estimates of Bd occurrence independent of amphibian host distributions. We also evaluate differences in filter- and swab-based detection probability and assess inferential differences arising from using different decision criteria used to classify samples as positive or negative. Water filtration-based detection probabilities were lower than those from swabs but were > 10%, and swab-based detection probabilities varied seasonally, declining in the early fall. The decision criterion used to classify samples as positive or negative was important; using a more liberal criterion yielded higher estimates of Bd occurrence than when a conservative criterion was used. Different covariates were important when using the liberal or conservative criterion in modeling Bd detection. We found evidence of long-term Bd persistence for several years after an amphibian host species of conservation concern, the boreal toad (Anaxyrus boreas boreas), was last detected. Our work provides evidence of long-term Bd persistence in the ecosystem, and underscores the importance of environmental samples for understanding and mitigating disease-related threats to amphibian biodiversity.

  9. Quantitative consensus of supervised learners for diffuse lung parenchymal HRCT patterns

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2013-03-01

    Automated lung parenchymal classification usually relies on supervised learning of expert-chosen regions representative of the visually differentiable HRCT patterns specific to different pathologies (e.g., emphysema, ground glass, honeycombing, reticular, and normal). Considering the elusiveness of a single most discriminating similarity measure, a plurality of weak learners can be combined to improve machine learnability. Though a number of quantitative combination strategies exist, their efficacy is data and domain dependent. In this paper, we investigate multiple (N=12) quantitative consensus approaches to combine the clusters obtained with multiple (n=33) probability density-based similarity measures. Our study shows that hypergraph-based meta-clustering and probabilistic clustering provide optimal expert-metric agreement.

  10. The contribution of threat probability estimates to reexperiencing symptoms: a prospective analog study.

    PubMed

    Regambal, Marci J; Alden, Lynn E

    2012-09-01

    Individuals with posttraumatic stress disorder (PTSD) are hypothesized to have a "sense of current threat." Perceived threat from the environment (i.e., external threat), can lead to overestimating the probability of the traumatic event reoccurring (Ehlers & Clark, 2000). However, it is unclear if external threat judgments are a pre-existing vulnerability for PTSD or a consequence of trauma exposure. We used trauma analog methodology to prospectively measure probability estimates of a traumatic event, and investigate how these estimates were related to cognitive processes implicated in PTSD development. 151 participants estimated the probability of being in car-accident related situations, watched a movie of a car accident victim, and then completed a measure of data-driven processing during the movie. One week later, participants re-estimated the probabilities, and completed measures of reexperiencing symptoms and symptom appraisals/reactions. Path analysis revealed that higher pre-existing probability estimates predicted greater data-driven processing which was associated with negative appraisals and responses to intrusions. Furthermore, lower pre-existing probability estimates and negative responses to intrusions were both associated with a greater change in probability estimates. Reexperiencing symptoms were predicted by negative responses to intrusions and, to a lesser degree, by greater changes in probability estimates. The undergraduate student sample may not be representative of the general public. The reexperiencing symptoms are less severe than what would be found in a trauma sample. Threat estimates present both a vulnerability and a consequence of exposure to a distressing event. Furthermore, changes in these estimates are associated with cognitive processes implicated in PTSD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Quantifying Treatment Benefit in Molecular Subgroups to Assess a Predictive Biomarker.

    PubMed

    Iasonos, Alexia; Chapman, Paul B; Satagopan, Jaya M

    2016-05-01

    An increased interest has been expressed in finding predictive biomarkers that can guide treatment options for both mutation carriers and noncarriers. The statistical assessment of variation in treatment benefit (TB) according to the biomarker carrier status plays an important role in evaluating predictive biomarkers. For time-to-event endpoints, the hazard ratio (HR) for interaction between treatment and a biomarker from a proportional hazards regression model is commonly used as a measure of variation in TB. Although this can be easily obtained using available statistical software packages, the interpretation of HR is not straightforward. In this article, we propose different summary measures of variation in TB on the scale of survival probabilities for evaluating a predictive biomarker. The proposed summary measures can be easily interpreted as quantifying differential in TB in terms of relative risk or excess absolute risk due to treatment in carriers versus noncarriers. We illustrate the use and interpretation of the proposed measures with data from completed clinical trials. We encourage clinical practitioners to interpret variation in TB in terms of measures based on survival probabilities, particularly in terms of excess absolute risk, as opposed to HR. Clin Cancer Res; 22(9); 2114-20. ©2016 AACR. ©2016 American Association for Cancer Research.
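
    A minimal sketch of the kind of summary the authors advocate: expressing variation in treatment benefit on the survival-probability scale (relative and excess absolute benefit in carriers versus noncarriers) instead of an interaction hazard ratio. The survival probabilities below are illustrative, not trial data.

```python
def treatment_benefit(surv_treated, surv_control):
    """Absolute treatment benefit at a fixed time point: difference in
    survival probabilities between treated and control groups."""
    return surv_treated - surv_control

# Hypothetical 2-year survival probabilities by biomarker carrier status.
tb_carriers = treatment_benefit(surv_treated=0.70, surv_control=0.50)      # 0.20
tb_noncarriers = treatment_benefit(surv_treated=0.62, surv_control=0.55)   # 0.07

excess_absolute_benefit = tb_carriers - tb_noncarriers   # differential benefit
relative_benefit_ratio = tb_carriers / tb_noncarriers

print(f"TB in carriers:        {tb_carriers:.2f}")
print(f"TB in noncarriers:     {tb_noncarriers:.2f}")
print(f"excess absolute TB:    {excess_absolute_benefit:.2f}")
print(f"relative TB (ratio):   {relative_benefit_ratio:.2f}")
```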

  12. Measurement of Attenuation with Airborne and Ground-Based Radar in Convective Storms Over Land and Its Microphysical Implications

    NASA Technical Reports Server (NTRS)

    Tian, Lin; Heymsfield, G. M.; Srivastava, R. C.; Starr, D. OC. (Technical Monitor)

    2001-01-01

    Observations by the airborne X-band Doppler radar (EDOP) and the NCAR S-band polarimetric (S-POL) radar from two field experiments are used to evaluate the surface reference technique (SRT) for measuring the path-integrated attenuation (PIA) and to study attenuation in deep convective storms. The EDOP, flying at an altitude of 20 km, uses a nadir beam and a forward-pointing beam. It is found that over land, the surface scattering cross-section is highly variable at nadir incidence but relatively stable at forward incidence. It is concluded that measurement by the forward beam provides a viable technique for measuring PIA using the SRT. Vertical profiles of the peak attenuation coefficient are derived in two deep convective storms by the dual-wavelength method. Using the measured Doppler velocity, the reflectivities at the two wavelengths, the differential reflectivity, and the estimated attenuation coefficients, it is shown that supercooled drops and dry ice particles probably co-existed above the melting level in regions of updraft, that water-coated, partially melted ice particles probably contributed to high attenuation below the melting level, and that the data are not readily explained in terms of a gamma-function raindrop size distribution.

  13. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    DOEpatents

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval, condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
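
    The decision step described above is a sequential likelihood ratio test. The sketch below implements a generic Wald-style test on a stream of per-event log-likelihood ratios; the thresholds, error rates, and simulated event statistics are assumptions, not the patented system's actual parameters.

```python
import numpy as np

def sprt(log_likelihood_ratios, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate per-event
    log-likelihood ratios and stop at the first threshold crossing."""
    upper = np.log((1 - beta) / alpha)    # decide "target radionuclide"
    lower = np.log(beta / (1 - alpha))    # decide "not the target"
    total = 0.0
    for n, llr in enumerate(log_likelihood_ratios, start=1):
        total += llr
        if total >= upper:
            return "target identified", n
        if total <= lower:
            return "target rejected", n
    return "undecided", len(log_likelihood_ratios)

# Hypothetical per-event log-likelihood ratios (energy + interarrival-time terms),
# drawn so that the evidence slowly favors the target hypothesis.
rng = np.random.default_rng(5)
events = rng.normal(loc=0.3, scale=1.0, size=200)
print(sprt(events))
```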

  14. Survival and breeding advantages of larger Black Brant (Branta bernicla nigricans) goslings: Within- and among-cohort variation

    USGS Publications Warehouse

    Sedinger, J.S.; Chelgren, N.D.

    2007-01-01

    We examined the relationship between mass late in the first summer and survival and return to the natal breeding colony for 12 cohorts (1986-1997) of female Black Brant (Branta bernicla nigricans). We used Cormack-Jolly-Seber methods and the program MARK to analyze capture-recapture data. Models included two kinds of residuals from regressions of mass on days after peak of hatch when goslings were measured; one based on the entire sample (12 cohorts) and the other based only on individuals in the same cohort. Some models contained date of peak of hatch (a group covariate related to lateness of nesting in that year) and mean cohort residual mass. Finally, models allowed survival to vary among cohorts. The best model of encounter probability included an effect of residual mass on encounter probability and allowed encounter probability to vary among age classes and across years. All competitive models contained an effect of one of the estimates of residual mass; relatively larger goslings survived their first year at higher rates. Goslings in cohorts from later years in the analysis tended to have lower first-year survival, after controlling for residual mass, which reflected the generally smaller mean masses for these cohorts but was potentially also a result of population-density effects additional to those on growth. Variation among cohorts in mean mass accounted for 56% of variation among cohorts in first-year survival. Encounter probabilities, which were correlated with breeding probability, increased with relative mass, which suggests that larger goslings not only survived at higher rates but also bred at higher rates. Although our findings support the well-established linkage between gosling mass and fitness, they suggest that additional environmental factors also influence first-year survival.

  15. Probability-based nitrate contamination map of groundwater in Kinmen.

    PubMed

    Liu, Chen-Wuing; Wang, Yeuh-Bin; Jang, Cheng-Shin

    2013-12-01

    Groundwater supplies over 50% of the drinking water in Kinmen. Approximately 16.8% of groundwater samples in Kinmen exceed the drinking water quality standard (DWQS) for NO3(-)-N (10 mg/L). Residents drinking highly nitrate-polluted groundwater face a potential health risk. To formulate an effective water quality management plan and assure safe drinking water in Kinmen, a detailed spatial distribution of nitrate-N in groundwater is a prerequisite. The aim of this study is to develop an efficient scheme for evaluating the spatial distribution of nitrate-N in residential well water using a logistic regression (LR) model. A probability-based nitrate-N contamination map of Kinmen is constructed. The LR model predicted the binary occurrence probability of groundwater nitrate-N concentrations exceeding the DWQS from simple measurement variables as independent variables, including sampling season, soil type, water table depth, pH, EC, DO, and Eh. The results reveal that three statistically significant explanatory variables, soil type, pH, and EC, are selected by the forward stepwise LR analysis. The total ratio of correct classification reaches 92.7%. The highest probabilities of nitrate-N contamination appear in the central zone, indicating that groundwater in the central zone should not be used for drinking purposes. Furthermore, a handy EC-pH-probability curve of nitrate-N exceeding the DWQS threshold was developed. This curve can be used for preliminary screening of nitrate-N contamination in Kinmen groundwater. This study recommends that the local agency implement best management practice strategies to control nonpoint nitrogen sources and carry out systematic monitoring of groundwater quality in residential wells in the high nitrate-N contamination zones.
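
    The probability map rests on a binary logistic regression of exceedance on simple field measurements. The sketch below shows the form of such a prediction for pH and EC; the coefficients are placeholders (and soil type, the third selected variable, is omitted), so the outputs are illustrative only.

```python
import numpy as np

def prob_nitrate_exceeds_dwqs(ph, ec, coef):
    """Binary logistic model for P(NO3-N > 10 mg/L) from simple field
    measurements; the coefficients are hypothetical, not the fitted values."""
    logit = coef["intercept"] + coef["ph"] * ph + coef["ec"] * ec
    return 1.0 / (1.0 + np.exp(-logit))

coef = {"intercept": -4.0, "ph": 0.35, "ec": 0.004}   # placeholder coefficients

for ph, ec in [(6.5, 300.0), (7.2, 800.0), (7.8, 1500.0)]:
    p = prob_nitrate_exceeds_dwqs(ph, ec, coef)
    print(f"pH={ph}, EC={ec} uS/cm -> P(exceed DWQS) = {p:.2f}")
```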

  16. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.

  17. Measurements of gas hydrate formation probability distributions on a quasi-free water droplet

    NASA Astrophysics Data System (ADS)

    Maeda, Nobuo

    2014-06-01

    A High Pressure Automated Lag Time Apparatus (HP-ALTA) can measure gas hydrate formation probability distributions from water in a glass sample cell. In an HP-ALTA, gas hydrate formation originates near the edges of the sample cell and gas hydrate films subsequently grow across the water-guest gas interface. Ideally, one would measure gas hydrate formation probability distributions of a single water droplet or mist that is freely levitating in a guest gas, but this is technically challenging. The next best option is to let a water droplet sit on top of a denser, immiscible, inert, and wall-wetting hydrophobic liquid to avoid contact of the water droplet with the solid walls. Here we report the development of a second-generation HP-ALTA which can measure gas hydrate formation probability distributions of a water droplet sitting on a perfluorocarbon oil in a container that is coated with 1H,1H,2H,2H-Perfluorodecyltriethoxysilane. It was found that the gas hydrate formation probability distributions of such a quasi-free water droplet were significantly lower than those of water in a glass sample cell.

  18. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
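
    The point about the normal-assumption variance of the variance can be illustrated with a Monte Carlo check against a binomial (quantal) amplitude model; all parameter values below are assumed, not taken from the study.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sites, p_release, q_size = 5, 0.4, 10.0   # assumed quantal parameters
      n_sweeps, n_repeats = 50, 20_000            # sweeps per variance estimate, repeats

      # Simulate binomial quantal release: amplitude = q * (number of sites releasing).
      amps = q_size * rng.binomial(n_sites, p_release, size=(n_repeats, n_sweeps))
      sample_vars = amps.var(axis=1, ddof=1)

      empirical_var_of_var = sample_vars.var(ddof=1)

      # Normal-assumption prediction: Var(s^2) = 2*sigma^4 / (n - 1).
      sigma2 = q_size**2 * n_sites * p_release * (1 - p_release)
      normal_prediction = 2 * sigma2**2 / (n_sweeps - 1)

      print("empirical:", round(float(empirical_var_of_var), 1),
            " normal assumption:", round(normal_prediction, 1))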

  19. Stimulus discriminability may bias value-based probabilistic learning.

    PubMed

    Schutte, Iris; Slagter, Heleen A; Collins, Anne G E; Frank, Michael J; Kenemans, J Leon

    2017-01-01

    Reinforcement learning tasks are often used to assess participants' tendency to learn more from the positive or more from the negative consequences of one's action. However, this assessment often requires comparison in learning performance across different task conditions, which may differ in the relative salience or discriminability of the stimuli associated with more and less rewarding outcomes, respectively. To address this issue, in a first set of studies, participants were subjected to two versions of a common probabilistic learning task. The two versions differed with respect to the stimulus (Hiragana) characters associated with reward probability. The assignment of character to reward probability was fixed within version but reversed between versions. We found that performance was highly influenced by task version, which could be explained by the relative perceptual discriminability of characters assigned to high or low reward probabilities, as assessed by a separate discrimination experiment. Participants were more reliable in selecting rewarding characters that were more discriminable, leading to differences in learning curves and their sensitivity to reward probability. This difference in experienced reinforcement history was accompanied by performance biases in a test phase assessing ability to learn from positive vs. negative outcomes. In a subsequent large-scale web-based experiment, this impact of task version on learning and test measures was replicated and extended. Collectively, these findings imply a key role for perceptual factors in guiding reward learning and underscore the need to control stimulus discriminability when making inferences about individual differences in reinforcement learning.

  20. Modulations of stratospheric ozone by volcanic eruptions

    NASA Technical Reports Server (NTRS)

    Blanchette, Christian; Mcconnell, John C.

    1994-01-01

    We have used a time series of aerosol surface area based on the measurements of Hofmann to investigate the modulation of total column ozone caused by the perturbation to gas-phase chemistry by the reaction N2O5(gas) + H2O(aero) yields 2HNO3(gas) on the surface of stratospheric aerosols. We have tested a range of values for its reaction probability, gamma = 0.02, 0.13, and 0.26, which we compared to unperturbed homogeneous chemistry. Our analysis spans the period from Jan. 1974 to Oct. 1994. The results suggest that if lower values of gamma are the norm, then we would expect larger ozone losses for highly enhanced aerosol content than for larger values of gamma. The ozone layer is more sensitive to the magnitude of the reaction probability under background conditions than during volcanically active periods. For most conditions, the conversion of NO2 to HNO3 is saturated for reaction probabilities in the range of laboratory measurements, but it is fully saturated only following major volcanic eruptions, when the heterogeneous loss dominates the losses of N2O5. The ozone loss due to this heterogeneous reaction increases with increasing chlorine load. Total ozone losses calculated are comparable to ozone losses reported from TOMS and Dobson data.

  1. Failure probability analysis of optical grid

    NASA Astrophysics Data System (ADS)

    Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng

    2008-11-01

    Optical grid, an integrated computing environment based on optical networks, is expected to be an efficient infrastructure for supporting advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable because of the large scale and high complexity of the system. As optical-network-based distributed computing systems become widely used for data processing, the application failure probability has become an important indicator of application quality and an important consideration for operators. This paper presents a task-based method for analyzing the application failure probability in an optical grid. The failure probability of an entire application can then be quantified, and the reduction in failure probability achieved by different backup strategies can be compared, so that the different requirements of different clients can be satisfied. When an application described by a DAG (directed acyclic graph) is executed under different backup strategies, both the application failure probability and the application completion time differ. This paper proposes a new multi-objective differentiated services algorithm (MDSA). The new scheduling algorithm guarantees the required failure probability while improving network resource utilization, realizing a compromise between the network operator and the application submitter. Differentiated services can thus be achieved in the optical grid.
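
    As a toy illustration of task-based failure analysis (not the paper's MDSA algorithm), the sketch below treats the application as a set of independent tasks, where each task fails only if its primary copy and all of its backups fail; the task names and probabilities are hypothetical.

      from math import prod

      # Hypothetical per-task failure probabilities and backup counts for one DAG.
      task_fail = {"ingest": 0.02, "transform": 0.05, "analyze": 0.03, "publish": 0.01}
      backups   = {"ingest": 0,    "transform": 1,    "analyze": 1,    "publish": 0}

      def app_failure_probability(task_fail, backups):
          # A task fails only if the primary and every backup fail (independence assumed);
          # the application fails if any task fails.
          p_success = prod(1 - task_fail[t] ** (1 + backups[t]) for t in task_fail)
          return 1 - p_success

      print(round(app_failure_probability(task_fail, backups), 4))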

  2. Probability-based hazard avoidance guidance for planetary landing

    NASA Astrophysics Data System (ADS)

    Yuan, Xu; Yu, Zhengshi; Cui, Pingyuan; Xu, Rui; Zhu, Shengying; Cao, Menglong; Luan, Enjie

    2018-03-01

    Future landing and sample return missions on planets and small bodies will seek landing sites with high scientific value, which may be located in hazardous terrains. Autonomous landing in such hazardous terrains and highly uncertain planetary environments is particularly challenging. Onboard hazard avoidance ability is indispensable, and the algorithms must be robust to uncertainties. In this paper, a novel probability-based hazard avoidance guidance method is developed for landing in hazardous terrains on planets or small bodies. By regarding the lander state as probabilistic, the proposed guidance algorithm exploits information on the uncertainty of lander position and calculates the probability of collision with each hazard. The collision probability serves as an accurate safety index, which quantifies the impact of uncertainties on the lander safety. Based on the collision probability evaluation, the state uncertainty of the lander is explicitly taken into account in the derivation of the hazard avoidance guidance law, which contributes to enhancing the robustness to the uncertain dynamics of planetary landing. The proposed probability-based method derives fully analytic expressions and does not require off-line trajectory generation. Therefore, it is appropriate for real-time implementation. The performance of the probability-based guidance law is investigated via a set of simulations, and the effectiveness and robustness under uncertainties are demonstrated.
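
    The collision-probability idea can be sketched with a Monte Carlo estimate under a Gaussian position uncertainty and disc-shaped hazards; the mean, covariance, and hazard geometry below are assumed purely for illustration.

      import numpy as np

      rng = np.random.default_rng(2)

      # Assumed lander state: mean landing point (m) and position covariance (m^2).
      mean = np.array([10.0, -5.0])
      cov = np.array([[25.0, 5.0],
                      [5.0, 16.0]])

      # Hypothetical hazards: (center_x, center_y, radius) in metres.
      hazards = [(0.0, 0.0, 8.0), (20.0, -10.0, 6.0)]

      samples = rng.multivariate_normal(mean, cov, size=200_000)

      def collision_probability(samples, hazards):
          # Fraction of sampled landing points that fall inside any hazard disc.
          hit = np.zeros(len(samples), dtype=bool)
          for cx, cy, r in hazards:
              hit |= np.hypot(samples[:, 0] - cx, samples[:, 1] - cy) <= r
          return hit.mean()

      print(round(float(collision_probability(samples, hazards)), 4))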

  3. Trapping dynamics of xenon on Pt(111)

    NASA Astrophysics Data System (ADS)

    Arumainayagam, Christopher R.; Madix, Robert J.; Mcmaster, Mark C.; Suzawa, Valerie M.; Tully, John C.

    1990-02-01

    The dynamics of Xe trapping on Pt(111) was studied using supersonic atomic beam techniques. Initial trapping probabilities (S_0) were measured directly as a function of incident translational energy (E_T) and angle of incidence (θ_i) at a surface temperature (T_s) of 95 K. The initial trapping probability decreases smoothly with increasing E_T cos θ_i, rather than E_T cos²θ_i, suggesting participation of parallel momentum in the trapping process. Accordingly, the measured initial trapping probability falls off more slowly with increasing incident translational energy than predicted by one-dimensional theories. This finding is in near agreement with previous mean translational energy measurements for Xe desorbing near the Pt(111) surface normal, assuming detailed balance applies. Three-dimensional stochastic classical trajectory calculations presented herein also exhibit the importance of tangential momentum in trapping and satisfactorily reproduce the experimental initial trapping probabilities.

  4. Ozone-surface interactions: Investigations of mechanisms, kinetics, mass transport, and implications for indoor air quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrison, Glenn Charles

    1999-12-01

    In this dissertation, results are presented of laboratory investigations and mathematical modeling efforts designed to better understand the interactions of ozone with surfaces. In the laboratory, carpet and duct materials were exposed to ozone, and ozone uptake kinetics and the ozone-induced emissions of volatile organic compounds were measured. To understand the results of the experiments, mathematical methods were developed to describe dynamic indoor aldehyde concentrations, mass transport of reactive species to smooth surfaces, the equivalent reaction probability of whole carpet due to the surface reactivity of fibers and carpet backing, and ozone aging of surfaces. Carpets, separated carpet fibers, and separated carpet backing all tended to release aldehydes when exposed to ozone. Secondary emissions were mostly n-nonanal and several other smaller aldehydes. The pattern of emissions suggested that vegetable oils may be precursors for these oxidized emissions. Several possible precursors, and experiments in which linseed and tung oils were tested for their secondary emission potential, were discussed. Dynamic emission rates of 2-nonenal from a residential carpet may indicate that intermediate species in the oxidation of conjugated olefins can significantly delay aldehyde emissions and act as a reservoir for these compounds. The ozone-induced emission rate of 2-nonenal, a very odorous compound, can result in odorous indoor concentrations for several years. Surface ozone reactivity, a key parameter in determining the flux of ozone to a surface, is parameterized by the reaction probability, which is simply the probability that an ozone molecule will be irreversibly consumed when it strikes a surface. In laboratory studies of two residential and two commercial carpets, the ozone reaction probability for carpet fibers, for carpet backing, and the equivalent reaction probability for whole carpet were determined. Typical reaction probability values for these materials were 10⁻⁷, 10⁻⁵, and 10⁻⁵, respectively. To understand how internal surface area influences the equivalent reaction probability of whole carpet, a model of ozone diffusion into and reaction with internal carpet components was developed. This was then used to predict apparent reaction probabilities for carpet. He combines this with a modified model of turbulent mass transfer developed by Liu et al. to predict deposition rates and indoor ozone concentrations. The model predicts that carpet should have an equivalent reaction probability of about 10⁻⁵, matching laboratory measurements of the reaction probability. For both carpet and duct materials, surfaces become progressively quenched (aging), losing the ability to react with or otherwise take up ozone. He evaluates the functional form of aging and finds that the reaction probability follows a power function with respect to the cumulative uptake of ozone. To understand ozone aging of surfaces, he developed several mathematical descriptions of aging based on two different mechanisms. The observed functional form of aging is mimicked by a model which describes ozone diffusion with internal reaction in a solid. He shows that the fleecy nature of carpet materials, in combination with the model of ozone diffusion below a fiber surface and internal reaction, may explain the functional form and the magnitude of the power function parameters observed for ozone interactions with carpet. The ozone-induced aldehyde emissions measured from duct materials were combined with an indoor air quality model to show that concentrations of aldehydes indoors may approach odorous levels. He shows that ducts are unlikely to be a significant sink for ozone because of the low reaction probability in combination with the short residence time of air in ducts.
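
    The way a reaction probability feeds into an ozone deposition rate can be sketched with the standard resistance-style combination of transport-limited and uptake-limited deposition; the transport velocity, reaction probability, and temperature below are assumed values, not figures from the dissertation.

      import numpy as np

      # Assumed values: boundary-layer transport-limited deposition velocity (m/s),
      # surface reaction probability (dimensionless), and conditions for O3 mean speed.
      v_transport = 1.0e-3
      gamma = 1.0e-5
      k_B, T, m_o3 = 1.380649e-23, 296.0, 48.0 * 1.66054e-27
      v_mean = np.sqrt(8 * k_B * T / (np.pi * m_o3))   # Maxwell-Boltzmann mean speed

      # Uptake-limited deposition velocity and series (resistance) combination.
      v_surface = gamma * v_mean / 4.0
      v_deposition = 1.0 / (1.0 / v_transport + 1.0 / v_surface)

      print(f"mean molecular speed ~ {v_mean:.0f} m/s, deposition velocity ~ {v_deposition:.2e} m/s")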

  5. Determining Risk of Falls in Community Dwelling Older Adults: A Systematic Review and Meta-analysis Using Posttest Probability.

    PubMed

    Lusardi, Michelle M; Fritz, Stacy; Middleton, Addie; Allison, Leslie; Wingood, Mariana; Phillips, Emma; Criss, Michelle; Verma, Sangita; Osborne, Jackie; Chui, Kevin K

    Falls and their consequences are significant concerns for older adults, caregivers, and health care providers. Identification of fall risk is crucial for appropriate referral to preventive interventions. Falls are multifactorial; no single measure is an accurate diagnostic tool. There is limited information on which history questions, self-report measures, performance-based measures, or combinations of measures best predict future falls. The objectives were, first, to evaluate the predictive ability of history questions, self-report measures, and performance-based measures for assessing fall risk of community-dwelling older adults by calculating and comparing posttest probability (PoTP) values for individual tests/measures, and, second, to evaluate the usefulness of cumulative PoTP for measures in combination. To be included, a study must have used fall status as an outcome or classification variable, have a sample size of at least 30 ambulatory community-living older adults (≥65 years), and track falls occurrence for a minimum of 6 months. Studies in acute or long-term care settings, as well as those including participants with significant cognitive or neuromuscular conditions related to increased fall risk, were excluded. Searches of Medline/PubMED and Cumulative Index of Nursing and Allied Health (CINAHL) from January 1990 through September 2013 identified 2294 abstracts concerned with fall risk assessment in community-dwelling older adults. Because the number of prospective studies of fall risk assessment was limited, retrospective studies that classified participants (fallers/nonfallers) were also included. Ninety-five full-text articles met inclusion criteria; 59 contained the data necessary for calculation of PoTP. The Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) was used to assess each study's methodological quality. Study design and QUADAS score determined the level of evidence. Data for calculation of sensitivity (Sn), specificity (Sp), likelihood ratios (LR), and PoTP values were available for 21 of 46 measures used as search terms. An additional 73 history questions, self-report measures, and performance-based measures were used in included articles; PoTP values could be calculated for 35. Evidence tables including PoTP values were constructed for 15 history questions, 15 self-report measures, and 26 performance-based measures. Recommendations for clinical practice were based on consensus. Variations in study quality, procedures, and statistical analyses challenged data extraction, interpretation, and synthesis. There were insufficient data for calculation of PoTP values for 63 of 119 tests. No single test/measure demonstrated strong PoTP values. Five history questions, 2 self-report measures, and 5 performance-based measures may have clinical usefulness in assessing risk of falling on the basis of cumulative PoTP. Berg Balance Scale score (≤50 points), Timed Up and Go times (≥12 seconds), and 5 times sit-to-stand times (≥12 seconds) are currently the most evidence-supported functional measures to determine individual risk of future falls. Shortfalls identified during the review will direct researchers to address knowledge gaps.
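
    Posttest probability itself is straightforward to compute from a pretest probability and the likelihood ratios implied by a test's sensitivity and specificity; the prevalence and Sn/Sp figures below are illustrative assumptions, not values from the review.

      def posttest_probability(pretest_p, sensitivity, specificity, positive=True):
          # Convert a pretest probability to a posttest probability via likelihood ratios.
          lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
          pre_odds = pretest_p / (1 - pretest_p)
          post_odds = pre_odds * lr
          return post_odds / (1 + post_odds)

      # Assumed figures: ~30% annual fall prevalence and illustrative Sn/Sp of 0.7/0.8
      # for a performance-based measure such as a timed mobility test.
      print(round(posttest_probability(0.30, 0.70, 0.80, positive=True), 3))   # positive test
      print(round(posttest_probability(0.30, 0.70, 0.80, positive=False), 3))  # negative test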

  6. Magnetic and electrical properties of Martian particles

    NASA Technical Reports Server (NTRS)

    Olhoeft, G. R.

    1991-01-01

    The only determinations of the magnetic properties of Martian materials come from experiments on the two Viking Landers. The results suggest that Martian soil contains 1 to 10 percent of a highly magnetic phase. Though the magnetic phase mineral was not conclusively identified, the predominant interpretation is that the magnetic phase is probably maghemite. The electrical properties of the surface of Mars were only measured remotely, by observations with Earth-based radar, microwave radiometry, and inference from radio occultation of Mars-orbiting spacecraft. No direct measurements of electrical properties on Martian materials have been performed.

  7. Maximal violation of the Clauser-Horne-Shimony-Holt inequality for two qutrits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Li-Bin; Zhao, Xian-Geng; Chen, Jing-Ling

    2003-08-01

    The Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequality (in terms of correlation functions) for two qutrits is studied in detail by employing tritter measurements. A uniform formula for the maximum value of this inequality under tritter measurements is obtained. Based on this formula, we show that nonmaximally entangled states violate the BCHSH inequality more strongly than the maximally entangled one. This result is consistent with what was obtained by Acin et al. [Phys. Rev. A 65, 052325 (2002)] using the Bell-Clauser-Horne inequality (in terms of probabilities).

  8. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems (Developpement d’Interfaces Homme-Machine Pour Appuyer la Confiance dans les Systemes Automatises d’Identification au Combat)

    DTIC Science & Technology

    2008-03-31

    on automation; the ‘response bias’ approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens...SDT), response bias will vary with the expectation of the target probability, whereas their sensitivity will stay constant (Macmillan & Creelman...measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.’s study

  9. [Reducing the burden of disease caused by alcohol use in Peru: evidence- based approaches].

    PubMed

    Fiestas, Fabián

    2012-03-01

    Alcohol use is one of the most important risk factors for illness and early death in Peru. Measures aimed at decreasing or controlling the great impact caused by alcohol in Peruvian society are urgently needed. This article identifies and promotes the implementation of public health measures supported by sound scientific evidence of effectiveness or, in some cases, cost-effectiveness. The 10 evidence-based public health measures identified and described here represent a set of measures with a high probability of success if implemented, as they are supported by scientific evidence. We recommend that governments, at the national or local level, apply these measures not individually but in combination, arranging them into a plan or roadmap whose framework is established according to each context. Considering the available resources, some of these measures could be implemented in the short and medium term, while the others can be set for the long term.

  10. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to the information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.

  11. Assessment of local variability by high-throughput e-beam metrology for prediction of patterning defect probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Fuming; Hunsche, Stefan; Anunciado, Roy; Corradi, Antonio; Tien, Hung Yu; Tang, Peng; Wei, Junwei; Wang, Yongjun; Fang, Wei; Wong, Patrick; van Oosten, Anton; van Ingen Schenau, Koen; Slachter, Bram

    2018-03-01

    We present an experimental study of pattern variability and defectivity, based on a large data set with more than 112 million SEM measurements from an HMI high-throughput e-beam tool. The test case is a 10nm node SRAM via array patterned with a DUV immersion LELE process, where we see a variation in mean size and litho sensitivities between different unique via patterns that leads to seemingly qualitative differences in defectivity. The large available data volume enables further analysis to reliably distinguish global and local CDU variations, including a breakdown into local systematics and stochastics. A closer inspection of the tail end of the distributions and estimation of defect probabilities concludes that there is a common defect mechanism and defect threshold, despite the observed differences in specific pattern characteristics. We expect that the analysis methodology can be applied for defect probability modeling as well as general process qualification in the future.

  12. Multi-Agent Cooperative Target Search

    PubMed Central

    Hu, Jinwen; Xie, Lihua; Xu, Jun; Xu, Zhao

    2014-01-01

    This paper addresses a vision-based cooperative search for multiple mobile ground targets by a group of unmanned aerial vehicles (UAVs) with limited sensing and communication capabilities. The airborne camera on each UAV has a limited field of view and its target discriminability varies as a function of altitude. First, by dividing the whole surveillance region into cells, a probability map can be formed for each UAV indicating the probability of target existence within each cell. Then, we propose a distributed probability map updating model which includes the fusion of measurement information, information sharing among neighboring agents, information decay and transmission due to environmental changes such as the target movement. Furthermore, we formulate the target search problem as a multi-agent cooperative coverage control problem by optimizing the collective coverage area and the detection performance. The proposed map updating model and the cooperative control scheme are distributed, i.e., assuming that each agent only communicates with its neighbors within its communication range. Finally, the effectiveness of the proposed algorithms is illustrated by simulation. PMID:24865884
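
    The probability-map update described above can be sketched as a per-cell Bayes update driven by assumed detection and false-alarm rates; the rates and the observation sequence below are hypothetical.

      def update_cell_probability(p, detected, p_detect=0.8, p_false_alarm=0.1):
          # Bayes update of the probability that a target occupies a cell,
          # given one sensor reading and assumed detection/false-alarm rates.
          if detected:
              num = p * p_detect
              den = p * p_detect + (1 - p) * p_false_alarm
          else:
              num = p * (1 - p_detect)
              den = p * (1 - p_detect) + (1 - p) * (1 - p_false_alarm)
          return num / den

      p = 0.2
      for reading in [False, True, True]:      # hypothetical sequence of observations
          p = update_cell_probability(p, reading)
          print(round(p, 3))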

  13. Infinite capacity multi-server queue with second optional service channel

    NASA Astrophysics Data System (ADS)

    Ke, Jau-Chuan; Wu, Chia-Huang; Pearn, Wen Lea

    2013-02-01

    This paper deals with an infinite-capacity multi-server queueing system with a second optional service (SOS) channel. The inter-arrival times of arriving customers, the service times of the first essential service (FES), and the service times of the SOS channel are all exponentially distributed. A customer may leave the system after the FES channel with probability (1-θ), or at the completion of the FES may immediately require a SOS with probability θ (0 ≤ θ ≤ 1). The formulae for computing the rate matrix and stationary probabilities are derived by means of a matrix-analytic approach. A cost model is developed to determine the optimal values of the number of servers and the two service rates simultaneously, at the minimal total expected cost per unit time. A quasi-Newton method is employed to deal with the optimization problem. Under optimal operating conditions, numerical results are provided in which several system performance measures are calculated based on assumed numerical values of the system parameters.

  14. The New Challenges of China's South to North Water Diversion Project

    NASA Astrophysics Data System (ADS)

    Liu, X.

    2017-12-01

    Water shortage has restricted the economic and social development of Beijing during recent years. The central route of China's South to North Water Diversion Project is planned to divert water from the Danjiangkou Reservoir to Beijing. Currently, the main local surface water source for Beijing is the Miyun Reservoir. We found that annual runoffs in both of the Danjiangkou Reservoir Basin and Miyun Reservoir Basin decreased significantly from 1956 to 2015. The decrease in runoff represents a decrease in available water resources. We classified each year between 1956 and 2015 as either a wet, normal or dry year based on the Pearson-III probability distribution of annual runoff. The probability of a simultaneous dry year in the two basins was about 8.8% during 1956 to 1989, while it increased to 33.7% during 1990 to 2015. The increase in probability of a simultaneous dry year could threaten the success of the water diversion project. We suggest that urgent adaptive measures are implemented in advance to face this challenge.

  15. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    PubMed

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.

  16. Photoexcited escape probability, optical gain, and noise in quantum well infrared photodetectors

    NASA Technical Reports Server (NTRS)

    Levine, B. F.; Zussman, A.; Gunapala, S. D.; Asom, M. T.; Kuo, J. M.; Hobson, W. S.

    1992-01-01

    We present a detailed and thorough study of a wide variety of quantum well infrared photodetectors (QWIPs), which were chosen to have large differences in their optical and transport properties. Both n- and p-doped QWIPs, as well as intersubband transitions based on photoexcitation from bound-to-bound, bound-to-quasi-continuum, and bound-to-continuum quantum well states, were investigated. The measurements and theoretical analysis included optical absorption, responsivity, dark current, current noise, optical gain, hot carrier mean free path, net quantum efficiency, quantum well escape probability, quantum well escape time, as well as detectivity. These results allow a better understanding of the optical and transport physics and thus a better optimization of the QWIP performance.

  17. A stochastic model for the normal tissue complication probability (NTCP) and applications.

    PubMed

    Stocks, Theresa; Hillen, Thomas; Gong, Jiafen; Burger, Martin

    2017-12-11

    The normal tissue complication probability (NTCP) is a measure of the estimated side effects of a given radiation treatment schedule. Here we use a stochastic logistic birth-death process to define an organ-specific and patient-specific NTCP. We emphasize an asymptotic simplification which relates the NTCP to the solution of a logistic differential equation. This framework is based on simple modelling assumptions and prepares the ground for the use of the NTCP model in clinical practice. As an example, we consider side effects of prostate cancer brachytherapy such as increased urinary frequency, urinary retention and acute rectal dysfunction. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  18. Gaussian Boson Sampling.

    PubMed

    Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor

    2017-10-27

    Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.
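
    The Hafnian mentioned above is defined as a sum over perfect matchings; a brute-force reference implementation (exponential time, suitable only for tiny matrices) makes the definition concrete. The example matrix is arbitrary.

      import numpy as np

      def hafnian(a):
          # Hafnian of a symmetric matrix of even dimension, summed over all
          # perfect matchings (exponential-time reference implementation).
          n = a.shape[0]
          if n == 0:
              return 1.0
          if n % 2:
              return 0.0
          total = 0.0
          first = 0
          for partner in range(1, n):
              rest = [k for k in range(n) if k not in (first, partner)]
              total += a[first, partner] * hafnian(a[np.ix_(rest, rest)])
          return total

      a = np.array([[0.0, 1.0, 2.0, 0.5],
                    [1.0, 0.0, 0.3, 1.5],
                    [2.0, 0.3, 0.0, 1.0],
                    [0.5, 1.5, 1.0, 0.0]])
      print(hafnian(a))   # equals a01*a23 + a02*a13 + a03*a12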

  19. Experimental transition probabilities for Mn II spectral lines

    NASA Astrophysics Data System (ADS)

    Manrique, J.; Aguilera, J. A.; Aragón, C.

    2018-06-01

    Transition probabilities for 46 spectral lines of Mn II with wavelengths in the range 2000-3500 Å have been measured by CSigma laser-induced breakdown spectroscopy (Cσ-LIBS). For 28 of the lines, experimental data had not been reported previously. The Cσ-LIBS method, based on the construction of generalized curves of growth called Cσ graphs, avoids the error due to self-absorption. The samples used to generate the laser-induced plasmas are fused glass disks prepared from pure MnO. The Mn concentrations in the samples and the lines included in the study are selected to ensure the validity of the homogeneous-plasma model used. The results are compared to experimental and theoretical values available in the literature.

  20. The extension of the thermal-vacuum test optimization program to multiple flights

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Byrd, J.

    1981-01-01

    The thermal-vacuum test optimization model, originally developed to optimize a test program based on predicted flight performance with a single flight option in mind, is extended to consider reflight, as in Space Shuttle missions. The concept of 'utility', developed under the name of 'availability', is used to follow performance through the various options encountered when the reflight and retrieval capabilities of the Space Shuttle are available. Also, a 'lost value' model is modified to produce a measure of the probability of a mission's success, achieving a desired utility with a minimal-cost test strategy. The resulting matrix of probabilities and their associated costs provides a means for project management to evaluate various test and reflight strategies.

  1. Effects of combined-sewer overflows and urban runoff on the water quality of Fall Creek, Indianapolis, Indiana

    USGS Publications Warehouse

    Martin, Jeffrey D.

    1995-01-01

    Concentrations of dissolved oxygen measured at the station in the middle of the combined-sewer overflows were less than the Indiana minimum ambient water-quality standard of 4.0 milligrams per liter during all storms. Concentrations of ammonia, oxygen demand, copper, lead, zinc, and fecal coliform bacteria at the stations downstream from the combined-sewer overflows were much higher in storm runoff than in base flow. Increased concentrations of oxygen demand in runoff probably were caused by combined-sewer overflows, urban runoff, and the resuspension of organic material deposited on the streambed. Some of the increased concentrations of lead, zinc, and probably copper can be attributed to the discharge and resuspension of filter backwash

  2. Implementation of projective measurements with linear optics and continuous photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeoka, Masahiro; Sasaki, Masahide; Loock, Peter van

    2005-02-01

    We investigate the possibility of implementing a given projection measurement using linear optics and arbitrarily fast feedforward based on the continuous detection of photons. In particular, we systematically derive the so-called Dolinar scheme that achieves the minimum-error discrimination of binary coherent states. Moreover, we show that the Dolinar-type approach can also be applied to projection measurements in the regime of photonic-qubit signals. Our results demonstrate that for implementing a projection measurement with linear optics, in principle, unit success probability may be approached even without the use of expensive entangled auxiliary states, as they are needed in all known (near-)deterministic linear-optics proposals.

  3. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.

  4. Electrical system for measurement of breakdown voltage of vacuum and gas-filled tubes using a dynamic method

    NASA Astrophysics Data System (ADS)

    Pejović, Milić M.; Milosavljević, Čedomir S.; Pejović, Momčilo M.

    2003-06-01

    This article describes an electrical system for the measurement and data acquisition of breakdown voltages of vacuum and gas-filled tubes. The measurements were performed using a nitrogen-filled tube at 4 mbar pressure. Based on the measured breakdown voltage data as a function of the applied voltage increase rate, a static breakdown voltage is estimated for applied voltage gradients ranging from 0.1 to 1 V s⁻¹ and from 1 to 10 V s⁻¹. The histograms of breakdown voltages for applied voltage increase rates of 0.1 and 0.5 V s⁻¹ are approximated by probability density functions using a fitting procedure.

  5. Predicting the Earth encounters of (99942) Apophis

    NASA Technical Reports Server (NTRS)

    Giorgini, Jon D.; Benner, Lance A. M.; Ostro, Steven J.; Nolan, Michael C.; Busch, Michael W.

    2007-01-01

    Arecibo delay-Doppler measurements of (99942) Apophis in 2005 and 2006 resulted in a five standard-deviation trajectory correction to the optically predicted close approach distance to Earth in 2029. The radar measurements reduced the volume of the statistical uncertainty region entering the encounter to 7.3% of the pre-radar solution, but increased the trajectory uncertainty growth rate across the encounter by 800% due to the closer predicted approach to the Earth. A small estimated Earth impact probability remained for 2036. With standard-deviation plane-of-sky position uncertainties for 2007-2010 already less than 0.2 arcsec, the best near-term ground-based optical astrometry can only weakly affect the trajectory estimate. While the potential for impact in 2036 will likely be excluded in 2013 (if not 2011) using ground-based optical measurements, approximations within the Standard Dynamical Model (SDM) used to estimate and predict the trajectory from the current era are sufficient to obscure the difference between a predicted impact and a miss in 2036 by altering the dynamics leading into the 2029 encounter. Normal impact probability assessments based on the SDM become problematic without knowledge of the object's physical properties; impact could be excluded while the actual dynamics still permit it. Calibrated position uncertainty intervals are developed to compensate for this by characterizing the minimum and maximum effect of physical parameters on the trajectory. Uncertainty in accelerations related to solar radiation can cause between 82 and 4720 Earth-radii of trajectory change relative to the SDM by 2036. If an actionable hazard exists, alteration by 2-10% of Apophis' total absorption of solar radiation in 2018 could be sufficient to produce a six standard-deviation trajectory change by 2036 given physical characterization; even a 0.5% change could produce a trajectory shift of one Earth-radius by 2036 for all possible spin-poles and likely masses. Planetary ephemeris uncertainties are the next greatest source of systematic error, causing up to 23 Earth-radii of uncertainty. The SDM Earth point-mass assumption introduces an additional 2.9 Earth-radii of prediction error by 2036. Unmodeled asteroid perturbations produce as much as 2.3 Earth-radii of error. We find no future small-body encounters likely to yield an Apophis mass determination prior to 2029. However, asteroid (144898) 2004 VD17, itself having a statistical Earth impact in 2102, will probably encounter Apophis at 6.7 lunar distances in 2034, their uncertainty regions coming as close as 1.6 lunar distances near the center of both SDM probability distributions.

  6. Betting on the outcomes of measurements: a Bayesian theory of quantum probability

    NASA Astrophysics Data System (ADS)

    Pitowsky, Itamar

    We develop a systematic approach to quantum probability as a theory of rational betting in quantum gambles. In these games of chance, the agent is betting in advance on the outcomes of several (finitely many) incompatible measurements. One of the measurements is subsequently chosen and performed and the money placed on the other measurements is returned to the agent. We show how the rules of rational betting imply all the interesting features of quantum probability, even in such finite gambles. These include the uncertainty principle and the violation of Bell's inequality among others. Quantum gambles are closely related to quantum logic and provide a new semantics for it. We conclude with a philosophical discussion on the interpretation of quantum mechanics.

  7. Prediction Metrics for Chemical Detection in Long-Wave Infrared Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chilton, Marie C.; Walsh, Stephen J.; Daly, Don S.

    2009-01-29

    A natural or anthropogenic process often generates a signature gas plume whose chemical constituents may be identified using hyperspectral imagery. A hyperspectral image is a pixel-indexed set of spectra where each spectrum reflects the chemical constituents of the plume, the atmosphere, the bounding background surface, and instrument noise. This study explored the relationship between gas absorbance and background emissivity across the long-wave infrared (LWIR) spectrum and how they affect relative gas detection sensitivity. The physics-based model for the observed radiance shows that high gas absorbance coupled with low background emissivity at a single wavenumber results in a stronger recorded radiance. Two sensitivity measures were developed to predict relative probability of detection using chemical absorbance and background emissivity: one focused on a single wavenumber while another accounted for the entire spectrum. The predictive abilities of these measures were compared to synthetic image analysis. This study simulated images with 499 distinct gases at each of 6 concentrations over 6 different background surfaces with the atmosphere and level of instrument noise held constant. The Whitened Matched Filter was used to define gas detection from an image spectrum. The estimate of a chemical’s probability of detection at a given concentration over a specific background was the proportion of detections in 500 trials. Of the 499 chemicals used in the images, 276 had estimated probabilities of detection below 0.2 across all backgrounds and concentrations; these chemicals were removed from the study. For 92.8 percent of the remaining chemicals, the single channel measure correctly predicted the background over which the chemical had the largest relative probability of detection. Further, the measure which accounted for information across all wavenumbers predicted the background over which the chemical had the largest relative probability of detection for 93.3 percent of the chemicals. These results suggest that the wavenumber with largest gas absorbance has the most influence over gas detection for this data. By furthering the in-silico experimentation with higher concentrations of gases not detectable in this experiment or by standardizing the gas absorbance spectra to unit vectors, these conclusions may be confirmed and generalized to more gases. This will help simplify image acquisition planning and the identification of unknowns in field collected images.

  8. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.

  9. Experimental investigation of the intensity fluctuation joint probability and conditional distributions of the twin-beam quantum state.

    PubMed

    Zhang, Yun; Kasai, Katsuyuki; Watanabe, Masayoshi

    2003-01-13

    We give the intensity fluctuation joint probability of the twin-beam quantum state, which was generated with an optical parametric oscillator operating above threshold. We then present what is, to our knowledge, the first measurement of the intensity fluctuation conditional probability distributions of twin beams. The measured inference variance of the twin beams, 0.62+/-0.02, which is less than the standard quantum limit of unity, indicates inference with a precision better than that of separable states. The measured photocurrent variance exhibits a quantum correlation of as much as -4.9+/-0.2 dB between the signal and the idler.

  10. Optimum quantum receiver for detecting weak signals in PAM communication systems

    NASA Astrophysics Data System (ADS)

    Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar

    2017-09-01

    This paper deals with the modeling of an optimum quantum receiver for pulse amplitude modulator (PAM) communication systems. The information bearing sequence {I_k}_{k=0}^{N-1} is estimated using the maximum likelihood (ML) method. The ML method is based on quantum mechanical measurements of an observable X in the Hilbert space of the quantum system at discrete times, when the Hamiltonian of the system is perturbed by an operator obtained by modulating a potential V with a PAM signal derived from the information bearing sequence {I_k}_{k=0}^{N-1}. The measurement process at each time instant causes collapse of the system state to an observable eigenstate. All probabilities of getting different outcomes from an observable are calculated using the perturbed evolution operator combined with the collapse postulate. For given probability densities, calculation of the mean square error evaluates the performance of the receiver. Finally, we present an example involving estimating an information bearing sequence that modulates a quantum electromagnetic field incident on a quantum harmonic oscillator.

  11. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.

    PubMed

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-09-09

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, according to the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and radar measurement model are built. Second, the fused estimate from the individual radars is fed to an extended Kalman filter (EKF) to complete the first filtering stage. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving an accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.

  12. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSP’s have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176

  13. Determination of riverbank erosion probability using Locally Weighted Logistic Regression

    NASA Astrophysics Data System (ADS)

    Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos

    2015-04-01

    Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially, therefore, a non-stationary regression model is preferred instead of a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign weights to local independent variables of the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables. The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). 
The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
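
    A minimal sketch of the locally weighted logistic regression idea, with a tricubic spatial weight and scikit-learn's logistic regression standing in for the authors' implementation; the coordinates, bank-slope and width values, and the toy erosion indicator below are synthetic and assumed purely for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def tricube(d, bandwidth):
          # Tricubic spatial weight: 1 at the prediction site, 0 beyond the bandwidth.
          u = np.clip(d / bandwidth, 0.0, 1.0)
          return (1 - u**3) ** 3

      def lwlr_probability(x0, c0, X, y, coords, bandwidth=5.0):
          # Weight every observation by its distance to the prediction location, fit a
          # weighted logistic regression, and return the local erosion probability.
          w = tricube(np.linalg.norm(coords - c0, axis=1), bandwidth) + 1e-6  # small floor
          model = LogisticRegression().fit(X, y, sample_weight=w)
          return model.predict_proba(x0.reshape(1, -1))[0, 1]

      # Synthetic survey: bank slope (degrees), cross-section width (m), coordinates (km).
      rng = np.random.default_rng(3)
      coords = rng.uniform(0, 10, size=(12, 2))
      X = np.column_stack([rng.uniform(10, 45, 12), rng.uniform(5, 30, 12)])
      y = (X[:, 1] < np.median(X[:, 1])).astype(int)   # toy erosion indicator

      print(lwlr_probability(np.array([30.0, 12.0]), coords[0], X, y, coords))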

  14. Adaptive x-ray threat detection using sequential hypotheses testing with fan-beam experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.

    2017-05-01

    We employ an adaptive measurement system, based on a sequential hypotheses testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs a 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design in which the X-ray projection measurements are acquired sequentially and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline, which corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
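
    The sequential decision logic at the heart of an SHT scheme can be sketched as a Wald sequential probability ratio test that stops acquiring views once the accumulated evidence crosses an error-rate threshold; the per-view log-likelihood ratios below are simulated, not taken from the testbed data.

      import numpy as np

      def sprt(log_lr_per_view, alpha=0.01, beta=0.01):
          # Sequential probability ratio test: accumulate per-view log-likelihood
          # ratios (threat vs. benign) and stop once a Wald threshold is crossed.
          upper = np.log((1 - beta) / alpha)       # decide "threat"
          lower = np.log(beta / (1 - alpha))       # decide "benign"
          total = 0.0
          for k, llr in enumerate(log_lr_per_view, start=1):
              total += llr
              if total >= upper:
                  return "threat", k
              if total <= lower:
                  return "benign", k
          return "undecided", len(log_lr_per_view)

      # Hypothetical per-view evidence from 15 fan-beam views of one bag.
      rng = np.random.default_rng(4)
      views = rng.normal(0.6, 1.0, 15)             # positive drift = threat-like evidence
      print(sprt(views))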

  15. Clustering based on adherence data.

    PubMed

    Kiwuwa-Muyingo, Sylvia; Oja, Hannu; Walker, Sarah A; Ilmonen, Pauliina; Levin, Jonathan; Todd, Jim

    2011-03-08

    Adherence to a medical treatment means the extent to which a patient follows the instructions or recommendations by health professionals. There are direct and indirect ways to measure adherence which have been used for clinical management and research. Typically adherence measures are monitored over a long follow-up or treatment period, and some measurements may be missing due to death or other reasons. A natural question then is how to describe adherence behavior over the whole period in a simple way. In the literature, measurements over a period are usually combined just by using averages like percentages of compliant days or percentages of doses taken. In the paper we adapt an approach where patient adherence measures are seen as a stochastic process. Repeated measures are then analyzed as a Markov chain with finite number of states rather than as independent and identically distributed observations, and the transition probabilities between the states are assumed to fully describe the behavior of a patient. The patients can then be clustered or classified using their estimated transition probabilities. These natural clusters can be used to describe the adherence of the patients, to find predictors for adherence, and to predict the future events. The new approach is illustrated and shown to be useful with a simple analysis of a data set from the DART (Development of AntiRetroviral Therapy in Africa) trial in Uganda and Zimbabwe.
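
    The transition-probability representation of adherence can be sketched by estimating a per-patient Markov transition matrix from a binary daily adherence sequence and clustering patients on the flattened probabilities; the sequences below are random placeholders rather than DART data.

      import numpy as np
      from sklearn.cluster import KMeans

      def transition_matrix(states, n_states=2):
          # Estimate a Markov transition matrix from one patient's sequence of
          # adherence states (e.g. 0 = non-adherent day, 1 = adherent day).
          counts = np.zeros((n_states, n_states))
          for a, b in zip(states[:-1], states[1:]):
              counts[a, b] += 1
          rows = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n_states),
                           where=rows > 0)

      # Hypothetical daily adherence sequences for a handful of patients.
      rng = np.random.default_rng(5)
      patients = [rng.integers(0, 2, size=200) for _ in range(6)]
      features = np.array([transition_matrix(s).ravel() for s in patients])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
      print(labels)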

  16. A Cross-Sectional Comparison of the Effects of Phonotactic Probability and Neighborhood Density on Word Learning by Preschool Children

    ERIC Educational Resources Information Center

    Hoover, Jill R.; Storkel, Holly L.; Hogan, Tiffany P.

    2010-01-01

    Two experiments examined the effects of phonotactic probability and neighborhood density on word learning by 3-, 4-, and 5-year-old children. Nonwords orthogonally varying in probability and density were taught with learning and retention measured via picture naming. Experiment 1 used a within story probability/across story density exposure…

  17. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  18. Unsupervised learning of discriminative edge measures for vehicle matching between nonoverlapping cameras.

    PubMed

    Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh

    2008-04-01

    This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability that vehicle images from two distinct cameras are from the same vehicle or different vehicle(s). We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminant and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data of over 200 vehicles at different times of day demonstrate promising results.
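
    The sketch below illustrates only the discriminant step: weighting a three-element edge-measure vector with Fisher's Linear Discriminant to separate same-vehicle from different-vehicle pairs. It omits the unsupervised sample collection and the Gibbs sampling of the paper, and all numbers are synthetic placeholders.

```python
import numpy as np

def fisher_lda_weights(X_same, X_diff):
    """Fisher's linear discriminant: weight vector that best separates
    'same vehicle' from 'different vehicle' measurement vectors."""
    mu_s, mu_d = X_same.mean(axis=0), X_diff.mean(axis=0)
    Sw = np.cov(X_same, rowvar=False) + np.cov(X_diff, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_s - mu_d)
    return w / np.linalg.norm(w)

# Hypothetical 3-element edge-measure vectors for aligned edge-map pairs
rng = np.random.default_rng(1)
X_same = rng.normal([0.8, 0.7, 0.9], 0.1, size=(50, 3))
X_diff = rng.normal([0.4, 0.3, 0.5], 0.1, size=(50, 3))

w = fisher_lda_weights(X_same, X_diff)
threshold = 0.5 * (X_same @ w).mean() + 0.5 * (X_diff @ w).mean()
new_pair = np.array([0.75, 0.65, 0.85])
print("same vehicle" if new_pair @ w > threshold else "different vehicles")
```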

  19. An automated inner dimensional measurement system based on a laser displacement sensor for long-stepped pipes.

    PubMed

    Zhang, Fumin; Qu, Xinghua; Ouyang, Jianfei

    2012-01-01

    A novel measurement prototype based on a mobile vehicle that carries a laser scanning sensor is proposed. The prototype is intended for the automated measurement of the interior 3D geometry of large-diameter long-stepped pipes. The laser displacement sensor, which has a small measurement range, is mounted on an extended arm of known length and is scanned to improve the measurement accuracy for large-sized pipes. A fixing mechanism based on two sections is designed to ensure that the stepped pipe is concentric with the axis of rotation of the system. Data are acquired in a cylindrical coordinate system and fitted to a circle to determine the diameter. Systematic errors covering arm length, tilt, and offset errors are analyzed and calibrated. The proposed system is applied to sample parts and the results are discussed to verify its effectiveness. The technique measures a diameter of 600 mm with an uncertainty of 0.02 mm at a 95% confidence probability. A repeatability test is performed to examine precision, which is 1.1 μm. A laser tracker is used to verify the measurement accuracy of the system, which is evaluated as 9 μm within a diameter of 600 mm.
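
    A minimal sketch of the "cylindrical data fitted to a circle" step, assuming a simplified geometry in which the radial distance is the arm length plus the sensor reading; the arm length, radius and noise level are illustrative, and the algebraic (Kasa) fit is a generic stand-in for the paper's fitting procedure.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: returns centre and radius."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Simulated scan: sensor on an arm of known length measures the gap to the
# pipe wall while rotating; radial distance = arm_length + sensor reading.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
arm_length, true_radius = 250.0, 300.0            # mm (illustrative)
reading = (true_radius - arm_length) + rng.normal(0, 0.005, theta.size)
rho = arm_length + reading
x, y = rho * np.cos(theta), rho * np.sin(theta)

cx, cy, r = fit_circle(x, y)
print(f"estimated diameter: {2 * r:.3f} mm")
```

    Arm-length, tilt and offset errors would enter this model as biases in `arm_length` and in the assumed rotation axis, which is why they must be calibrated before fitting.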

  20. Risk perceptions of arsenic in tap water and consumption of bottled water

    NASA Astrophysics Data System (ADS)

    Jakus, Paul M.; Shaw, W. Douglass; Nguyen, To N.; Walker, Mark

    2009-05-01

    The demand for bottled water has increased rapidly over the past decade, but bottled water is extremely costly compared to tap water. The convenience of bottled water surely matters to consumers, but are other factors at work? This manuscript examines whether purchases of bottled water are associated with the perceived risk of tap water. All of the past studies on bottled water consumption have used simple scale measures of perceived risk that do not correspond to the risk measures used by risk analysts. We elicit a probability-based measure of risk and find that as perceived risks rise, expenditures on bottled water rise.

  1. Naive Probability: A Mental Model Theory of Extensional Reasoning.

    ERIC Educational Resources Information Center

    Johnson-Laird, P. N.; Legrenzi, Paolo; Girotto, Vittorio; Legrenzi, Maria Sonino; Caverni, Jean-Paul

    1999-01-01

    Outlines a theory of naive probability in which individuals who are unfamiliar with the probability calculus can infer the probabilities of events in an "extensional" way. The theory accommodates reasoning based on numerical premises, and explains how naive reasoners can infer posterior probabilities without relying on Bayes's theorem.…

  2. Interactive Model Visualization for NET-VISA

    NASA Astrophysics Data System (ADS)

    Kuzma, H. A.; Arora, N. S.

    2013-12-01

    NET-VISA is a probabilistic system developed for seismic network processing of data measured on the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). NET-VISA is composed of a Generative Model (GM) and an Inference Algorithm (IA). The GM is an explicit mathematical description of the relationships between various factors in seismic network analysis. Some of the relationships inside the GM are deterministic and some are statistical. Statistical relationships are described by probability distributions, the exact parameters of which (such as mean and standard deviation) are found by training NET-VISA on recent data. The IA uses the GM to evaluate the probability of various events and associations, searching for the seismic bulletin that has the highest overall probability and is consistent with a given set of measured arrivals. An Interactive Model Visualization tool (IMV) has been developed which makes 'peeking into' the GM simple and intuitive through a web-based interface. For example, it is now possible to access the probability distributions for attributes of events and arrivals, such as the detection rate at each station for each of 14 phases. It also clarifies the assumptions and prior knowledge that are incorporated into NET-VISA's event determination. When NET-VISA is retrained, the IMV will serve as a visual quality-control tool, both for testing that the training has been accomplished correctly and for checking that the IMS network has not changed unexpectedly. A preview of the IMV will be shown at this poster presentation. [Figure captions: homepage of the IMV; the IMV showing the current model file and reference image.]

  3. A discrimination method for the detection of pneumonia using chest radiograph.

    PubMed

    Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R

    2010-03-01

    This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. This study recommends detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column-sum energy texture measures, for which the misclassification probabilities were less than 0.15. Copyright © 2009 Elsevier Ltd. All rights reserved.
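
    A minimal sketch of a two-class discriminant rule with bivariate normal class models of the two transformed texture measures; the means, covariances and priors below are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical bivariate normal models of the two transformed texture measures
pneumonia = multivariate_normal(mean=[2.0, 1.5], cov=[[0.30, 0.05], [0.05, 0.20]])
normal    = multivariate_normal(mean=[1.0, 0.8], cov=[[0.25, 0.02], [0.02, 0.15]])

def classify(features, prior_pneumonia=0.5):
    """Assign the class with the larger posterior density (discriminant rule)."""
    p1 = prior_pneumonia * pneumonia.pdf(features)
    p0 = (1 - prior_pneumonia) * normal.pdf(features)
    return ("pneumonia" if p1 > p0 else "normal"), p1 / (p0 + p1)

label, posterior = classify([1.8, 1.4])
print(label, f"posterior probability {posterior:.2f}")
```

    A probability ellipsoid corresponds to a contour of constant Mahalanobis distance under one of these bivariate normal models.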

  4. Tracking Object Existence From an Autonomous Patrol Vehicle

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and on whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
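
    A minimal sketch of a Bayesian existence update with a variable probability of detection and simple confirm/delete thresholds; the detection and false-alarm probabilities and the thresholds are illustrative assumptions, not the values or exact equations used in this work.

```python
def update_existence(p_exist, detected, p_detect_if_exists, p_false_alarm):
    """Bayes update of the probability that a hypothesised object exists,
    given whether it was detected in the current time step."""
    if detected:
        num = p_detect_if_exists * p_exist
        den = num + p_false_alarm * (1.0 - p_exist)
    else:
        num = (1.0 - p_detect_if_exists) * p_exist
        den = num + (1.0 - p_false_alarm) * (1.0 - p_exist)
    return num / den

def status(p_exist, confirm=0.95, delete=0.05):
    """Simple SPRT-style thresholds on the existence probability."""
    if p_exist >= confirm:
        return "confirmed"
    if p_exist <= delete:
        return "deleted"
    return "suspected"

p = 0.5                                   # prior for a newly hypothesised object
for in_range, detected in [(True, True), (True, True), (False, False), (True, False)]:
    pd = 0.8 if in_range else 0.0         # variable probability of detection
    p = update_existence(p, detected, pd, p_false_alarm=0.1)
    print(f"P(exists) = {p:.3f} -> {status(p)}")
```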

  5. Lattice Theory, Measures and Probability

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2007-11-01

    In this tutorial, I will discuss the concepts behind generalizing ordering to measuring and apply these ideas to the derivation of probability theory. The fundamental concept is that anything that can be ordered can be measured. Since we are in the business of making statements about the world around us, we focus on ordering logical statements according to implication. This results in a Boolean lattice, which is related to the fact that the corresponding logical operations form a Boolean algebra. The concept of logical implication can be generalized to degrees of implication by generalizing the zeta function of the lattice. The rules of probability theory arise naturally as a set of constraint equations. Through this construction we are able to neatly connect the concepts of order, structure, algebra, and calculus. The meaning of probability is inherited from the meaning of the ordering relation, implication, rather than being imposed in an ad hoc manner at the start.

  6. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

    The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes that are predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula that calculates the predicted probability of deviation from a given flight path is also derived. The predicted probability of deviation is calculated for all path candidates, and routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy a desired level of reliability.
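
    A minimal sketch of backward dynamic programming over route stages, where each link cost combines fuel cost with an expected deviation cost (deviation probability times a penalty). The stage graph, probabilities and penalty are invented for illustration and the cost model is a simplification of the one in the study.

```python
# links[stage][(i, j)] = (fuel_cost, p_deviation) from node i in this stage
# to node j in the next stage (illustrative numbers).
DEVIATION_PENALTY = 50.0
links = [
    {(0, 0): (10, 0.05), (0, 1): (12, 0.01)},
    {(0, 0): (15, 0.30), (0, 1): (18, 0.02), (1, 0): (11, 0.25), (1, 1): (14, 0.03)},
    {(0, 0): (9, 0.10), (1, 0): (8, 0.05)},
]

def optimal_route(links):
    """Backward dynamic programming over stages; running time grows
    linearly with the total number of links."""
    value = {0: 0.0}                       # cost-to-go at the destination
    policy = []
    for stage in reversed(range(len(links))):
        new_value, choice = {}, {}
        for (i, j), (fuel, p_dev) in links[stage].items():
            cost = fuel + p_dev * DEVIATION_PENALTY + value.get(j, float("inf"))
            if cost < new_value.get(i, float("inf")):
                new_value[i], choice[i] = cost, j
        value = new_value
        policy.append(choice)
    policy.reverse()
    return value[0], policy

cost, policy = optimal_route(links)
print(f"expected cost of optimal route: {cost:.1f}")
```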

  7. Link-Based Similarity Measures Using Reachability Vectors

    PubMed Central

    Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin

    2014-01-01

    We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. For this weight value, we propose to utilize the probability of reaching the specific object from the target object, computed using the "Random Walk with Restart" strategy. Then, we define the similarity between two objects as the cosine similarity of their two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
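
    A minimal sketch of the reachability-vector idea: compute a Random Walk with Restart vector for each object and compare two objects by the cosine of their vectors. The link graph, restart probability and iteration count are illustrative assumptions.

```python
import numpy as np

def rwr_vector(A, target, restart=0.15, n_iter=100):
    """Reachability vector via Random Walk with Restart: element j is the
    steady-state probability of being at node j when walks restart at 'target'."""
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
    e = np.zeros(A.shape[0]); e[target] = 1.0
    r = e.copy()
    for _ in range(n_iter):
        r = (1 - restart) * P.T @ r + restart * e
    return r

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Illustrative undirected link graph (e.g. citations among 5 papers)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

r0, r1 = rwr_vector(A, 0), rwr_vector(A, 1)
print(f"link-based similarity of objects 0 and 1: {cosine(r0, r1):.3f}")
```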

  8. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students

    NASA Astrophysics Data System (ADS)

    Sari, Dwi Ivayana; Budayasa, I. Ketut; Juniati, Dwi

    2017-08-01

    Formulation of mathematical learning goals is now oriented not only towards cognitive products but also towards cognitive processes, one of which is probabilistic thinking. Probabilistic thinking is needed by students to make decisions. Elementary school students are required to develop probabilistic thinking as a foundation for learning probability at a higher level. A framework of students' probabilistic thinking had been developed using the SOLO taxonomy, consisting of prestructural, unistructural, multistructural and relational probabilistic thinking. This study aimed to analyze probability task completion based on this taxonomy of probabilistic thinking. The subjects were two fifth-grade students, a boy and a girl, selected by administering a test of mathematical ability and choosing students with high mathematical ability. The subjects were given probability tasks covering sample space, the probability of an event and probability comparison. The data analysis consisted of categorization, reduction, interpretation and conclusion. Credibility of the data was established using time triangulation. The results show that the boy's probabilistic thinking in completing the probability tasks was at the multistructural level, while the girl's was at the unistructural level, indicating that the boy's level of probabilistic thinking was higher than the girl's. These results could help curriculum developers in formulating probability learning goals for elementary school students, and teachers could teach probability while taking gender differences into account.

  9. Measuring multifractality of stock price fluctuation using multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Yuan, Ying; Zhuang, Xin-tian; Jin, Xiu

    2009-06-01

    Analyzing the daily returns of the Shanghai stock price index with the MF-DFA method, it is found that there are two different sources of multifractality in the time series, namely fat-tailed probability distributions and non-linear temporal correlations. Based on that, a sliding window of 240 high-frequency observations covering 5 trading days was used to study the stock price index fluctuation. It is found that when the stock price index fluctuates sharply, a strong variability is clearly characterized by the generalized Hurst exponents h(q). Therefore, two measures, Δh and σ, based on the generalized Hurst exponents were proposed to compare financial risks before and after the Price Limits and the Reform of Non-tradable Shares. The empirical results verify the validity of the measures, and this has led to a better understanding of complex stock markets.
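
    A compact sketch of estimating the generalized Hurst exponents h(q) by MF-DFA and the width measure Δh = max h(q) - min h(q); the surrogate return series, scales and q values are illustrative, and the implementation is a simplified version of the standard MF-DFA recipe rather than the authors' code.

```python
import numpy as np

def mfdfa_hurst(x, scales, q_values, order=1):
    """Generalized Hurst exponents h(q) by multifractal DFA with order-1 detrending."""
    y = np.cumsum(x - np.mean(x))                          # profile
    h = []
    for q in q_values:
        logF = []
        for s in scales:
            n_seg = len(y) // s
            F2 = []
            # segments taken from the start and from the end of the profile
            for block in (y[: n_seg * s].reshape(n_seg, s),
                          y[len(y) - n_seg * s:].reshape(n_seg, s)):
                t = np.arange(s)
                for seg in block:
                    coef = np.polyfit(t, seg, order)
                    F2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F2 = np.asarray(F2)
            if q == 0:
                Fq = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
            logF.append(np.log(Fq))
        h.append(np.polyfit(np.log(scales), logF, 1)[0])   # slope = h(q)
    return np.array(h)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=4096)                  # fat-tailed surrogate returns
h = mfdfa_hurst(returns, scales=[16, 32, 64, 128, 256],
                q_values=np.array([-5, -3, -1, 1, 3, 5]))
print("h(q):", np.round(h, 3), " delta_h:", round(h.max() - h.min(), 3))
```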

  10. Determination of the Solar Energy Microclimate of the United States Using Satellite Data

    NASA Technical Reports Server (NTRS)

    Vonderharr, T. H.; Ellis, J. S.

    1978-01-01

    The determination of total solar energy reaching the ground over the United States using measurements from meteorological satellites as the basic data set is examined. The methods of satellite data processing are described. Uncertainty analysis and comparison of results with well calibrated surface pyranometers are used to estimate the probable error in the satellite-based determination of ground insolation. It is 10 to 15 percent for daily information, and about 5 percent for monthly values. However, the natural space and time variability of insolation is much greater than the uncertainty in the method. The most important aspect of the satellite-based technique is the ability to determine the solar energy reaching the ground over small areas where no other measurements are available. Thus, it complements the widely spaced solar radiation measurement network of ground stations.

  11. A cloud detection algorithm using the downwelling infrared radiance measured by an infrared pyrometer of the ground-based microwave radiometer

    DOE PAGES

    Ahn, M. H.; Han, D.; Won, H. Y.; ...

    2015-02-03

    For better utilization of the ground-based microwave radiometer, it is important to detect the presence of cloud in the measured data. Here, we introduce a simple and fast cloud detection algorithm that uses the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated as cloud-free only when both the spectral and temporal tests indicate cloud-free conditions. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while the broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection for different cloud-base altitudes: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
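
    A minimal sketch of the two-step test. The empirical clear-sky formula, its coefficients, the margins and the sample data are placeholders (the paper's regression coefficients and thresholds are not reproduced here); only the structure of the spectral-then-temporal decision is illustrated.

```python
import numpy as np

def predicted_clear_sky_tb(t_air_c, vapor_pressure_hpa, a=0.6, b=0.06):
    """Hypothetical empirical clear-sky brightness temperature (deg C) as a
    function of surface air temperature and water vapour pressure."""
    return a * t_air_c + b * vapor_pressure_hpa - 60.0

def is_cloud_free(tb_series_c, t_air_c, vapor_pressure_hpa,
                  spectral_margin=5.0, temporal_threshold=0.3):
    """Two-step test: (1) spectral test against the predicted clear-sky Tb,
    (2) temporal test on the one-minute variability of the measured Tb."""
    tb_clear = predicted_clear_sky_tb(t_air_c, vapor_pressure_hpa)
    spectral_ok = np.mean(tb_series_c) < tb_clear + spectral_margin
    temporal_ok = np.std(tb_series_c) < temporal_threshold
    return spectral_ok and temporal_ok

# One minute of infrared pyrometer samples (illustrative)
tb = np.array([-52.1, -52.3, -51.9, -52.0, -52.2, -52.1])
print(is_cloud_free(tb, t_air_c=15.0, vapor_pressure_hpa=12.0))
```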

  12. Validation of the i-STAT system for the analysis of blood gases and acid–base status in juvenile sandbar shark (Carcharhinus plumbeus)

    PubMed Central

    Harter, T. S.; Morrison, P. R.; Mandelman, J. W.; Rummer, J. L.; Farrell, A. P.; Brill, R. W.; Brauner, C. J.

    2015-01-01

    Accurate measurements of blood gases and acid–base status require an array of sophisticated laboratory equipment that is typically not available during field research; such is the case for many studies on the stress physiology, ecology and conservation of elasmobranch fish species. Consequently, researchers have adopted portable clinical analysers that were developed for the analysis of human blood characteristics, but often without thoroughly validating these systems for their use on fish. The aim of our study was to test the suitability of the i-STAT system, the most commonly used portable clinical analyser in studies on fish, for analysing blood gases and acid–base status in elasmobranchs, over a broad range of conditions and using the sandbar shark (Carcharhinus plumbeus) as a model organism. Our results indicate that the i-STAT system can generate useful measurements of whole blood pH, and the use of appropriate correction factors may increase the accuracy of results. The i-STAT system was, however, unable to generate reliable results for measurements of partial pressure of oxygen (PO2) and the derived parameter of haemoglobin O2 saturation. This is probably due to the effect of a closed-system temperature change on PO2 within the i-STAT cartridge and the fact that the temperature correction algorithms used by i-STAT assume a human temperature dependency of haemoglobin–O2 binding; in many ectotherms, this assumption will lead to equivocal i-STAT PO2 results. The in vivo partial pressure of CO2 (PCO2) in resting sandbar sharks is probably below the detection limit for PCO2 in the i-STAT system, and the measurement of higher PCO2 tensions was associated with a large measurement error. In agreement with previous work, our results indicate that the i-STAT system can generate useful data on whole blood pH in fishes, but not blood gases. PMID:27293687

  13. Assessment of Mental, Emotional and Physical Stress through Analysis of Physiological Signals Using Smartphones.

    PubMed

    Mohino-Herranz, Inma; Gil-Pita, Roberto; Ferreira, Javier; Rosa-Zurera, Manuel; Seoane, Fernando

    2015-10-08

    Determining the stress level of a subject in real time could be of special interest in certain professional activities, allowing the monitoring of soldiers, pilots, emergency personnel and other professionals responsible for human lives. Assessment of current mental fitness for executing the task at hand might avoid unnecessary risks. To obtain this knowledge, two physiological measurements were recorded in this work using customized non-invasive wearable instrumentation that measures electrocardiogram (ECG) and thoracic electrical bioimpedance (TEB) signals. The relevant information from each measurement is extracted via evaluation of a reduced set of selected features. These features are primarily obtained from filtered and processed versions of the raw time measurements, with calculations of certain statistical and descriptive parameters. Selection of the reduced set of features was performed using genetic algorithms, thus constraining the computational cost of the real-time implementation. Different classification approaches were studied, but neural networks were chosen for this investigation because they represent a good tradeoff between the intelligence of the solution and computational complexity. Three different application scenarios were considered. In the first scenario, the proposed system is capable of distinguishing among different types of activity with a 21.2% probability of error, for activities coded as neutral, emotional, mental and physical. In the second scenario, the proposed solution distinguishes among three different emotional states, neutral, sadness and disgust, with a probability of error of 4.8%. In the third scenario, the system is able to distinguish between low mental load and mental overload with a probability of error of 32.3%. The computational cost was calculated, and the solution was implemented in commercially available Android-based smartphones. The results indicate that the computational load of executing such a monitoring solution is negligible compared to the nominal computational load of current smartphones.

  14. Logistic regression model for detecting radon prone areas in Ireland.

    PubMed

    Elío, J; Crowley, Q; Scanlon, R; Hodgson, J; Long, S

    2017-12-01

    A new high spatial resolution radon risk map of Ireland has been developed, based on a combination of indoor radon measurements (n=31,910) and relevant geological information (i.e. Bedrock Geology, Quaternary Geology, soil permeability and aquifer type). Logistic regression was used to predict the probability of having an indoor radon concentration above the national reference level of 200 Bq m⁻³ in Ireland. The four geological datasets evaluated were found to be statistically significant, and, based on combinations of these four variables, the predicted probabilities ranged from 0.57% to 75.5%. Results show that the Republic of Ireland may be divided into three main radon risk categories: High (HR), Medium (MR) and Low (LR). The probability of having an indoor radon concentration above 200 Bq m⁻³ in each area was found to be 19%, 8% and 3%, respectively. In the Republic of Ireland, the population affected by radon concentrations above 200 Bq m⁻³ is estimated at ca. 460k (about 10% of the total population). Of these, 57% (265k), 35% (160k) and 8% (35k) are in High, Medium and Low Risk Areas, respectively. Our results provide a high spatial resolution utility which permits customised radon-awareness information to be targeted at specific geographic areas. Copyright © 2017 Elsevier B.V. All rights reserved.
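
    A minimal sketch of how a fitted logistic model converts a site's geological indicators into an exceedance probability. The intercept, coefficients and indicator names below are hypothetical placeholders, not the published model.

```python
import numpy as np

# Hypothetical logistic model: log-odds of indoor radon > 200 Bq/m3 as a sum of
# an intercept and coefficients for binary geological indicators (illustrative).
INTERCEPT = -3.5
COEF = {
    "bedrock_granite": 1.8,
    "quaternary_till": 0.6,
    "soil_high_permeability": 1.1,
    "aquifer_karstified": 0.9,
}

def exceedance_probability(site):
    """Probability that indoor radon exceeds 200 Bq/m3 for a site described
    by binary geological indicators."""
    log_odds = INTERCEPT + sum(COEF[k] for k, present in site.items() if present)
    return 1.0 / (1.0 + np.exp(-log_odds))

site = {"bedrock_granite": True, "quaternary_till": False,
        "soil_high_permeability": True, "aquifer_karstified": False}
p = exceedance_probability(site)
print(f"P(radon > 200 Bq/m3) = {p:.2%}")   # map this probability to a risk category
```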

  15. Identification of Thyroid Receptor Ant/Agonists in Water Sources Using Mass Balance Analysis and Monte Carlo Simulation

    PubMed Central

    Shi, Wei; Wei, Si; Hu, Xin-xin; Hu, Guan-jiu; Chen, Cu-lan; Wang, Xin-ru; Giesy, John P.; Yu, Hong-xia

    2013-01-01

    Some synthetic chemicals that have been shown to disrupt thyroid hormone (TH) function have been detected in surface waters, and people have the potential to be exposed through drinking water. Here, the presence of thyroid-active chemicals and their toxic potential in drinking water sources in the Yangtze River Delta were investigated by instrumental analysis combined with a cell-based reporter gene assay. A novel approach using Monte Carlo simulation was developed for evaluating the potential risks of the measured concentrations of TH agonists and antagonists and for determining the major contributors to the observed thyroid receptor (TR) antagonist potency. None of the extracts exhibited TR agonist potency, while 12 of 14 water samples exhibited TR antagonistic potency. The most probable observed antagonist equivalents ranged from 1.4 to 5.6 µg di-n-butyl phthalate (DNBP)/L, which poses a potential risk in the water sources. Based on Monte Carlo simulation and the related mass balance analysis, DNBP accounted for 64.4% of the entire observed antagonist toxic unit in the water sources, while diisobutyl phthalate (DIBP), di-n-octyl phthalate (DNOP) and di-2-ethylhexyl phthalate (DEHP) also contributed. The most probable observed equivalent and the most probable relative potency (REP) derived from Monte Carlo simulation are useful for potency comparisons and for screening the responsible chemicals. PMID:24204563
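
    A minimal sketch of the Monte Carlo mass-balance idea: sample uncertain relative potencies, convert each measured chemical to DNBP equivalents, and report the most probable total equivalent and each chemical's contribution. The concentrations, REP medians and uncertainty model are illustrative assumptions, not the measured values of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical measured concentrations (ug/L) and median relative potencies (REP)
measured = {"DNBP": 2.0, "DIBP": 1.5, "DNOP": 0.8, "DEHP": 3.0}
rep_median = {"DNBP": 1.0, "DIBP": 0.4, "DNOP": 0.3, "DEHP": 0.1}

# Monte Carlo: lognormal uncertainty on each REP, then DNBP-equivalent totals
equivalents = {c: measured[c] * rep_median[c] *
                  rng.lognormal(mean=0.0, sigma=0.3, size=N)
               for c in measured}
total = sum(equivalents.values())

counts, edges = np.histogram(total, bins=200)
most_probable = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"most probable antagonist equivalent: {most_probable:.2f} ug DNBP/L")
for c in measured:
    print(f"  {c}: {np.mean(equivalents[c] / total):.1%} of total")
```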

  16. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using a generalized Gaussian density (GGD), and the similarity between two subbands is computed as the Jensen-Shannon divergence of the two GGDs. To preserve more useful information from the source images, new fusion rules are developed to combine the subbands of the various frequencies. That is, the low-frequency subbands are fused using two activity measures based on the regional standard deviation and the Shannon entropy, while the high-frequency subbands are merged via weight maps determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms conventional NSCT-based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
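
    A minimal sketch of the similarity measurement step: fit two subbands with zero-mean generalized Gaussian densities and compute their Jensen-Shannon divergence by numerical integration (smaller divergence means more similar subbands). The GGD parameters below are illustrative, not fitted NSCT coefficients.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def ggd_pdf(x, alpha, beta):
    """Zero-mean generalized Gaussian density with scale alpha and shape beta."""
    return beta / (2 * alpha * gamma(1 / beta)) * np.exp(-(np.abs(x) / alpha) ** beta)

def js_divergence(p_params, q_params, lim=50.0):
    """Jensen-Shannon divergence between two GGDs by numerical integration."""
    p = lambda x: ggd_pdf(x, *p_params)
    q = lambda x: ggd_pdf(x, *q_params)
    m = lambda x: 0.5 * (p(x) + q(x))

    def kl_term(f, g):
        integrand = lambda x: f(x) * np.log(f(x) / g(x))
        return quad(integrand, -lim, lim, limit=200)[0]

    return 0.5 * kl_term(p, m) + 0.5 * kl_term(q, m)

# Illustrative GGD fits (alpha, beta) for two subbands
d = js_divergence((1.0, 1.2), (1.3, 0.9))
print(f"Jensen-Shannon divergence between subbands: {d:.4f}")
```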

  17. 76 FR 74699 - Allocation of Assets in Single-Employer Plans; Valuation of Benefits and Assets; Expected...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... a low, medium, or high probability of retiring early. The determination is based on the year a... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...

  18. 75 FR 74622 - Allocation of Assets in Single-Employer Plans; Valuation of Benefits and Assets; Expected...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ..., medium, or high probability of retiring early. The determination is based on the year a participant would... the expected retirement age after the probability of early retirement has been determined using Table I. These tables establish, by probability category, the expected retirement age based on both the...

  19. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  20. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  1. Determination of the measurement threshold in gamma-ray spectrometry.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2017-03-01

    In gamma-ray spectrometry the measurement threshold describes the lower boundary of the interval of peak areas originating in the response of the spectrometer to gamma-rays from the sample measured. In this sense it presents a generalization of the net indication corresponding to the decision threshold, which is the measurement threshold at the quantity value zero for a predetermined probability of making errors of the first kind. Measurement thresholds were determined for peaks appearing in the spectra of the radon daughters 214Pb and 214Bi by measuring the spectrum 35 times under repeatable conditions. For the calculation of the measurement threshold, the probability of detection of the peaks and the mean relative uncertainty of the peak area were used. The relative measurement thresholds, the ratios between the measurement threshold and the mean peak-area uncertainty, were determined for 54 peaks for which the probability of detection varied between a few percent and about 95% and the relative peak-area uncertainty between 30% and 80%. The relative measurement thresholds vary considerably from peak to peak, although the nominal value of the sensitivity parameter defining the sensitivity for locating peaks was equal for all peaks. At the value of the sensitivity parameter used, the peak analysis does not locate peaks corresponding to the decision threshold with a probability in excess of 50%. This implies that peaks in the spectrum may not be located even though the true value of the measurand exceeds the decision threshold. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Success Probability Analysis for Shuttle Based Microgravity Experiments

    NASA Technical Reports Server (NTRS)

    Liou, Ying-Hsin Andrew

    1996-01-01

    Presented in this report are the results of data analysis of shuttle-based microgravity flight experiments. Potential factors were identified in the previous grant period, and in this period 26 factors were selected for data analysis. In this project, the degree of success was developed and used as the performance measure. 293 of the 391 experiments in the Lewis Research Center Microgravity Database were assigned degrees of success. Frequency analysis and analysis of variance were conducted to determine the significance of the factors that affect experiment success.

  3. Do Glucose and Caffeine Nasal Sprays Influence Exercise or Cognitive Performance?

    PubMed

    De Pauw, Kevin; Roelands, Bart; Van Cutsem, Jeroen; Decroix, Lieselot; Valente, Angelica; Taehee, Kim; Lettan, Robert B; Carrillo, Andres E; Meeusen, Romain

    2017-10-01

    Nasal spray (NAS) containing caffeine (CAF) or glucose (GLUC) activates sensory(motor) cortices. To investigate the influence of CAF or GLUC NAS on exercise and cognitive performance. Eleven male subjects (age 22 ± 2 y) performed a maximal cycle test and 2 familiarization and 3 experimental trials. Each trial included a 30-s Wingate test and a 30-min time-trial (TT) performance test interspersed by 15 min of rest. Before and after each exercise test a Stroop task was conducted. Placebo NAS with or without CAF or GLUC was provided before each exercise session and at each completed 25% of the TT. Exercise-performance, physiological, and cognitive measures were obtained. Magnitude-based inferences determined the likelihood that NAS solutions would be beneficial, trivial, or negative to exercise-performance measures based on the smallest worthwhile effect. Physiological and cognitive measures were analyzed using (non)parametric tests (P < .05). GLUC NAS substantially increased the average power output during the TT (very likely beneficial: 98%). No further worthwhile exercise-performance enhancements were found for both substances. In addition, no significant differences in physiological and cognitive measures were observed. In line with mouth rinsing, GLUC was shown to substantially enhance endurance performance, probably due to the activation of the olfactory pathway and/or extra-oral sweet-taste receptors. GLUC NAS enhances endurance performance, which indicates a novel administration route. The higher activity in sensory brain cortices probably elicited the ergogenic effect. However, no further physiological and cognitive changes occurred, indicating that higher doses of substrates might be required.

  4. Uniform California earthquake rupture forecast, version 2 (UCERF 2)

    USGS Publications Warehouse

    Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.

    2009-01-01

    The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.

  5. Bayesian Estimation of Small Effects in Exercise and Sports Science.

    PubMed

    Mengersen, Kerrie L; Drovandi, Christopher C; Robert, Christian P; Pyne, David B; Gore, Christopher J

    2016-01-01

    The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens, live high-train low (LHTL) and intermittent hypoxic exposure (IHE), on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
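
    A minimal sketch of how such probabilistic statements are read off posterior draws: the probability of a substantial effect is simply the fraction of MCMC samples beyond the smallest worthwhile effect. The draws below are a normal stand-in for illustration; in practice they would come from the fitted Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior draws of a treatment difference (e.g. % change in
# hemoglobin mass, LHTL minus IHE); hypothetical, not the study's posterior.
posterior_draws = rng.normal(loc=3.0, scale=1.5, size=20_000)

smallest_worthwhile_effect = 1.0     # threshold for a 'substantial' effect
p_positive = np.mean(posterior_draws > smallest_worthwhile_effect)
p_trivial  = np.mean(np.abs(posterior_draws) <= smallest_worthwhile_effect)
p_negative = np.mean(posterior_draws < -smallest_worthwhile_effect)

print(f"P(substantially positive) = {p_positive:.2f}")
print(f"P(trivial)                = {p_trivial:.2f}")
print(f"P(substantially negative) = {p_negative:.2f}")
```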

  6. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks.

    PubMed

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W C; Cao, Jinde

    2015-08-28

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then we investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results.

  7. Synchronization Analysis of Master-Slave Probabilistic Boolean Networks

    PubMed Central

    Lu, Jianquan; Zhong, Jie; Li, Lulu; Ho, Daniel W. C.; Cao, Jinde

    2015-01-01

    In this paper, we analyze the synchronization problem of master-slave probabilistic Boolean networks (PBNs). The master Boolean network (BN) is a deterministic BN, while the slave BN is determined by a series of possible logical functions, each with a certain probability, at each discrete time point. We first define the synchronization of master-slave PBNs with probability one, and then we investigate synchronization with probability one. By resorting to a new approach called the semi-tensor product (STP), the master-slave PBNs are expressed in equivalent algebraic forms. Based on the algebraic form, some necessary and sufficient criteria are derived to guarantee synchronization with probability one. Further, we study the synchronization of master-slave PBNs in probability. Synchronization in probability implies that for any initial states, the master BN can be synchronized by the slave BN with a certain probability, while synchronization with probability one implies that the master BN can be synchronized by the slave BN with probability one. Based on the equivalent algebraic form, some efficient conditions are derived to guarantee synchronization in probability. Finally, several numerical examples are presented to show the effectiveness of the main results. PMID:26315380

  8. Affective and cognitive factors influencing sensitivity to probabilistic information.

    PubMed

    Tyszka, Tadeusz; Sawicki, Przemyslaw

    2011-11-01

    In study 1 different groups of female students were randomly assigned to one of four probabilistic information formats. Five different levels of probability of a genetic disease in an unborn child were presented to participants (within-subject factor). After the presentation of the probability level, participants were requested to indicate the acceptable level of pain they would tolerate to avoid the disease (in their unborn child), their subjective evaluation of the disease risk, and their subjective evaluation of being worried by this risk. The results of study 1 confirmed the hypothesis that an experience-based probability format decreases the subjective sense of worry about the disease, thus, presumably, weakening the tendency to overrate the probability of rare events. Study 2 showed that for the emotionally laden stimuli, the experience-based probability format resulted in higher sensitivity to probability variations than other formats of probabilistic information. These advantages of the experience-based probability format are interpreted in terms of two systems of information processing: the rational deliberative versus the affective experiential and the principle of stimulus-response compatibility. © 2011 Society for Risk Analysis.

  9. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

    Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds-ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of the marginal and association parameters for both variables, and the odds-ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds-ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  10. QoS measurement of workflow-based web service compositions using Colored Petri net.

    PubMed

    Nematzadeh, Hossein; Motameni, Homayun; Mohamad, Radziah; Nematzadeh, Zahra

    2014-01-01

    Workflow-based web service composition (WB-WSC) is one of the main composition categories in service-oriented architecture (SOA). Eflow, the polymorphic process model (PPM), and the business process execution language (BPEL) are the main techniques in the WB-WSC category. With the maturity of web services, measuring the quality of composite web services developed with different techniques has become one of the most important challenges in today's web environments. Businesses should try to provide good quality, with respect to customers' requirements, for a composed web service. Thus, it is important to measure quality of service (QoS), which refers to nonfunctional parameters, so that the quality of a given web service composition can be assessed. This paper sought a deterministic analytical method for dependability and performance measurement using Colored Petri nets (CPN) with explicit routing constructs and the application of probability theory. A computer tool called WSET was also developed for modeling and supporting QoS measurement through simulation.

  11. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  12. ProbOnto: ontology and knowledge base of probability distributions.

    PubMed

    Swat, Maciej J; Grenon, Pierre; Wimalaratne, Sarala

    2016-09-01

    Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and comprehensive ontology exists, nor any database allowing programmatic access. ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability: http://probonto.org. Contact: mjswat@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

    The optimal phase estimation strategy is derived when partial a priori knowledge of the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging a gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (the local approach based on Fisher information) and no a priori knowledge (the global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  14. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  15. Alternative probability theories for cognitive psychology.

    PubMed

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  16. Measurement of Two- and Three-Nucleon Short-Range Correlation Probabilities in Nuclei

    NASA Astrophysics Data System (ADS)

    Egiyan, K. S.; Dashyan, N. B.; Sargsian, M. M.; Strikman, M. I.; Weinstein, L. B.; Adams, G.; Ambrozewicz, P.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Avakian, H.; Baghdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Batourine, V.; Battaglieri, M.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Biselli, A. S.; Bonner, B. E.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Bultuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Chen, S.; Cole, P. L.; Coltharp, P.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Sanctis, E. De; Devita, R.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Dharmawardane, K. V.; Djalali, C.; Dodge, G. E.; Donnelly, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gevorgyan, N. G.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hardie, J.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Hu, J.; Huertas, M.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Joo, K.; Juengst, H. G.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A.; Klusman, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Livingston, K.; Maximon, L. C.; McAleer, S.; McKinnon, B.; McNabb, J. W.; Mecking, B. A.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morrow, S. A.; Mueller, J.; Mutchler, G. S.; Nadel-Turonski, P.; Napolitano, J.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. B.; Niyazov, R. A.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Pasyuk, E.; Peterson, C.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Suleiman, R.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thompson, R.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, J.

    2006-03-01

    The ratios of the inclusive electron scattering cross sections of 4He, 12C, and 56Fe to that of 3He have been measured at 1 < xB < 3. For Q2 > 1.4 GeV2, the ratios exhibit two separate plateaus, at 1.5 < xB < 2 and at xB > 2.25. This pattern is predicted by models that include 2- and 3-nucleon short-range correlations (SRC). Relative to A=3, the per-nucleon probabilities of 3-nucleon SRC are 2.3, 3.1, and 4.4 times larger for A=4, 12, and 56. This is the first measurement of 3-nucleon SRC probabilities in nuclei.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Albert F., E-mail: wagner@anl.gov; Dawes, Richard; Continetti, Robert E.

    The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO2 on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest-descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined, with the best qualitative agreement with experiment occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also for a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but, when matched at the saddle point, have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN, and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment. This surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared, but only the small-curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP. The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.

  18. MO-AB-BRA-04: Radiation Measurements with a DNA Double-Strand-Break Dosimeter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obeidat, M; Cline, K; Stathakis, S

    Purpose: Many types of dosimeters are used to measure radiation, but none of them directly measures the biological effect of the dose. The purpose here is to create a dosimeter that can measure the probability of double-strand breaks (DSB) in DNA, which is directly related to the biological effect of radiation. Methods: The dosimeter has DNA strands, which are labeled on one end with biotin and on the other with fluorescein. The biotin attaches these strands to magnetic beads. We suspended the DNA dosimeter in phosphate-buffered saline (PBS) as it matches the internal environment of the body. We placed small volumes (50 µL) of the DNA dosimeter into tubes and irradiated these samples in a water-equivalent plastic phantom at several doses (three samples per dose). After irradiating the samples, a magnet was placed against the tubes. The fluorescein attached to broken DNA strands was extracted (called the supernatant) and placed into a different tube. The fluorescein on the unbroken strands remained attached to the beads in the tube and was re-suspended with 50 µL of PBS. A fluorescence reader was used to measure the fluorescence of both the re-suspended beads and the supernatant. To prove that we are measuring DSB, we tested dosimeter response with two different lengths of attached DNA strands (1 and 4 kilo-base pairs). Results: The probabilities of DSB at dose levels of 5, 10, 25, and 50 Gy were 0.05, 0.08, 0.12, and 0.19, respectively, while the coefficients of variation were 0.14, 0.07, 0.02, and 0.01, respectively. The 4 kilo-base-pair dosimeter produced 5.3 times the response of the 1 kilo-base-pair dosimeter. Conclusion: The DNA dosimeter yields a measurable response to dose that scales with the DNA strand length. The goal now is to refine the dosimeter fabrication to reproducibly achieve a low coefficient of variation at the lower doses. This work was supported in part by Yarmouk University (Irbid, Jordan) and CPRIT (RP140105).
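
    A minimal sketch of the read-out arithmetic implied by the Methods section, assuming the per-sample DSB probability is taken as the supernatant (broken-strand) fluorescence divided by the total fluorescence of supernatant plus re-suspended beads; the replicate readings below are hypothetical placeholders, not the measured data.

        import numpy as np

        def dsb_probability(supernatant, beads):
            """Fraction of strands broken: broken-strand signal over total signal."""
            supernatant = np.asarray(supernatant, dtype=float)
            beads = np.asarray(beads, dtype=float)
            return supernatant / (supernatant + beads)

        def summarize(probabilities):
            """Mean DSB probability and coefficient of variation across replicates."""
            p = np.asarray(probabilities, dtype=float)
            return p.mean(), p.std(ddof=1) / p.mean()

        # Hypothetical fluorescence readings for three replicates at one dose level.
        supernatant_counts = [520.0, 545.0, 510.0]
        bead_counts = [9800.0, 9600.0, 9900.0]

        mean_p, cv = summarize(dsb_probability(supernatant_counts, bead_counts))
        print(f"DSB probability = {mean_p:.3f}, coefficient of variation = {cv:.2f}")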

  19. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables in Hilbert spaces of dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.
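
    The obstruction described above can be illustrated with the standard Margenau-Hill (real Kirkwood-Dirac) quasiprobability, whose marginals always reproduce the Born probabilities but whose entries can go negative for noncommuting observables; this is a textbook illustration only, not the imprecise-probability construction of the paper, and the state and observables below are arbitrary choices.

        import numpy as np

        def margenau_hill(psi, basis_a, basis_b):
            """Real quasi-joint distribution Q[i, j] = Re <psi|a_i><a_i|b_j><b_j|psi>.
            Row and column sums equal the Born probabilities of the two observables,
            but individual entries may be negative."""
            q = np.zeros((len(basis_a), len(basis_b)))
            for i, a in enumerate(basis_a):
                for j, b in enumerate(basis_b):
                    q[i, j] = np.real(np.vdot(psi, a) * np.vdot(a, b) * np.vdot(b, psi))
            return q

        # Eigenbases of sigma_z and sigma_x (noncommuting qubit observables).
        z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
        x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

        theta = 2.5                                    # arbitrary pure qubit state
        psi = np.array([np.cos(theta), np.sin(theta)])

        q = margenau_hill(psi, z_basis, x_basis)
        print("quasi-joint distribution:\n", q)
        print("row sums (Born probs for sigma_z):", q.sum(axis=1))
        print("column sums (Born probs for sigma_x):", q.sum(axis=0))
        print("contains negative entries:", bool((q < 0).any()))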

  20. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small-target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted according to local image complexity and target existence probability. The target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and feeds the corresponding target existence probabilities back to the detector. Unlike fixed-size methods, the proposed method can accommodate size-varying targets because it makes no special assumptions about the size or shape of small targets. Because the decomposition is exact, classical target measurements can be extended, and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
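
    The core of the detection step is a low-rank (background) plus sparse (target) split of each image block. The sketch below implements the generic inexact augmented Lagrange multiplier (robust PCA) iteration that such a decomposition builds on; the per-block weighting by local complexity and the coupling to the LMB tracker described in the abstract are not included, and the parameter values are conventional defaults rather than the authors' settings.

        import numpy as np

        def soft_threshold(x, tau):
            """Elementwise shrinkage used for the sparse (target) component."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def svd_threshold(x, tau):
            """Singular-value shrinkage used for the low-rank (background) component."""
            u, s, vt = np.linalg.svd(x, full_matrices=False)
            return u @ np.diag(soft_threshold(s, tau)) @ vt

        def inexact_alm_rpca(d, lam=None, tol=1e-7, max_iter=200):
            """Split a block d into low-rank background L and sparse target S by solving
            min ||L||_* + lam*||S||_1  subject to  d = L + S  with inexact ALM."""
            m, n = d.shape
            if lam is None:
                lam = 1.0 / np.sqrt(max(m, n))
            norm_d = np.linalg.norm(d, 'fro')
            spectral = np.linalg.norm(d, 2)
            mu, rho = 1.25 / spectral, 1.5
            y = d / max(spectral, np.max(np.abs(d)) / lam)   # scaled dual variable
            sparse = np.zeros_like(d)
            for _ in range(max_iter):
                low_rank = svd_threshold(d - sparse + y / mu, 1.0 / mu)
                sparse = soft_threshold(d - low_rank + y / mu, lam / mu)
                residual = d - low_rank - sparse
                y = y + mu * residual
                mu = rho * mu
                if np.linalg.norm(residual, 'fro') / norm_d < tol:
                    break
            return low_rank, sparse

        # Toy block: smooth background plus one bright point-like target.
        rng = np.random.default_rng(0)
        block = np.outer(np.linspace(1, 2, 64), np.linspace(1, 2, 64))
        block = block + 0.01 * rng.standard_normal((64, 64))
        block[32, 32] += 5.0
        L, S = inexact_alm_rpca(block)
        print("brightest pixel in sparse component:", np.unravel_index(np.abs(S).argmax(), S.shape))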
