Science.gov

Sample records for realistic probability estimates

  1. Realistic Probability Estimates For Destructive Overpressure Events In Heated Center Wing Tanks Of Commercial Jet Aircraft

    SciTech Connect

    Alvares, N; Lambert, H

    2007-02-07

    The Federal Aviation Administration (FAA) identified 17 accidents that may have resulted from fuel tank explosions on commercial aircraft from 1959 to 2001. Seven events involved JP-4 or JP-4/Jet A mixtures that are no longer used for commercial aircraft fuel. The remaining 10 events involved Jet A or Jet A-1 fuels that are in current use by the commercial aircraft industry. Four fuel tank explosions occurred in center wing tanks (CWTs) where on-board appliances can potentially transfer heat to the tank. These tanks are designated as "Heated Center Wing Tanks" (HCWTs). Since 1996, the FAA has significantly increased the rate at which it has mandated airworthiness directives (ADs) directed at eliminating ignition sources. This effort includes the adoption, in 2001, of Special Federal Aviation Regulation 88 of 14 CFR part 21 (SFAR 88, "Fuel Tank System Fault Tolerance Evaluation Requirements"). This paper addresses the effectiveness of SFAR 88 in reducing HCWT ignition source probability. Our statistical analysis, relating the occurrence of both on-ground and in-flight HCWT explosions to the cumulative flight hours of commercial passenger aircraft containing HCWTs, reveals that the best estimate of the HCWT explosion rate is 1 explosion in 1.4 × 10⁸ flight hours. Based on an analysis of SFAR 88 by Sandia National Laboratories and our independent analysis, SFAR 88 reduces the current risk of historical HCWT explosion by at least a factor of 10, thus meeting the FAA risk criterion of 1 accident in a billion flight hours. This paper also surveys and analyzes parameters for Jet A fuel ignition in HCWTs. Because of the paucity of in-flight HCWT explosions, we conclude that the intersection of the parameters necessary and sufficient to result in an HCWT explosion with sufficient overpressure to rupture the tank is extremely rare.
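
    A quick arithmetic check of the figures quoted above (the explosion rate, the factor-of-ten reduction, and the one-in-a-billion-flight-hours criterion), written as a short Python sketch:

```python
# Arithmetic check of the rates quoted in the abstract above.
historical_rate = 1.0 / 1.4e8          # explosions per flight hour
reduced_rate = historical_rate / 10.0  # after the SFAR 88 reduction factor
faa_criterion = 1.0 / 1.0e9            # one accident per billion flight hours

print(f"historical rate : {historical_rate:.2e} per flight hour")
print(f"reduced rate    : {reduced_rate:.2e} per flight hour")
print(f"meets criterion : {reduced_rate <= faa_criterion}")
```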

  2. Estimating tail probabilities

    SciTech Connect

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  3. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
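
    A minimal sketch of the "probability machine" idea described above, using a random forest to estimate conditional probabilities and a counterfactual-style risk difference; scikit-learn, the synthetic logistic data, and all settings are illustrative assumptions, not the authors' code:

```python
# Hedged sketch: conditional probability estimation and a counterfactual-style
# effect size with a random forest used as a "probability machine".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
x1 = rng.binomial(1, 0.5, n)          # binary exposure of interest
x2 = rng.normal(size=n)               # continuous covariate
logit = -0.5 + 1.0 * x1 + 0.8 * x2    # logistic data-generating model (illustrative)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([x1, x2])
rf = RandomForestClassifier(n_estimators=500, min_samples_leaf=20, random_state=0)
rf.fit(X, y)

# Conditional probabilities P(y = 1 | x)
p_hat = rf.predict_proba(X)[:, 1]

# Counterfactual effect size: average risk difference when x1 is set to 1 vs. 0
X1, X0 = X.copy(), X.copy()
X1[:, 0], X0[:, 0] = 1, 0
risk_diff = np.mean(rf.predict_proba(X1)[:, 1] - rf.predict_proba(X0)[:, 1])
print(f"estimated average risk difference for x1: {risk_diff:.3f}")
```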

  4. The quantitative estimation of IT-related risk probabilities.

    PubMed

    Herrmann, Andrea

    2013-08-01

    How well can people estimate IT-related risk? Although estimating risk is a fundamental activity in software management and risk is the basis for many decisions, little is known about how well IT-related risk can be estimated at all. We therefore executed a risk estimation experiment with 36 participants. They estimated the probabilities of IT-related risks, and we investigated the effect of the following factors on the quality of the risk estimation: the estimator's age, work experience in computing, (self-reported) safety awareness and previous experience with this risk, the absolute value of the risk's probability, and the effect of knowing the estimates of the other participants (cf. the Delphi method). Our main findings are as follows: risk probabilities are difficult to estimate. Younger and inexperienced estimators were not significantly worse than older and more experienced estimators, but the older and more experienced subjects made better use of the knowledge gained by knowing the other estimators' results. Persons with higher safety awareness tend to overestimate risk probabilities, but can better estimate ordinal ranks of risk probabilities. Prior personal experience with a risk leads to an overestimation of its probability (unlike in other fields such as medicine or disasters, where experience with a disease leads to more realistic probability estimates and lack of experience to an underestimation).

  5. Cold and hot cognition: quantum probability theory and realistic psychological modeling.

    PubMed

    Corr, Philip J

    2013-06-01

    Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma).

  6. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
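
    A minimal sketch of a symmetric two-point estimate in the spirit of the method described above, assuming X is characterized only by its mean and standard deviation; the example function Y = exp(X) is illustrative:

```python
# Hedged sketch of a two-point estimate: evaluate Y at x+ = mu + sigma and
# x- = mu - sigma and combine the two point estimates to approximate the
# mean and variance of Y (symmetric X assumed).
import math

def two_point_moments(y, mu, sigma):
    y_plus, y_minus = y(mu + sigma), y(mu - sigma)
    mean_y = 0.5 * (y_plus + y_minus)          # approximate E[Y]
    var_y = (0.5 * (y_plus - y_minus)) ** 2    # approximate Var[Y]
    return mean_y, var_y

# Example: Y = exp(X) with X having mean 1.0 and standard deviation 0.2
m, v = two_point_moments(math.exp, 1.0, 0.2)
print(f"E[Y] ~ {m:.4f}, Var[Y] ~ {v:.4f}")
```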

  7. Class probability estimation for medical studies.

    PubMed

    Simon, Richard

    2014-07-01

    I provide a commentary on two papers "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler. Those papers provide an up-to-date review of some popular machine learning methods for class probability estimation and compare those methods to logistic regression modeling in real and simulated datasets.

  8. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws, such as cracks and crack-like flaws, are the intended targets of detection for these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) while achieving an acceptable value for the probability of false calls (POF) and keeping the flaw sizes in the set as small as possible.
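
    A short sketch of the binomial reasoning behind a point-estimate POD demonstration, assuming the common 29-of-29 design mentioned above; the true-POD values tried below are illustrative:

```python
# Hedged sketch: probability of passing a 29/29 POD demonstration under a
# binomial model, as a function of the (unknown) true POD.
from math import comb

def prob_pass(true_pod: float, n_flaws: int = 29, min_hits: int = 29) -> float:
    """P(pass) = P(at least min_hits detections out of n_flaws)."""
    return sum(comb(n_flaws, k) * true_pod**k * (1 - true_pod)**(n_flaws - k)
               for k in range(min_hits, n_flaws + 1))

# A 29/29 demonstration passes with probability < 5% if the true POD is only
# 0.90, which is the usual "90/95" interpretation of this design.
print(f"PPD at true POD 0.90: {prob_pass(0.90):.3f}")
print(f"PPD at true POD 0.98: {prob_pass(0.98):.3f}")
```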

  9. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
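
    A hedged sketch of the general approach: fit an extreme-value-type density to an empirical sample of closest-approach distances and integrate below a collision radius. The synthetic Weibull sample stands in for the Monte Carlo orbital simulation, and all numbers are illustrative:

```python
# Hedged sketch: fit an analytical density to simulated minimum-distance data
# and evaluate the probability of falling below a collision radius.
from scipy.stats import weibull_min

# Stand-in for the empirical set of closest-approach distances (km)
min_distances_km = weibull_min.rvs(c=1.8, scale=40.0, size=5000, random_state=1)

# Fit an extreme-value-type (Weibull) density to the minima, location fixed at zero
c_hat, loc_hat, scale_hat = weibull_min.fit(min_distances_km, floc=0.0)

collision_radius_km = 0.05  # illustrative combined hard-body radius
p_collision = weibull_min.cdf(collision_radius_km, c_hat, loc_hat, scale_hat)
print(f"estimated P(miss distance < {collision_radius_km} km) = {p_collision:.2e}")
```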

  10. Realistic neurons can compute the operations needed by quantum probability theory and other vector symbolic architectures.

    PubMed

    Stewart, Terrence C; Eliasmith, Chris

    2013-06-01

    Quantum probability (QP) theory can be seen as a type of vector symbolic architecture (VSA): mental states are vectors storing structured information and manipulated using algebraic operations. Furthermore, the operations needed by QP match those in other VSAs. This allows existing biologically realistic neural models to be adapted to provide a mechanistic explanation of the cognitive phenomena described in the target article by Pothos & Busemeyer (P&B).

  11. Conflict Probability Estimation for Free Flight

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    1996-01-01

    The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
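
    A Monte Carlo illustration of the basic construction described above (combining the two error covariances into a single relative-position covariance); the covariances, predicted relative position, and separation standard are illustrative assumptions, and the paper itself derives an analytical solution:

```python
# Hedged Monte Carlo sketch: Gaussian relative-position error and the
# probability that the aircraft pair violates a separation radius.
import numpy as np

rng = np.random.default_rng(2)
cov_a = np.diag([1.0, 4.0])          # nmi^2, position error covariance of aircraft A
cov_b = np.diag([1.5, 3.0])          # nmi^2, aircraft B
cov_rel = cov_a + cov_b              # covariance of the relative position error
mean_rel = np.array([6.0, 1.0])      # predicted relative position at closest approach (nmi)

samples = rng.multivariate_normal(mean_rel, cov_rel, size=200_000)
separation_nmi = 5.0
conflict = np.linalg.norm(samples, axis=1) < separation_nmi
print(f"estimated conflict probability: {conflict.mean():.4f}")
```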

  12. Estimating flood exceedance probabilities in estuarine regions

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Leonard, Michael

    2016-04-01

    Flood events in estuarine regions can arise from the interaction of extreme rainfall and storm surge. Determining flood level exceedance probabilities in these regions is complicated by the dependence between these processes for extreme events. A comprehensive study of tide and rainfall gauges along the Australian coastline was conducted to determine the dependence of these extremes using a bivariate logistic threshold-excess model. The dependence strength is shown to vary as a function of distance over many hundreds of kilometres, indicating that the dependence arises due to synoptic-scale meteorological forcings. It is also shown to vary as a function of storm burst duration and of the time lag between the extreme rainfall and the storm surge event. The dependence estimates are then used with a bivariate design variable method to determine flood risk in estuarine regions for a number of case studies. Aspects of the method demonstrated in the case studies include the resolution and range of the hydraulic response table, fitting of probability distributions, computational efficiency, uncertainty, potential variation in marginal distributions due to climate change, and application to two-dimensional output from hydraulic models. Case studies are located on the Swan River (Western Australia), and the Nambucca River and Hawkesbury-Nepean River (New South Wales).

  13. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    SciTech Connect

    Acquaviva, Viviana; Raichoor, Anand

    2015-05-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  14. A comparison of tail probability estimators for flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Moon, Young-Il; Lall, Upmanu; Bosworth, Ken

    1993-11-01

    Selected techniques for estimating exceedance frequencies of annual maximum flood events at a gaged site are compared in this paper. Four tail probability estimators proposed by Hill (PT1), Hosking and Wallis (PT2) and by Breiman and Stone (ET and QT), and a variable kernel distribution function estimator (VK-C-AC) were compared for three situations — Gaussian data, skewed data (three-parameter gamma) and Gaussian mixture data. The performance of these estimators was compared with method of moment estimates of tail probabilities, using the Gaussian, Pearson Type III, and extreme value distributions. Since the results of the tail probability estimators (PT1, PT2, ET, QT) varied according to the situation, it is not easy to say which tail probability estimator is the best. However, the performance of the variable kernel estimator was relatively consistent across the estimation situations considered in terms of bias and r.m.s.e.

  15. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.

  16. A new method for estimating extreme rainfall probabilities

    SciTech Connect

    Harper, G.A.; O'Hara, T.F.; Morris, D.I.

    1994-02-01

    As part of an EPRI-funded research program, the Yankee Atomic Electric Company developed a new method for estimating probabilities of extreme rainfall. It can be used, along with other techniques, to improve the estimation of probable maximum precipitation values for specific basins or regions.

  17. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  18. Bayes estimate of the probability of exceedance of annual floods

    NASA Astrophysics Data System (ADS)

    Lye, L. M.

    1990-03-01

    In this paper Lindley's Bayesian approximation procedure is used to obtain the Bayes estimate of the probability of exceedance of a flood discharge. The Bayes estimate of the probability of exceedance has been shown by S.K. Sinha to be equivalent to the estimate obtained from the predictive, or Bayesian, distribution of a future flood discharge. The evaluation of complex ratios of multiple integrals common in a Bayesian analysis is not necessary using Lindley's procedure. The Bayes estimates are compared to those obtained by the method of maximum likelihood and the method of moments. The results show that Bayes estimates of the probability of exceedance are larger, as expected, but have smaller posterior standard deviations.

  19. Estimating the exceedance probability of extreme rainfalls up to the probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Nathan, Rory; Jordan, Phillip; Scorah, Matthew; Lang, Simon; Kuczera, George; Schaefer, Melvin; Weinmann, Erwin

    2016-12-01

    If risk-based criteria are used in the design of high hazard structures (such as dam spillways and nuclear power stations), then it is necessary to estimate the annual exceedance probability (AEP) of extreme rainfalls up to and including the Probable Maximum Precipitation (PMP). This paper describes the development and application of two largely independent methods to estimate the frequencies of such extreme rainfalls. One method is based on stochastic storm transposition (SST), which combines the "arrival" and "transposition" probabilities of an extreme storm using the total probability theorem. The second method, based on "stochastic storm regression" (SSR), combines frequency curves of point rainfalls with regression estimates of local and transposed areal rainfalls; rainfall maxima are generated by stochastically sampling the independent variates, where the required exceedance probabilities are obtained using the total probability theorem. The methods are applied to two large catchments (with areas of 3550 km² and 15,280 km²) located in inland southern Australia. Both methods were found to provide similar estimates of the frequency of extreme areal rainfalls for the two study catchments. The best estimates of the AEP of the PMP for the smaller and larger of the catchments were found to be 10⁻⁷ and 10⁻⁶, respectively, but the uncertainty of these estimates spans one to two orders of magnitude. Additionally, the SST method was applied to a range of locations within a meteorologically homogeneous region to investigate the nature of the relationship between the AEP of PMP and catchment area.

  20. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
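
    A minimal sketch of the normalization step described above, with simple first-order uncertainty propagation; the input values are illustrative and the actual PABS procedures may differ in detail:

```python
# Hedged sketch: normalize relative emission probabilities to an absolute
# scale and propagate their uncertainties through the normalization.
import numpy as np

rel = np.array([100.0, 55.0, 12.0])      # relative emission probabilities (illustrative)
rel_unc = np.array([3.0, 2.0, 0.8])      # their uncertainties

total = rel.sum()
absolute = rel / total                    # normalized (absolute) probabilities

# First-order propagation for p_i = r_i / sum(r); each r_i also appears
# in the denominator, hence the Jacobian below.
jac = (np.eye(len(rel)) * total - rel[:, None]) / total**2   # d p_i / d r_j
absolute_unc = np.sqrt((jac**2 * rel_unc**2).sum(axis=1))

for p, u in zip(absolute, absolute_unc):
    print(f"{p:.4f} +/- {u:.4f}")
```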

  21. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.

  22. Estimating the probability of failure when testing reveals no failures

    NASA Technical Reports Server (NTRS)

    Miller, Keith W.; Morell, Larry J.; Noonan, Robert E.; Park, Stephen K.; Nicol, David M.; Murrill, Branson W.; Voas, Jeffrey M.

    1992-01-01

    Formulas for estimating the probability of failure when testing reveals no errors are introduced. These formulas incorporate random testing results, information about the input distribution, and prior assumptions about the probability of failure of the software. The formulas are not restricted to equally likely input distributions, and the probability of failure estimate can be adjusted when assumptions about the input distribution change. The formulas are based on a discrete sample space statistical model of software and include Bayesian prior assumptions. Reusable software and software in life-critical applications are particularly appropriate candidates for this type of analysis.
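
    A hedged sketch of one standard Bayesian treatment consistent with the idea above, assuming a Beta prior on the per-input failure probability; the paper's formulas are more general and incorporate input-distribution information:

```python
# Hedged sketch: Beta-prior estimate of failure probability after n
# failure-free random tests (not the paper's exact formulas).
def posterior_failure_probability(n_tests: int, a: float = 1.0, b: float = 1.0) -> float:
    """Posterior mean of the failure probability after n_tests failure-free tests."""
    # Beta(a, b) prior updated with 0 failures and n_tests successes -> Beta(a, b + n_tests)
    return a / (a + b + n_tests)

for n in (10, 100, 1000):
    print(f"{n:5d} failure-free tests -> posterior mean failure probability "
          f"{posterior_failure_probability(n):.4f}")
```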

  23. 27% Probable: Estimating Whether or Not Large Numbers Are Prime.

    ERIC Educational Resources Information Center

    Bosse, Michael J.

    2001-01-01

    This brief investigation exemplifies such considerations by relating concepts from number theory, set theory, probability, logic, and calculus. Satisfying the call for students to acquire skills in estimation, the following technique allows one to "immediately estimate" whether or not a number is prime. (MM)

  24. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
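
    A small sketch of automatic selection of the kernel scaling factor from the sample alone, here by leave-one-out log-likelihood over a grid rather than the interactive scheme of the paper; the data and grid are illustrative:

```python
# Hedged sketch: choose a Gaussian kernel bandwidth by leave-one-out
# log-likelihood (a stand-in for the paper's automatic selection scheme).
import numpy as np

rng = np.random.default_rng(6)
sample = rng.normal(size=300)

def loo_log_likelihood(data, h):
    n = data.size
    diffs = (data[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
    np.fill_diagonal(k, 0.0)                     # leave each point out of its own estimate
    dens = k.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(dens))

bandwidths = np.linspace(0.05, 1.0, 40)
scores = [loo_log_likelihood(sample, h) for h in bandwidths]
best_h = bandwidths[int(np.argmax(scores))]
print(f"selected kernel scaling factor: {best_h:.3f}")
```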

  25. Local estimation of posterior class probabilities to minimize classification errors.

    PubMed

    Guerrero-Curieses, Alicia; Cid-Sueiro, Jesús; Alaiz-Rodríguez, Rocío; Figueiras-Vidal, Aníbal R

    2004-03-01

    Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification, the optimal decision is based on the comparison of the posterior probabilities with some threshold. Therefore, the most accurate estimates of the posterior probabilities are required near these decision thresholds. This paper discusses the design of objective functions that provide more accurate estimates of the probability values, taking into account the characteristics of each decision problem. We propose learning algorithms based on the stochastic gradient minimization of these loss functions. We show that the performance of the classifier is improved when these algorithms behave like sample selectors: samples near the decision boundary are the most relevant during learning.

  26. Estimation of transition probabilities of credit ratings for several companies

    NASA Astrophysics Data System (ADS)

    Peng, Gan Chew; Hin, Pooi Ah

    2016-10-01

    This paper attempts to estimate the transition probabilities of credit ratings for a number of companies whose ratings have a dependence structure. Binary codes are used to represent the index of a company together with its ratings in the present and next quarters. We initially fit the data on the vector of binary codes with a multivariate power-normal distribution. We next compute the multivariate conditional distribution for the binary codes of rating in the next quarter when the index of the company and binary codes of the company in the present quarter are given. From the conditional distribution, we compute the transition probabilities of the company's credit ratings in two consecutive quarters. The resulting transition probabilities tally fairly well with the maximum likelihood estimates for the time-independent transition probabilities.

  27. Estimating the probability of rare events: addressing zero failure data.

    PubMed

    Quigley, John; Revie, Matthew

    2011-07-01

    Traditional statistical procedures for estimating the probability of an event result in an estimate of zero when no events are realized. Alternative inferential procedures have been proposed for the situation where zero events have been realized, but often these are ad hoc, relying on selecting methods dependent on the data that have been realized. Such data-dependent inference decisions violate fundamental statistical principles, resulting in estimation procedures whose benefits are difficult to assess. In this article, we propose estimating the probability of an event occurring through minimax inference on the probability that future samples of equal size realize no more events than that in the data on which the inference is based. Although motivated by inference on rare events, the method is not restricted to zero event data and closely approximates the maximum likelihood estimate (MLE) for nonzero data. The use of the minimax procedure provides a risk averse inferential procedure where there are no events realized. A comparison is made with the MLE and regions of the underlying probability are identified where this approach is superior. Moreover, a comparison is made with three standard approaches to supporting inference where no event data are realized, which we argue are unduly pessimistic. We show that for situations of zero events the estimator can be simply approximated by 1/(2.5n), where n is the number of trials.
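
    A numerical illustration of the closing remark above, comparing the 1/(2.5n) approximation with the MLE (zero) and, for reference, the common "rule of three" 95% upper bound; the sample sizes are illustrative:

```python
# Hedged comparison for zero observed events in n trials.
for n in (10, 50, 200, 1000):
    approx = 1.0 / (2.5 * n)   # approximation to the minimax-style estimator
    rule_of_three = 3.0 / n    # common 95% upper confidence bound, for reference
    print(f"n={n:5d}  MLE=0  approx={approx:.4g}  rule-of-three bound={rule_of_three:.4g}")
```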

  28. Estimating the empirical probability of submarine landslide occurrence

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test of age dates representing the main failure episode of the Holocene Storegga landslide complex.
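
    A minimal sketch of a Poisson-Gamma update of the kind described above; the prior hyperparameters and the event counts are illustrative, not values from the paper:

```python
# Hedged sketch: conjugate Poisson-Gamma estimate of a landslide rate lambda
# with a credible interval expressing the uncertainty in that estimate.
from scipy.stats import gamma

alpha_prior, beta_prior = 1.0, 5000.0     # prior: mean rate of 1/5000 per year (illustrative)
n_events, record_years = 3, 12000.0       # observed landslides and record length (illustrative)

alpha_post = alpha_prior + n_events
beta_post = beta_prior + record_years

rate_mean = alpha_post / beta_post
rate_lo, rate_hi = gamma.ppf([0.025, 0.975], alpha_post, scale=1.0 / beta_post)
print(f"posterior mean rate: {rate_mean:.2e} per yr "
      f"(95% interval {rate_lo:.2e} to {rate_hi:.2e})")
```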

  29. Estimating Prior Model Probabilities Using an Entropy Principle

    NASA Astrophysics Data System (ADS)

    Ye, M.; Meyer, P. D.; Neuman, S. P.; Pohlmann, K.

    2004-12-01

    Considering conceptual model uncertainty is an important process in environmental uncertainty/risk analyses. Bayesian Model Averaging (BMA) (Hoeting et al., 1999) and its maximum likelihood version, MLBMA (Neuman, 2003), jointly assess the predictive uncertainty of competing alternative models to avoid bias and underestimation of uncertainty caused by relying on a single model. These methods provide the posterior distribution (or, equivalently, leading moments) of quantities of interest for decision-making. One important step of these methods is to specify prior probabilities of alternative models for the calculation of posterior model probabilities. This problem, however, has not been satisfactorily resolved, and equally likely prior model probabilities are usually accepted as a neutral choice. Ye et al. (2004) have shown that whereas using equally likely prior model probabilities has led to acceptable geostatistical estimates of log air permeability data from fractured unsaturated tuff at the Apache Leap Research Site (ALRS) in Arizona, identifying more accurate prior probabilities can improve these estimates. In this paper we present a new methodology to evaluate prior model probabilities by maximizing Shannon's entropy with restrictions postulated a priori based on model plausibility relationships. It yields optimum prior model probabilities conditional on the prior information used to postulate the restrictions. The restrictions and corresponding prior probabilities can be modified as more information becomes available. The proposed method is relatively easy to use in practice as it is generally less difficult for experts to postulate relationships between models than to specify numerical prior model probability values. Log score, mean square prediction error (MSPE) and mean absolute predictive error (MAPE) criteria consistently show that applying our new method to the ALRS data reduces geostatistical estimation errors provided relationships between models are
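
    A hedged sketch of the entropy idea described above: maximize Shannon entropy over prior model probabilities subject to inequality constraints expressing plausibility relationships. The three-model setup and the constraints are illustrative assumptions:

```python
# Hedged sketch: maximum-entropy prior model probabilities under
# expert-postulated plausibility constraints.
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))          # minimizing this maximizes Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # probabilities sum to one
    {"type": "ineq", "fun": lambda p: p[0] - p[1]},        # model 1 at least as plausible as model 2
    {"type": "ineq", "fun": lambda p: p[1] - 2.0 * p[2]},  # model 2 at least twice as plausible as model 3
]
result = minimize(neg_entropy, x0=np.full(3, 1.0 / 3.0),
                  bounds=[(0.0, 1.0)] * 3, constraints=constraints)
print("maximum-entropy prior model probabilities:", np.round(result.x, 3))
```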

  30. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
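
    A minimal sketch of a prediction-error learning rule of the general kind the abstract describes; the true transition matrix and learning rate are illustrative, and this is not the authors' specific algorithm:

```python
# Hedged sketch: learn a state transition matrix by nudging the estimate
# toward a one-hot vector for the state that actually occurred.
import numpy as np

rng = np.random.default_rng(3)
true_P = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.6, 0.3],
                   [0.3, 0.3, 0.4]])
n_states = true_P.shape[0]
P_hat = np.full((n_states, n_states), 1.0 / n_states)   # start from uniform estimates
eta = 0.01                                              # learning rate

state = 0
for _ in range(50_000):
    next_state = rng.choice(n_states, p=true_P[state])
    target = np.zeros(n_states)
    target[next_state] = 1.0
    P_hat[state] += eta * (target - P_hat[state])        # state-prediction-error update
    state = next_state

print(np.round(P_hat, 2))   # approaches true_P; rows stay normalized
```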

  31. An application of recurrent nets to phone probability estimation.

    PubMed

    Robinson, A J

    1994-01-01

    This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed; a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation.

  32. Expected probability weighted moment estimator for censored flood data

    NASA Astrophysics Data System (ADS)

    Jeon, Jong-June; Kim, Young-Oh; Kim, Yongdai

    2011-08-01

    Two well-known methods for estimating statistical distributions in hydrology are the method of moments (MOM) and the method of probability weighted moments (PWM). This paper is concerned with the case where a part of the sample is censored. One situation where this might occur is when systematic data (e.g. from gauges) are combined with historical data, since the latter are often only reported if they exceed a high threshold. For this problem, three previously derived estimators are the "B17B" estimator, which is a direct modification of MOM to allow for partial censoring; the "partial PWM estimator", which similarly modifies PWM; and the "expected moments algorithm" estimator, which improves on B17B by replacing a sample adjustment of the censored-data moments with a population adjustment. The present paper proposes a similar modification to the PWM estimator, resulting in the "expected probability weighted moments (EPWM)" estimator. Simulation comparisons of these four estimators and also the maximum likelihood estimator show that the EPWM method is at least competitive with the other four and in many cases the best of the five estimators.

  33. Improving estimates of tree mortality probability using potential growth rate

    USGS Publications Warehouse

    Das, Adrian J.; Stephenson, Nathan L.

    2015-01-01

    Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.

  34. Using Correlation to Compute Better Probability Estimates in Plan Graphs

    NASA Technical Reports Server (NTRS)

    Bryce, Daniel; Smith, David E.

    2006-01-01

    Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates in probability when the plans for those preconditions interfere with each other. It can also lead to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions and actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation = 0) two propositions or actions are completely mutex. With correlation = 1, two propositions or actions are independent, and with correlation > 1, two propositions or actions are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.

  35. Recursive estimation of prior probabilities using the mixture approach

    NASA Technical Reports Server (NTRS)

    Kazakos, D.

    1974-01-01

    The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, it is possible for the recursive algorithm to achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
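
    A hedged sketch of a recursive update for mixture proportions of the general form discussed above; the two-component Gaussian mixture and the decreasing-gain step size are illustrative choices, not the specific algorithm of the paper:

```python
# Hedged sketch: recursively update mixture proportions q_k toward the
# posterior responsibility of each known component for each new observation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
true_q = np.array([0.3, 0.7])
components = [norm(loc=-2.0, scale=1.0), norm(loc=2.0, scale=1.0)]   # known densities f_k

# Simulate N statistically independent observations from the mixture
labels = rng.choice(2, size=20_000, p=true_q)
x = np.where(labels == 0,
             rng.normal(-2.0, 1.0, labels.size),
             rng.normal(2.0, 1.0, labels.size))

f_all = np.column_stack([c.pdf(x) for c in components])   # f_k(x_n) for every n and k
q_hat = np.array([0.5, 0.5])
for n, f in enumerate(f_all, start=1):
    resp = q_hat * f / np.dot(q_hat, f)    # posterior responsibility of each component
    q_hat = q_hat + (resp - q_hat) / n     # decreasing-gain recursive update

print("estimated mixture proportions:", np.round(q_hat, 3))
```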

  36. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.

  37. Estimating transition probabilities in unmarked populations - entropy revisited

    USGS Publications Warehouse

    Cooch, E.G.; Link, W.A.

    1999-01-01

    The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
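
    A small sketch of the least-squares idea mentioned above: fit a row-stochastic transition matrix to aggregate state counts at successive occasions. The two-state, three-occasion data are illustrative:

```python
# Hedged sketch: least-squares estimate of a 2x2 transition matrix from
# aggregate counts of individuals in each state at successive occasions.
import numpy as np
from scipy.optimize import minimize

counts = np.array([[120.0, 80.0],    # occasion 1: state A, state B
                   [105.0, 95.0],    # occasion 2
                   [ 98.0, 102.0]])  # occasion 3

def unpack(theta):
    # Two free parameters: P[0, 0] and P[1, 0]; rows sum to one.
    return np.array([[theta[0], 1.0 - theta[0]],
                     [theta[1], 1.0 - theta[1]]])

def sse(theta):
    P = unpack(theta)
    err = 0.0
    for t in range(counts.shape[0] - 1):
        predicted = counts[t] @ P            # expected composition at occasion t+1
        err += np.sum((counts[t + 1] - predicted) ** 2)
    return err

fit = minimize(sse, x0=[0.5, 0.5], bounds=[(0.0, 1.0)] * 2)
print(np.round(unpack(fit.x), 3))
```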

  38. Collective animal behavior from Bayesian estimation and probability matching.

    PubMed

    Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G

    2011-11-01

    Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on obtaining first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters, one that each animal assigns to the other animals and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows a better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.

  39. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.

  40. Estimating probable flaw distributions in PWR steam generator tubes

    SciTech Connect

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  41. Conditional Probability Density Functions Arising in Bearing Estimation

    DTIC Science & Technology

    1994-05-01

    Report on conditional probability density functions arising in bearing-angle estimation; results obtained using the calculated density functions are compared with a better known performance measure, the Cramer-Rao bound. Subject terms: probability density function, bearing angle estimation.

  42. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
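
    A minimal sketch of the CFAR thresholding idea: estimate the background distribution of matched-filter outputs and set the decision threshold r_0 at the quantile giving a desired P_FA. The synthetic Gaussian background is an illustrative stand-in for training pixels from a hyperspectral cube:

```python
# Hedged sketch: empirical-quantile CFAR threshold for a desired false alarm rate.
import numpy as np

rng = np.random.default_rng(5)
background_scores = rng.standard_normal(100_000)     # stand-in for filter outputs on plume-free pixels

desired_pfa = 1e-3
threshold = np.quantile(background_scores, 1.0 - desired_pfa)   # decision threshold r_0

new_scores = rng.standard_normal(100_000)            # fresh background data for a sanity check
empirical_pfa = np.mean(new_scores > threshold)
print(f"threshold r_0 = {threshold:.3f}, empirical P_FA = {empirical_pfa:.4f}")
```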

  43. Probability model for estimating colorectal polyp progression rates.

    PubMed

    Gopalappa, Chaitra; Aydogan-Cremaschi, Selen; Das, Tapas K; Orcun, Seza

    2011-03-01

    According to the American Cancer Society, colorectal cancer (CRC) is the third most common cause of cancer-related deaths in the United States. Experts estimate that about 85% of CRCs begin as precancerous polyps, early detection and treatment of which can significantly reduce the risk of CRC. Hence, it is imperative to develop population-wide intervention strategies for early detection of polyps. Development of such strategies requires precise values of population-specific rates of polyp incidence and of its progression to the cancerous stage. There has been a considerable amount of research in recent years on developing screening-based CRC intervention strategies. However, these are not supported by population-specific mathematical estimates of progression rates. This paper addresses this need by developing a probability model that estimates polyp progression rates considering race and family history of CRC; note that it is ethically infeasible to obtain polyp progression rates through clinical trials. We use the estimated rates to simulate the progression of polyps in the population of the State of Indiana, and also the population of a clinical trial conducted in the State of Minnesota, which was obtained from the literature. The results from the simulations are used to validate the probability model.

  44. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A.

  45. Estimation of probability densities using scale-free field theories.

    PubMed

    Kinney, Justin B

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  46. Cost functions to estimate a posteriori probabilities in multiclass problems.

    PubMed

    Cid-Sueiro, J; Arribas, J I; Urbán-Muñoz, S; Figueiras-Vidal, A R

    1999-01-01

    The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed in this paper. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions: those that satisfy two generally desirable properties, symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy, are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.

  47. Estimation of probability densities using scale-free field theories

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2014-07-01

    The question of how best to estimate a continuous probability density from finite data is an intriguing open problem at the interface of statistics and physics. Previous work has argued that this problem can be addressed in a natural way using methods from statistical field theory. Here I describe results that allow this field-theoretic approach to be rapidly and deterministically computed in low dimensions, making it practical for use in day-to-day data analysis. Importantly, this approach does not impose a privileged length scale for smoothness of the inferred probability density, but rather learns a natural length scale from the data due to the tradeoff between goodness of fit and an Occam factor. Open source software implementing this method in one and two dimensions is provided.

  8. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.

    PubMed

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2016-03-01

    Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework volumetric, artificial, and natural image lossless coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes or multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.

  9. Estimating transition probabilities among everglades wetland communities using multistate models

    USGS Publications Warehouse

    Hotaling, A.S.; Martin, J.; Kitchens, W.M.

    2009-01-01

    In this study we were able to provide the first estimates of transition probabilities of wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and to identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.

  10. Geometrical order-of-magnitude estimates for spatial curvature in realistic models of the Universe

    NASA Astrophysics Data System (ADS)

    Buchert, Thomas; Ellis, George F. R.; van Elst, Henk

    2009-09-01

    The thoughts expressed in this article are based on remarks made by Jürgen Ehlers at the Albert-Einstein-Institut, Golm, Germany in July 2007. The main objective of this article is to demonstrate, in terms of plausible order-of-magnitude estimates for geometrical scalars, the relevance of spatial curvature in realistic models of the Universe that describe the dynamics of structure formation since the epoch of matter-radiation decoupling. We introduce these estimates with a commentary on the use of a quasi-Newtonian metric form in this context.

  11. A software for the estimation of binding parameters of biochemical equilibria based on statistical probability model.

    PubMed

    Fisicaro, E; Braibanti, A; Sambasiva Rao, R; Compari, C; Ghiozzi, A; Nageswara Rao, G

    1998-04-01

    An algorithm is proposed for the estimation of binding parameters for the interaction of biologically important macromolecules with smaller ones from electrometric titration data. The mathematical model is based on the representation of equilibria in terms of probability concepts of statistical molecular thermodynamics. The refinement of equilibrium concentrations of the components and estimation of binding parameters (log site constant and cooperativity factor) is performed using singular value decomposition, a chemometric technique which overcomes the general obstacles due to near singularity. The present software is validated with a number of biochemical systems of varying number of sites and cooperativity factors. The effect of random errors of realistic magnitude in experimental data is studied using the simulated primary data for some typical systems. The safe area within which approximate binding parameters ensure convergence has been reported for the non-self starting optimization algorithms.

  12. Estimation of the orientation of short lines with a realistic population of cortical neurons

    NASA Astrophysics Data System (ADS)

    Shokhirev, Kirill Nikolai

    The inhomogeneous distribution of the receptive fields of cortical neurons influences the cortical representation of the orientation of short lines seen in visual images. A model of the response of populations of neurons in the human primary visual cortex is constructed by combining realistic response properties of individual neurons and cortical maps of orientation and location preferences. The encoding error characterizes the difference between a visual stimulus and its cortical representation, and is calculated using Fisher information, as the square root of the variance of a statistically efficient estimator. The error of encoding orientation varies with the location and orientation of the short line stimulus as modulated by the underlying orientation preference map. The average encoding error depends weakly on the structure of the orientation preference map and is smaller than the human error of estimating orientation. From this comparison I conclude that the actual mechanism of orientation perception does not make efficient use of all the information available in the neuronal responses and that the decoding of visual information from neuronal responses limits psychophysical performance. Two forms of a population vector (PV) estimator were used to test if a simple estimation mechanism can account for the human accuracy of estimation of the orientation. The canonical PV estimator is similar to the models proposed previously for estimation of movement direction in the motor cortex. The "normalized" PV estimator has coefficients scaled by the local density of neurons in the space of preferred parameters. The average variance of either estimator does not increase appreciably when a realistic distribution is used instead of a random distribution of preferred orientations and remains significantly below the psychophysical threshold. However, the bias of the canonical PV estimator increases by approximately a factor of 15 compared with the random distribution. The

  13. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment developed for the Electric Power Research Institute by North American Weather Consultants in 1993 in order to harmonize historic storms, for which only 12-hour dew point data were available, with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  14. Automated estimation of rare event probabilities in biochemical systems

    PubMed Central

    Daigle, Bernie J.; Roh, Min K.; Gillespie, Dan T.; Petzold, Linda R.

    2011-01-01

    In biochemical systems, the occurrence of a rare event can be accompanied by catastrophic consequences. Precise characterization of these events using Monte Carlo simulation methods is often intractable, as the number of realizations needed to witness even a single rare event can be very large. The weighted stochastic simulation algorithm (wSSA) [J. Chem. Phys. 129, 165101 (2008)] and its subsequent extension [J. Chem. Phys. 130, 174103 (2009)] alleviate this difficulty with importance sampling, which effectively biases the system toward the desired rare event. However, extensive computation coupled with substantial insight into a given system is required, as there is currently no automatic approach for choosing wSSA parameters. We present a novel modification of the wSSA—the doubly weighted SSA (dwSSA)—that makes possible a fully automated parameter selection method. Our approach uses the information-theoretic concept of cross entropy to identify parameter values yielding minimum variance rare event probability estimates. We apply the method to four examples: a pure birth process, a birth-death process, an enzymatic futile cycle, and a yeast polarization model. Our results demonstrate that the proposed method (1) enables probability estimation for a class of rare events that cannot be interrogated with the wSSA, and (2) for all examples tested, reduces the number of runs needed to achieve comparable accuracy by multiple orders of magnitude. For a particular rare event in the yeast polarization model, our method transforms a projected simulation time of 600 years to three hours. Furthermore, by incorporating information-theoretic principles, our approach provides a framework for the development of more sophisticated influencing schemes that should further improve estimation accuracy. PMID:21280690

  15. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This

  16. Online Reinforcement Learning Using a Probability Density Estimation.

    PubMed

    Agostini, Alejandro; Celaya, Enric

    2017-01-01

    Function approximation in online, incremental, reinforcement learning needs to deal with two fundamental problems: biased sampling and nonstationarity. In this kind of task, biased sampling occurs because samples are obtained from specific trajectories dictated by the dynamics of the environment and are usually concentrated in particular convergence regions, which in the long term tend to dominate the approximation in the less sampled regions. The nonstationarity comes from the recursive nature of the estimations typical of temporal difference methods. This nonstationarity has a local profile, varying not only along the learning process but also along different regions of the state space. We propose to deal with these problems using an estimation of the probability density of samples represented with a gaussian mixture model. To deal with the nonstationarity problem, we use the common approach of introducing a forgetting factor in the updating formula. However, instead of using the same forgetting factor for the whole domain, we make it dependent on the local density of samples, which we use to estimate the nonstationarity of the function at any given input point. To address the biased sampling problem, the forgetting factor applied to each mixture component is modulated according to the new information provided in the updating, rather than forgetting depending only on time, thus avoiding undesired distortions of the approximation in less sampled regions.

  17. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest, because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those on weekend days.

  18. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
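
    A generic sketch of the comparison described here (not the NESSUS implementation) is easy to set up with scipy's quasi-Monte Carlo module: for the same number of samples, Latin hypercube sampling typically gives a less variable estimate of a response density parameter such as the mean than plain Monte Carlo.

        import numpy as np
        from scipy.stats import norm, qmc

        # Toy nonlinear response of two standard normal inputs; a stand-in for an FEA response.
        def response(x1, x2):
            return x1 ** 2 + 3.0 * np.sin(x2)

        def estimate_mean(scheme, n, rng):
            if scheme == "mc":
                u = rng.random((n, 2))                            # plain Monte Carlo
            else:
                u = qmc.LatinHypercube(d=2, seed=rng).random(n)   # Latin hypercube sampling
            x = norm.ppf(u)                                       # map uniforms to normal inputs
            return response(x[:, 0], x[:, 1]).mean()

        rng = np.random.default_rng(1)
        reps, n = 200, 100
        mc = [estimate_mean("mc", n, rng) for _ in range(reps)]
        lhs = [estimate_mean("lhs", n, rng) for _ in range(reps)]
        print("spread of mean estimate, MC :", np.std(mc))
        print("spread of mean estimate, LHS:", np.std(lhs))       # typically smaller for the same n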

  19. The estimation of probable maximum precipitation: the case of Catalonia.

    PubMed

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km(2) spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
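
    The enveloping frequency factor method follows the general statistical (Hershfield-type) form PMP = mean + k_m * standard deviation of the annual maximum daily series; a minimal sketch is given below, in which both the station record and the frequency factor k_m are illustrative placeholders rather than the envelope curve derived for Catalonia.

        import numpy as np

        # Hypothetical annual maximum daily rainfall series for one station (mm).
        annual_max = np.array([62.0, 81.0, 55.0, 104.0, 73.0, 96.0, 68.0, 110.0,
                               59.0, 88.0, 77.0, 125.0, 66.0, 91.0, 83.0])

        mean = annual_max.mean()
        std = annual_max.std(ddof=1)

        # Enveloping frequency factor: in practice read from an envelope curve built
        # from many stations in the region; the value below is purely illustrative.
        k_m = 15.0

        pmp_1day = mean + k_m * std
        print(f"mean = {mean:.1f} mm, std = {std:.1f} mm, 1-day PMP estimate = {pmp_1day:.0f} mm")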

  20. Estimating the probability for major gene Alzheimer disease

    SciTech Connect

    Farrer, L.A. (Boston Univ. School of Public Health, Boston, MA); Cupples, L.A.

    1994-02-01

    Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age at onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions of the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.

  1. Estimating multidimensional probability fields using the Field Estimator for Arbitrary Spaces (FiEstAS) with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Ascasibar, Yago

    2010-08-01

    The Field Estimator for Arbitrary Spaces (FiEstAS) computes the continuous probability density field underlying a given discrete data sample in multiple, non-commensurate dimensions. The algorithm works by constructing a metric-independent tessellation of the data space based on a recursive binary splitting. Individual, data-driven bandwidths are assigned to each point, scaled so that a constant “mass”M is enclosed. Kernel density estimation may then be performed for different kernel shapes, and a combination of balloon and sample point estimators is proposed as a compromise between resolution and variance. A bias correction is evaluated for the particular (yet common) case where the density is computed exactly at the locations of the data points rather than at an uncorrelated set of locations. By default, the algorithm combines a top-hat kernel with M=2.0 with the balloon estimator and applies the corresponding bias correction. These settings are shown to yield reasonable results for a simple test case, a two-dimensional ring, that illustrates the performance for oblique distributions, as well as for a six-dimensional Hernquist sphere, a fairly realistic model of the dynamical structure of stellar bulges in galaxies and dark matter haloes in cosmological N-body simulations. Results for different parameter settings are discussed in order to provide a guideline to select an optimal configuration in other cases. Source code is available upon request.

  2. Estimation of capture probabilities using generalized estimating equations and mixed effects approaches

    PubMed Central

    Akanda, Md Abdus Salam; Alpizar-Jara, Russell

    2014-01-01

    Modeling individual heterogeneity in capture probabilities has been one of the most challenging tasks in capture–recapture studies. Heterogeneity in capture probabilities can be modeled as a function of individual covariates, but the correlation structure among capture occasions should be taken into account. Proposed generalized estimating equation (GEE) and generalized linear mixed modeling (GLMM) approaches can be used to estimate capture probabilities and population size for capture–recapture closed population models. An example is used for an illustrative application and for comparison with currently used methodology. A simulation study is also conducted to show the performance of the estimation procedures. Our simulation results show that the proposed quasi-likelihood-based GEE approach provides a lower SE than partial-likelihood-based generalized linear model (GLM) or GLMM approaches for estimating population size in a closed capture–recapture experiment. Estimator performance is good if a large proportion of individuals are captured. For cases where only a small proportion of individuals are captured, the estimates become unstable, but the GEE approach outperforms the other methods. PMID:24772290

  3. Structural health monitoring and probability of detection estimation

    NASA Astrophysics Data System (ADS)

    Forsyth, David S.

    2016-02-01

    Structural health monitoring (SHM) methods are often based on nondestructive testing (NDT) sensors and are often proposed as replacements for NDT to lower cost and/or improve reliability. In order to take advantage of SHM for life cycle management, it is necessary to determine the Probability of Detection (POD) of the SHM system just as for traditional NDT to ensure that the required level of safety is maintained. Many different possibilities exist for SHM systems, but one of the attractive features of SHM versus NDT is the ability to take measurements very simply after the SHM system is installed. Using a simple statistical model of POD, some authors have proposed that very high rates of SHM system data sampling can result in high effective POD even in situations where an individual test has low POD. In this paper, we discuss the theoretical basis for determining the effect of repeated inspections, and examine data from SHM experiments against this framework to show how the effective POD from multiple tests can be estimated.
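
    The simple statistical model referred to here treats repeated SHM measurements as independent detection opportunities, so n tests with single-test POD p yield an effective POD of 1 - (1 - p)^n; the paper's contribution is to test experimental data against this assumption. A short sketch of the independence model:

        # Effective probability of detection (POD) after n nominally independent
        # measurements, each with single-test POD p. Real SHM measurements are often
        # correlated, which is exactly what the experimental data are checked against,
        # so this independence model should be read as an optimistic bound.

        def effective_pod(p: float, n: int) -> float:
            return 1.0 - (1.0 - p) ** n

        for p in (0.30, 0.50, 0.90):
            print(p, [round(effective_pod(p, n), 3) for n in (1, 5, 10, 20)])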

  4. Semi-supervised dimensionality reduction using estimated class membership probabilities

    NASA Astrophysics Data System (ADS)

    Li, Wei; Ruan, Qiuqi; Wan, Jun

    2012-10-01

    In solving pattern-recognition tasks with partially labeled training data, the semi-supervised dimensionality reduction method, which considers both labeled and unlabeled data, is preferable for improving the classification and generalization capability of the testing data. Among such techniques, graph-based semi-supervised learning methods have attracted a lot of attention due to their appealing properties in discovering discriminative structure and geometric structure of data points. Although they have achieved remarkable success, they cannot promise good performance when the size of the labeled data set is small, as a result of inaccurate class matrix variance approximated by insufficient labeled training data. In this paper, we tackle this problem by combining class membership probabilities estimated from unlabeled data and ground-truth class information associated with labeled data to more precisely characterize the class distribution. Therefore, it is expected to enhance performance in classification tasks. We refer to this approach as probabilistic semi-supervised discriminant analysis (PSDA). The proposed PSDA is applied to face and facial expression recognition tasks and is evaluated using the ORL, Extended Yale B, and CMU PIE face databases and the Cohn-Kanade facial expression database. The promising experimental results demonstrate the effectiveness of our proposed method.

  5. Toward realistic and practical ideal observer (IO) estimation for the optimization of medical imaging systems.

    PubMed

    He, Xin; Caffo, Brian S; Frey, Eric C

    2008-10-01

    The ideal observer (IO) employs complete knowledge of the available data statistics and sets an upper limit on observer performance on a binary classification task. However, the IO test statistic cannot be calculated analytically, except for cases where object statistics are extremely simple. Kupinski et al. have developed a Markov chain Monte Carlo (MCMC) based technique to compute the IO test statistic for, in principle, arbitrarily complex objects and imaging systems. In this work, we applied MCMC to estimate the IO test statistic in the context of myocardial perfusion SPECT (MPS). We modeled the imaging system using an analytic SPECT projector with attenuation, distance-dependent detector-response modeling and Poisson noise statistics. The object is a family of parameterized torso phantoms with variable geometric and organ uptake parameters. To accelerate the imaging simulation process and thus enable the MCMC IO estimation, we used discretized anatomic parameters and continuous uptake parameters in defining the objects. The imaging process simulation was modeled by precomputing projections for each organ for a finite number of discretely-parameterized anatomic parameters and taking linear combinations of the organ projections based on continuous sampling of the organ uptake parameters. The proposed method greatly reduces the computational burden and allows MCMC IO estimation for a realistic MPS imaging simulation. We validated the proposed IO estimation technique by estimating IO test statistics for a large number of input objects. The properties of the first- and second-order statistics of the IO test statistics estimated using the MCMC IO estimation technique agreed well with theoretical predictions. Further, as expected, the IO had better performance, as measured by the receiver operating characteristic (ROC) curve, than the Hotelling observer. This method is developed for SPECT imaging. However, it can be adapted to any linear imaging system.

  6. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, a high probability value is usually yielded in cluster events such as events with foreshocks and events that all occur in a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  7. Unbiased estimation of probability weighted moments and partial probability weighted moments from systematic and historical flood information and their application to estimating the GEV distribution

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.

    1990-12-01

    Unbiased estimators of probability weighted moments (PWM) and partial probability weighted moments (PPWM) from systematic and historical flood information are derived. Applications are made to estimating parameters and quantiles of the generalized extreme value (GEV) distribution. The effect of lower bound censoring, which might be deliberately introduced in practice, is also considered.
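
    For the purely systematic-record case (no historical or censored information), the unbiased PWM estimators and the standard L-moment fit of the GEV shape parameter can be sketched as follows; the historical-flood and partial PWM extensions developed in the paper are not reproduced, and the shape-parameter approximation used is the common Hosking-type formula.

        import numpy as np
        from math import gamma, log

        def unbiased_pwm(x, r):
            """Unbiased estimator b_r of beta_r = E[X F(X)^r] from a systematic record."""
            x = np.sort(np.asarray(x, dtype=float))      # ascending order statistics
            n = len(x)
            j = np.arange(1, n + 1)
            w = np.ones(n)
            for i in range(1, r + 1):
                w *= (j - i) / (n - i)
            return np.mean(w * x)

        def gev_from_pwm(x):
            """GEV parameters (location xi, scale alpha, shape k) via L-moments."""
            b0, b1, b2 = (unbiased_pwm(x, r) for r in (0, 1, 2))
            l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
            t3 = l3 / l2                                 # L-skewness
            c = 2.0 / (3.0 + t3) - log(2) / log(3)
            k = 7.8590 * c + 2.9554 * c ** 2             # Hosking-type approximation
            alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
            xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
            return xi, alpha, k

        # Synthetic annual flood series; a Gumbel parent corresponds to shape k -> 0.
        rng = np.random.default_rng(2)
        sample = rng.gumbel(loc=100.0, scale=25.0, size=60)
        print(gev_from_pwm(sample))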

  8. Estimation of the probability of error without ground truth and known a priori probabilities. [remote sensor performance

    NASA Technical Reports Server (NTRS)

    Havens, K. A.; Minster, T. C.; Thadani, S. G.

    1976-01-01

    The probability of error or, alternatively, the probability of correct classification (PCC) is an important criterion in analyzing the performance of a classifier. Labeled samples (those with ground truth) are usually employed to evaluate the performance of a classifier. Occasionally, the numbers of labeled samples are inadequate, or no labeled samples are available to evaluate a classifier's performance; for example, when crop signatures from one area from which ground truth is available are used to classify another area from which no ground truth is available. This paper reports the results of an experiment to estimate the probability of error using unlabeled test samples (i.e., without the aid of ground truth).

  9. On estimating the fracture probability of nuclear graphite components

    NASA Astrophysics Data System (ADS)

    Srinivasan, Makuteswara

    2008-10-01

    The properties of nuclear grade graphites exhibit anisotropy and could vary considerably within a manufactured block. Graphite strength is affected by the direction of alignment of the constituent coke particles, which is dictated by the forming method, coke particle size, and the size, shape, and orientation distribution of pores in the structure. In this paper, a Weibull failure probability analysis for components is presented using the American Society for Testing and Materials strength specification for nuclear grade graphites for core components in advanced high-temperature gas-cooled reactors. The risk of rupture (probability of fracture) and survival probability (reliability) of large graphite blocks are calculated for varying and discrete values of service tensile stresses. The limitations in these calculations are discussed from considerations of actual reactor environmental conditions that could potentially degrade the specification properties because of damage due to complex interactions between irradiation, temperature, stress, and variability in reactor operation.
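
    A minimal sketch of the two-parameter Weibull risk-of-rupture calculation that underlies this kind of analysis is shown below; the characteristic strength, Weibull modulus, and volume ratio are illustrative placeholders, not the ASTM specification values for any nuclear graphite grade.

        import numpy as np

        def weibull_failure_probability(stress, sigma_0, m, volume_ratio=1.0):
            """Two-parameter Weibull probability of fracture at a uniform tensile stress.

            sigma_0      : characteristic strength of the reference volume
            m            : Weibull modulus (strength scatter)
            volume_ratio : stressed volume / reference volume (size effect)
            """
            stress = np.asarray(stress, dtype=float)
            return 1.0 - np.exp(-volume_ratio * (stress / sigma_0) ** m)

        # Illustrative numbers only, not specification values for any graphite grade.
        sigma_0, m = 25.0, 10.0                               # MPa, dimensionless
        service_stress = np.array([5.0, 10.0, 15.0, 20.0])    # MPa

        p_f = weibull_failure_probability(service_stress, sigma_0, m, volume_ratio=2.0)
        for s, p in zip(service_stress, p_f):
            print(f"stress {s:4.1f} MPa   risk of rupture = {p:.2e}   reliability = {1.0 - p:.6f}")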

  10. Analytical solution to transient Richards' equation with realistic water profiles for vertical infiltration and parameter estimation

    NASA Astrophysics Data System (ADS)

    Hayek, Mohamed

    2016-06-01

    A general analytical model for one-dimensional transient vertical infiltration is presented. The model is based on a combination of the Brooks and Corey soil water retention function and a generalized hydraulic conductivity function. This leads to power law diffusivity and convective term for which the exponents are functions of the inverse of the pore size distribution index. Accordingly, the proposed analytical solution covers many existing realistic models in the literature. The general form of the analytical solution is simple and it expresses implicitly the depth as function of water content and time. It can be used to model infiltration through semi-infinite dry soils with prescribed water content or flux boundary conditions. Some mathematical expressions of practical importance are also derived. The general form solution is useful for comparison between models, validation of numerical solutions and for better understanding the effect of some hydraulic parameters. Based on the analytical expression, a complete inverse procedure which allows the estimation of the hydraulic parameters from water content measurements is presented.

  11. A new parametric method of estimating the joint probability density

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2017-04-01

    We present simple parametric methods that overcome major limitations of the literature on joint/marginal density estimation. In doing so, we do not assume any form of marginal or joint distribution. Furthermore, using our method, a multivariate density can be easily estimated if we know only one of the marginal densities. We apply our methods to financial data.

  12. A CONDITIONAL PROBABILITY APPROACH FOR ANALYZING SURVEY DATA TO ESTIMATE PROBABILITY OF IMPAIRMENT

    EPA Science Inventory

    A question that arises is how can survey data, collected with a random design, provide an initial screening for identifying unsampled areas that are likely to have biological impairment? A random sampling design provides estimates of relative fraction of the population of interes...

  13. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, one is highly likely to observe some artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities, and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurements from the TNPGN-Active (Turkish National Permanent
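
    A minimal sketch of the KDE-plus-moments idea, using scipy's Gaussian kernel density estimator on synthetic stand-in values rather than GNSS-derived TEC:

        import numpy as np
        from scipy.stats import gaussian_kde, kurtosis

        # Synthetic stand-in for TEC estimates (TECU): a quiet background plus a disturbed tail.
        rng = np.random.default_rng(3)
        tec = np.concatenate([rng.normal(25.0, 4.0, 900),
                              rng.normal(45.0, 8.0, 100)])

        # Non-parametric pdf estimate: no fixed functional form is imposed on the TEC.
        pdf = gaussian_kde(tec)              # Gaussian kernels, bandwidth by Scott's rule
        grid = np.linspace(tec.min(), tec.max(), 200)
        density = pdf(grid)

        # Statistical parameters derived from the sample.
        print("mean    :", tec.mean())
        print("variance:", tec.var(ddof=1))
        print("kurtosis:", kurtosis(tec))    # excess kurtosis; heavy tail from the disturbed part
        print("pdf mass:", (density * (grid[1] - grid[0])).sum())   # crude check, close to 1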

  14. A Non-Parametric Probability Density Estimator and Some Applications.

    DTIC Science & Technology

    1984-05-01

    [Only a fragment of this abstract survives in the scanned record; it indicates that one goal of the research is to develop a "hands-off" density estimator whose smoothing choices are made a priori.]

  15. Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?

    PubMed

    Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin

    2014-08-01

    Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for inaccuracy of disease probability estimates and to explore ways of improving accuracy.
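
    The literature-derived Bayesian post-test probabilities used as the reference standard follow directly from Bayes' theorem in odds form; a short sketch with illustrative numbers (not the study vignettes):

        def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
            """Bayes' theorem in odds form: post-test odds = pre-test odds * likelihood ratio."""
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        # Illustrative numbers: pre-test probability 10 %, positive test with
        # sensitivity 0.90 and specificity 0.80, so LR+ = 0.90 / 0.20 = 4.5.
        lr_positive = 0.90 / (1.0 - 0.80)
        print(round(post_test_probability(0.10, lr_positive), 3))   # about 0.33, not 0.90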

  16. Estimating background and threshold nitrate concentrations using probability graphs

    USGS Publications Warehouse

    Panno, S.V.; Kelly, W.R.; Martinsek, A.T.; Hackley, Keith C.

    2006-01-01

    Because of the ubiquitous nature of anthropogenic nitrate (NO3-) in many parts of the world, determining background concentrations of NO3- in shallow ground water from natural sources is probably impossible in most environments. Present-day background must now include diffuse sources of NO3- such as disruption of soils and oxidation of organic matter, and atmospheric inputs from products of combustion and evaporation of ammonia from fertilizer and livestock waste. Anomalies can be defined as NO3- derived from nitrogen (N) inputs to the environment from anthropogenic activities, including synthetic fertilizers, livestock waste, and septic effluent. Cumulative probability graphs were used to identify threshold concentrations separating background and anomalous NO3-N concentrations and to assist in the determination of sources of N contamination for 232 spring water samples and 200 well water samples from karst aquifers. Thresholds were 0.4, 2.5, and 6.7 mg/L for spring water samples, and 0.1, 2.1, and 17 mg/L for well water samples. The 0.4 and 0.1 mg/L values are assumed to represent thresholds for present-day precipitation. Thresholds at 2.5 and 2.1 mg/L are interpreted to represent present-day background concentrations of NO3-N. The population of spring water samples with concentrations between 2.5 and 6.7 mg/L represents an amalgam of all sources of NO3- in the ground water basins that feed each spring; concentrations >6.7 mg/L were typically samples collected soon after springtime application of synthetic fertilizer. The 17 mg/L threshold (adjusted to 15 mg/L) for well water samples is interpreted as the level above which livestock wastes dominate the N sources. Copyright © 2006 The Author(s).

  17. Effect of Prior Probability Quality on Biased Time-Delay Estimation

    PubMed Central

    Byram, Brett C.; Trahey, Gregg E.; Palmeri, Mark L.

    2012-01-01

    When properly constructed, biased estimators are known to produce lower mean-square errors than unbiased estimators. A biased estimator for the problem of ultrasound time-delay estimation was recently proposed. The proposed estimator incorporates knowledge of adjacent displacement estimates into the final estimate of a displacement. This is accomplished by using adjacent estimates to create a prior probability on the current estimate. Theory and simulations are used to investigate how the prior probability impacts the final estimate. The results show that with estimation quality on the order of the Cramer-Rao lower bound at adjacent locations, the local estimate in question should generally exceed the Cramer-Rao lower-bound limitations on performance of an unbiased estimator. The results as a whole provide additional confidence for the proposed estimator. PMID:22724313

  18. The Estimation of Probability of Extreme Events for Small Samples

    NASA Astrophysics Data System (ADS)

    Pisarenko, V. F.; Rodkin, M. V.

    2017-02-01

    The most general approach to the study of rare extreme events is based on the extreme value theory. The fundamental General Extreme Value Distribution lies at the basis of this theory, serving as the limit distribution for normalized maxima. It depends on three parameters. Usually the method of maximum likelihood (ML) is used for the estimation, as it possesses well-known optimal asymptotic properties. However, this method works efficiently only when the sample size is large enough (~200-500), whereas in many applications the sample size does not exceed 50-100. For such sizes, the advantage of the ML method in efficiency is not guaranteed. We have found that for this situation the method of statistical moments (SM) works more efficiently than other methods. The details of the estimation for small samples are studied. The SM is applied to the study of extreme earthquakes in three large virtual seismic zones, representing the regime of seismicity in subduction zones, the intracontinental regime of seismicity, and the regime in mid-ocean ridge zones. The 68%-confidence domains for pairs of parameters (ξ, σ) and (σ, μ) are derived.

  19. Estimation of the size of a closed population when capture probabilities vary among animals

    USGS Publications Warehouse

    Burnham, K.P.; Overton, W.S.

    1978-01-01

    A model which allows capture probabilities to vary by individuals is introduced for multiple recapture studies in closed populations. The set of individual capture probabilities is modelled as a random sample from an arbitrary probability distribution over the unit interval. We show that the capture frequencies are a sufficient statistic. A nonparametric estimator of population size is developed based on the generalized jackknife; this estimator is found to be a linear combination of the capture frequencies. Finally, tests of underlying assumptions are presented.
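
    The first- and second-order versions of the generalized jackknife estimator, which are linear combinations of the capture frequencies as stated above, can be sketched as follows (the paper develops the general k-th order estimator and an order-selection procedure, which are not reproduced here):

        import numpy as np

        def jackknife_population_size(capture_frequencies, t):
            """First- and second-order jackknife estimates of population size.

            capture_frequencies : f[i] = number of animals caught exactly i + 1 times
            t                   : number of capture occasions
            """
            f = np.asarray(capture_frequencies, dtype=float)
            s = f.sum()                                  # distinct animals caught
            f1 = f[0]
            f2 = f[1] if len(f) > 1 else 0.0
            n1 = s + (t - 1) / t * f1
            n2 = s + (2 * t - 3) / t * f1 - (t - 2) ** 2 / (t * (t - 1)) * f2
            return n1, n2

        # Example: 5 occasions; 43 animals caught once, 16 twice, 8, 6 and 2 more often.
        print(jackknife_population_size([43, 16, 8, 6, 2], t=5))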

  20. Estimating the Probability of a Diffusing Target Encountering a Stationary Sensor.

    DTIC Science & Technology

    1985-07-01

    [The abstract is not recoverable from this record; the OCR fragments identify it as Naval Postgraduate School technical report NPS55-85-013 (Monterey, California), "Estimating the Probability of a Diffusing Target Encountering a Stationary Sensor".]

  1. Overfitting, generalization, and MSE in class probability estimation with high-dimensional data.

    PubMed

    Kim, Kyung In; Simon, Richard

    2014-03-01

    Accurate class probability estimation is important for medical decision making but is challenging, particularly when the number of candidate features exceeds the number of cases. Special methods have been developed for nonprobabilistic classification, but relatively little attention has been given to class probability estimation with numerous candidate variables. In this paper, we investigate overfitting in the development of regularized class probability estimators. We investigate the relation between overfitting and accurate class probability estimation in terms of mean square error. Using simulation studies based on real datasets, we found that some degree of overfitting can be desirable for reducing mean square error. We also introduce a mean square error decomposition for class probability estimation that helps clarify the relationship between overfitting and prediction accuracy.

  2. Probability Estimation of CO2 Leakage Through Faults at Geologic Carbon Sequestration Sites

    SciTech Connect

    Zhang, Yingqi; Oldenburg, Curt; Finsterle, Stefan; Jordan, Preston; Zhang, Keni

    2008-11-01

    Leakage of CO{sub 2} and brine along faults at geologic carbon sequestration (GCS) sites is a primary concern for storage integrity. The focus of this study is on the estimation of the probability of leakage along faults or fractures. This leakage probability is controlled by the probability of a connected network of conduits existing at a given site, the probability of this network encountering the CO{sub 2} plume, and the probability of this network intersecting environmental resources that may be impacted by leakage. This work is designed to fit into a risk assessment and certification framework that uses compartments to represent vulnerable resources such as potable groundwater, health and safety, and the near-surface environment. The method we propose includes using percolation theory to estimate the connectivity of the faults, and generating fuzzy rules from discrete fracture network simulations to estimate leakage probability. By this approach, the probability of CO{sub 2} escaping into a compartment for a given system can be inferred from the fuzzy rules. The proposed method provides a quick way of estimating the probability of CO{sub 2} or brine leaking into a compartment. In addition, it provides the uncertainty range of the estimated probability.

  3. Bayesian Modal Estimation of the Four-Parameter Item Response Model in Real, Realistic, and Idealized Data Sets.

    PubMed

    Waller, Niels G; Feuerstahler, Leah

    2017-03-17

    In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
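
    The four-parameter model adds an upper asymptote d to the familiar three-parameter logistic item response function; a small sketch of the item response function itself is given below (the Bayesian modal estimation machinery of the study is not reproduced).

        import numpy as np

        def irf_4pm(theta, a, b, c, d):
            """Four-parameter logistic item response function.

            a : discrimination, b : difficulty,
            c : lower asymptote (guessing), d : upper asymptote (slipping, d < 1).
            """
            return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

        # Illustrative parameter values: d < 1 means even very high-ability respondents
        # endorse the item with probability at most d.
        theta = np.linspace(-4.0, 4.0, 9)
        print(np.round(irf_4pm(theta, a=1.5, b=0.0, c=0.15, d=0.90), 3))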

  4. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
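
    A minimal scikit-learn sketch of the kind of comparison discussed here, with class probability estimates from a random forest and an SVM (via its internal Platt-type calibration) evaluated against logistic regression on simulated data; scikit-learn is used purely for convenience and is not the implementation referenced by the authors.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
            "SVM (calibrated)": SVC(probability=True, random_state=0),
        }

        for name, model in models.items():
            model.fit(X_tr, y_tr)
            p = model.predict_proba(X_te)[:, 1]          # estimated P(Y = 1 | x)
            print(f"{name:20s} Brier score = {brier_score_loss(y_te, p):.4f}")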

  5. Experimental estimation of the photons visiting probability profiles in time-resolved diffuse reflectance measurement.

    PubMed

    Sawosz, P; Kacprzak, M; Weigl, W; Borowska-Solonynko, A; Krajewski, P; Zolek, N; Ciszek, B; Maniewski, R; Liebert, A

    2012-12-07

    A time-gated intensified CCD camera was applied for time-resolved imaging of light penetrating in an optically turbid medium. Spatial distributions of light penetration probability in the plane perpendicular to the axes of the source and the detector were determined at different source positions. Furthermore, visiting probability profiles of diffuse reflectance measurement were obtained by the convolution of the light penetration distributions recorded at different source positions. Experiments were carried out on homogeneous phantoms, more realistic two-layered tissue phantoms based on the human skull filled with Intralipid-ink solution and on cadavers. It was noted that the photons visiting probability profiles depend strongly on the source-detector separation, the delay between the laser pulse and the photons collection window and the complex tissue composition of the human head.

  6. Experimental estimation of the photons visiting probability profiles in time-resolved diffuse reflectance measurement

    NASA Astrophysics Data System (ADS)

    Sawosz, P.; Kacprzak, M.; Weigl, W.; Borowska-Solonynko, A.; Krajewski, P.; Zolek, N.; Ciszek, B.; Maniewski, R.; Liebert, A.

    2012-12-01

    A time-gated intensified CCD camera was applied for time-resolved imaging of light penetrating in an optically turbid medium. Spatial distributions of light penetration probability in the plane perpendicular to the axes of the source and the detector were determined at different source positions. Furthermore, visiting probability profiles of diffuse reflectance measurement were obtained by the convolution of the light penetration distributions recorded at different source positions. Experiments were carried out on homogeneous phantoms, more realistic two-layered tissue phantoms based on the human skull filled with Intralipid-ink solution and on cadavers. It was noted that the photons visiting probability profiles depend strongly on the source-detector separation, the delay between the laser pulse and the photons collection window and the complex tissue composition of the human head.

  7. Variable selection in large margin classifier-based probability estimation with high-dimensional predictors.

    PubMed

    Shin, Seung Jun; Wu, Yichao

    2014-07-01

    This is a discussion of the papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler.

  8. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula of aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, due to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during a period immediately after the main shock, say, during the first day, when the forecasting is most critical for the public in the affected area. Therefore, for the forecasting of a probability during such a period, they adopt a generic model with a set of the standard parameter values in California or Japan. For an effective and realistic estimation, I propose to utilize the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-values of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the provided data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and their b-value changes, taking the time-varying missing rates of events into account. The similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure will be shown for the recent examples
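
    The maximum likelihood b-value estimate cited above (Aki, 1965; Utsu, 1965) for events above the completeness magnitude Mc is b = log10(e) / (mean(M) - Mc), with a half-bin correction when magnitudes are binned; a short sketch follows (the time-varying detection-rate model of Ogata and Katsura is not reproduced).

        import numpy as np

        def b_value_mle(magnitudes, mc, bin_width=0.1):
            """Aki-Utsu maximum likelihood b-value for events with magnitude >= mc;
            bin_width / 2 corrects for magnitudes reported in discrete bins."""
            m = np.asarray(magnitudes, dtype=float)
            m = m[m >= mc]
            b = np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))
            se = b / np.sqrt(len(m))                     # rough standard error
            return b, se, len(m)

        # Synthetic catalog: true b = 1.0 above a completeness magnitude of 2.0, 0.1-unit bins.
        rng = np.random.default_rng(4)
        mags = np.round(1.95 + rng.exponential(scale=np.log10(np.e), size=5000), 1)
        print(b_value_mle(mags, mc=2.0))                 # b close to 1.0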

  9. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    PubMed

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution (CPD) method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the larger true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval increase with increasing input deviation; otherwise the CPD-computed probabilities decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimates still overlap the true death time interval.
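
    A minimal numeric sketch of the mechanism (assuming a normal back-calculated death-time distribution; the interval limits, standard deviation, and error sizes below are illustrative and not taken from the paper): the CPD assigns to a small no-alibi interval the probability mass of the temperature-based estimate renormalized to the externally known true death time interval, so shifting the estimate (an input error) shifts the conditional probability, and for an interval at the boundary adjacent to the shift the probability grows with the error:

      from scipy.stats import norm

      def cpd_probability(t_lo, t_hi, t_a, t_b, t_est, sd):
          """Conditional probability that death occurred in [t_lo, t_hi] given that it occurred
          in the externally known interval [t_a, t_b]; the temperature-based estimate is
          modeled as Normal(t_est, sd). All times in hours before discovery."""
          mass_interval = norm.cdf(t_hi, t_est, sd) - norm.cdf(t_lo, t_est, sd)
          mass_true = norm.cdf(t_b, t_est, sd) - norm.cdf(t_a, t_est, sd)
          return mass_interval / mass_true

      # True death time interval (last seen alive / found dead): 0 to 20 h before discovery.
      # No-alibi interval of a suspect: 16-20 h, i.e. at the interval boundary.
      for bias in (0.0, 1.0, 2.0):          # error added to the death time estimate
          p = cpd_probability(16, 20, 0, 20, t_est=14 + bias, sd=2.5)
          print(f"input error {bias:+.1f} h -> CPD probability {p:.3f}")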

  10. Photo-Realistic Statistical Skull Morphotypes: New Exemplars for Ancestry and Sex Estimation in Forensic Anthropology.

    PubMed

    Caple, Jodi; Stephan, Carl N

    2016-12-01

    Graphic exemplars of cranial sex and ancestry are essential to forensic anthropology for standardizing casework, training analysts, and communicating group trends. To date, graphic exemplars have comprised hand-drawn sketches or photographs of individual specimens, which risk bias/subjectivity. Here, we performed quantitative analysis of photographic data to generate new photo-realistic and objective exemplars of skull form. Standardized anterior and left lateral photographs of skulls for each sex were analyzed in the computer graphics program Psychomorph for the following groups: South African Blacks, South African Whites, American Blacks, American Whites, and Japanese. The average cranial form was calculated for each photographic view, before the color information for every individual was warped to the average form and combined to produce statistical averages. These mathematically derived exemplars, and their statistical exaggerations or extremes, retain the high-resolution detail of the original photographic dataset, making them ideal casework and training reference standards.

  11. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.

  12. Generalizations and Extensions of the Probability of Superiority Effect Size Estimator

    ERIC Educational Resources Information Center

    Ruscio, John; Gera, Benjamin Lee

    2013-01-01

    Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…

  13. Improving quality of sample entropy estimation for continuous distribution probability functions

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2016-05-01

    Entropy is one of the key parameters characterizing the state of a system in statistical physics. Although entropy is defined for systems described by both discrete and continuous probability distribution functions (PDFs), in numerous applications the sample entropy is estimated from a histogram, which in effect means that the continuous PDF is represented by a set of probabilities. Such a procedure may lead to ambiguities and even misinterpretation of the results. Within this paper, two possible general algorithms based on continuous PDF estimation are discussed in application to the Shannon and Tsallis entropies. It is shown that the proposed algorithms may improve entropy estimation, particularly in the case of small data sets.
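
    As a small illustration of the general point (not the paper's specific algorithms), the sketch below contrasts a histogram-based Shannon entropy estimate with one based on a continuous PDF estimate (a Gaussian kernel density) for a small normal sample; the sample size, bin count, and default bandwidth are arbitrary choices:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)
      x = rng.normal(size=200)                      # small sample from N(0, 1)
      true_H = 0.5 * np.log(2 * np.pi * np.e)       # exact differential entropy of N(0, 1)

      # Histogram-based estimate: entropy of the bin probabilities plus log(bin width).
      counts, edges = np.histogram(x, bins=20)
      p = counts / counts.sum()
      width = edges[1] - edges[0]
      H_hist = -np.sum(p[p > 0] * np.log(p[p > 0])) + np.log(width)

      # Continuous-PDF estimate: Gaussian kernel density, entropy by numerical integration.
      kde = gaussian_kde(x)
      grid = np.linspace(x.min() - 3, x.max() + 3, 2000)
      f = kde(grid)
      H_kde = -np.sum(f * np.log(f + 1e-300)) * (grid[1] - grid[0])

      print(f"true {true_H:.3f}  histogram {H_hist:.3f}  KDE {H_kde:.3f}")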

  14. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, where most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.

  15. Estimating transition probabilities for stage-based population projection matrices using capture-recapture data

    USGS Publications Warehouse

    Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.

    1992-01-01

    In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).

  16. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples is restricted, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation, which should avoid many of these difficulties, is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  17. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. The
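
    The structure of the calculation lends itself to a short worked sketch using the values quoted above (eruption probability 0.008 and wind probability 0.180); the third factor, the conditional probability of equaling or exceeding a given thickness at 200 km, is what the 14 measured deposits constrain:

      # Annual probability that tephra accumulation at the site exceeds a given thickness,
      # following the product-of-three-probabilities structure described above.
      p_eruption = 0.008     # annual probability of an eruption of >= 0.1 km^3 (Mount St. Helens record)
      p_wind = 0.180         # probability the wind blows from Mount St. Helens toward the Hanford Site

      def annual_exceedance(p_thickness_given_eruption_and_wind):
          return p_eruption * p_wind * p_thickness_given_eruption_and_wind

      # For the 1-in-10,000 annual target, the implied conditional exceedance probability is:
      target = 1e-4
      print("required conditional exceedance probability:", target / (p_eruption * p_wind))  # ~0.069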

  18. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools shows significant performance advantages over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality network between elements of a complex system.

  19. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
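
    A simplified sketch of a removal-type estimator consistent with this description (assuming a constant per-minute detection rate, so that the time to first detection is exponential and the 3-, 2-, and 5-min first-detection counts are multinomial; the counts below are illustrative and this is not the authors' exact parameterization):

      import numpy as np
      from scipy.optimize import minimize_scalar

      # First-detection counts in the 0-3, 3-5, and 5-10 min intervals (illustrative numbers).
      counts = np.array([60, 15, 15])
      bounds_min = np.array([0.0, 3.0, 5.0, 10.0])

      def neg_log_lik(rate):
          """Multinomial likelihood of first-detection intervals for birds detected at least once,
          assuming an exponential time to first detection with the given per-minute rate."""
          if rate <= 0:
              return np.inf
          p_first = np.exp(-rate * bounds_min[:-1]) - np.exp(-rate * bounds_min[1:])
          p_detected = 1.0 - np.exp(-rate * bounds_min[-1])     # detected within the full 10 min
          return -np.sum(counts * np.log(p_first / p_detected))

      res = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
      rate_hat = res.x
      p_hat = 1.0 - np.exp(-rate_hat * 10.0)    # overall detection probability in a 10-min count
      print(f"per-minute detection rate {rate_hat:.3f}, 10-min detection probability {p_hat:.2f}")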

  20. Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.

    2016-09-01

    The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using the example of two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass by at a dangerous distance. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: points distributed over the entire volume and points mapped onto the boundary surface. A special feature of the multidimensional point distribution in the first variant, which can lead to a zero probability of collision even for bodies that do collide with the Earth, is demonstrated. The probability estimates obtained with a considerably smaller number of points in the confidence region determined by its boundary surface are free from this disadvantage.
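
    The boundary-surface construction is specific to the paper, but the basic Monte Carlo collision estimate can be sketched very simply (a toy version that samples the encounter-time position directly from an assumed error covariance rather than propagating orbits; all numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      R_CAPTURE_KM = 6571.0      # Earth radius plus ~200 km, an illustrative capture radius

      # Illustrative nominal miss vector (km) and position covariance at the encounter epoch.
      mean_miss = np.array([4000.0, 2000.0, 500.0])
      cov = np.diag([3000.0, 2000.0, 800.0]) ** 2

      # Draw candidate encounter positions and count the fraction inside the capture radius.
      samples = rng.multivariate_normal(mean_miss, cov, size=200_000)
      miss_dist = np.linalg.norm(samples, axis=1)
      p_collision = np.mean(miss_dist < R_CAPTURE_KM)
      print(f"Monte Carlo collision probability: {p_collision:.4f}")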

  1. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
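
    A sketch of the bootstrap variance estimator in this setting, assuming the lifelines and scikit-learn packages are available (the column names, data-generating step, and number of replicates are illustrative): each replicate resamples subjects with replacement, re-estimates the propensity scores, refits the weighted Cox model, and the standard deviation of the replicate log-hazard ratios serves as the standard error:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 2000
      x = rng.normal(size=n)                                  # confounder
      treat = rng.binomial(1, 1 / (1 + np.exp(-x)))           # treatment assignment depends on x
      time = rng.exponential(1 / np.exp(0.5 * x - 0.3 * treat))
      event = (time < 5).astype(int)                          # administrative censoring at t = 5
      df = pd.DataFrame({"x": x, "treat": treat, "time": np.minimum(time, 5), "event": event})

      def iptw_log_hr(d):
          """Fit propensity scores, build ATE weights, and return the weighted Cox log-HR of treatment."""
          ps = LogisticRegression().fit(d[["x"]], d["treat"]).predict_proba(d[["x"]])[:, 1]
          d = d.assign(w=np.where(d["treat"] == 1, 1 / ps, 1 / (1 - ps)))
          cph = CoxPHFitter()
          cph.fit(d[["time", "event", "treat", "w"]], duration_col="time",
                  event_col="event", weights_col="w", robust=True)
          return cph.params_["treat"]

      est = iptw_log_hr(df)
      boot = [iptw_log_hr(df.sample(frac=1, replace=True).reset_index(drop=True)) for _ in range(200)]
      print(f"log-HR {est:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")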

  2. Impaired probability estimation and decision-making in pathological gambling poker players.

    PubMed

    Linnet, Jakob; Frøslev, Mette; Ramsgaard, Stine; Gebauer, Line; Mouridsen, Kim; Wohlert, Victoria

    2012-03-01

    Poker has gained tremendous popularity in recent years, increasing the risk for some individuals to develop pathological gambling. Here, we investigated cognitive biases in a computerized two-player poker task against a fictive opponent, among 12 pathological gambling poker players (PGP), 10 experienced poker players (ExP), and 11 inexperienced poker players (InP). Players were compared on probability estimation and decision-making with the hypothesis that ExP would have significantly lower cognitive biases than PGP and InP, and that the groups could be differentiated based on their cognitive bias styles. The results showed that ExP had a significantly lower average error margin in probability estimation than PGP and InP, and that PGP played hands with lower winning probability than ExP. Binomial logistic regression showed perfect differentiation (100%) between ExP and PGP, and 90.5% classification accuracy between ExP and InP. Multinomial logistic regression showed an overall classification accuracy of 23 out of 33 (69.7%) between the three groups. The classification accuracy of ExP was higher than that of PGP and InP due to the similarities in probability estimation and decision-making between PGP and InP. These impairments in probability estimation and decision-making of PGP may have implications for assessment and treatment of cognitive biases in pathological gambling poker players.

  3. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is reported in the literature with moment magnitude (Mw) values spanning 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions as a function of the number, the epicentral distance, and the azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and must be accompanied by their related uncertainties, within a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  4. Incorporating diverse data and realistic complexity into demographic estimation procedures for sea otters.

    PubMed

    Tinker, M Tim; Doak, Daniel F; Estes, James A; Hatfield, Brian B; Staedler, Michelle M; Bodkin, James L

    2006-12-01

    Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks.

  5. Incorporating diverse data and realistic complexity into demographic estimation procedures for sea otters

    USGS Publications Warehouse

    Tinker, M. Timothy; Doak, Daniel F.; Estes, James A.; Hatfield, Brian B.; Staedler, Michelle M.; Bodkin, James L

    2006-01-01

    Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks. © 2006 by the Ecological Society of America.

  6. Differential Survival in Europe and the United States: Estimates Based on Subjective Probabilities of Survival

    PubMed Central

    Delavande, Adeline; Rohwedder, Susann

    2013-01-01

    Cross-country comparisons of differential survival by socioeconomic status (SES) are useful in many domains. Yet, to date, such studies have been rare. Reliably estimating differential survival in a single country has been challenging because it requires rich panel data with a large sample size. Cross-country estimates have proven even more difficult because the measures of SES need to be comparable internationally. We present an alternative method for acquiring information on differential survival by SES. Rather than using observations of actual survival, we relate individuals’ subjective probabilities of survival to SES variables in cross section. To show that subjective survival probabilities are informative proxies for actual survival when estimating differential survival, we compare estimates of differential survival based on actual survival with estimates based on subjective probabilities of survival for the same sample. The results are remarkably similar. We then use this approach to compare differential survival by SES for 10 European countries and the United States. Wealthier people have higher survival probabilities than those who are less wealthy, but the strength of the association differs across countries. Nations with a smaller gradient appear to be Belgium, France, and Italy, while the United States, England, and Sweden appear to have a larger gradient. PMID:22042664

  7. Bayes' theorem and diagnostic tests in neuropsychology: interval estimates for post-test probabilities.

    PubMed

    Crawford, John R; Garthwaite, Paul H; Betkowska, Karolina

    2009-05-01

    Most neuropsychologists are aware that, given the specificity and sensitivity of a test and an estimate of the base rate of a disorder, Bayes' theorem can be used to provide a post-test probability for the presence of the disorder given a positive test result (and a post-test probability for the absence of a disorder given a negative result). However, in the standard application of Bayes' theorem the three quantities (sensitivity, specificity, and the base rate) are all treated as fixed, known quantities. This is very unrealistic as there may be considerable uncertainty over these quantities and therefore even greater uncertainty over the post-test probability. Methods of obtaining interval estimates on the specificity and sensitivity of a test are set out. In addition, drawing and extending upon work by Mossman and Berger (2001), a Monte Carlo method is used to obtain interval estimates for post-test probabilities. All the methods have been implemented in a computer program, which is described and made available (www.abdn.ac.uk/~psy086/dept/BayesPTP.htm). When objective data on the base rate are lacking (or have limited relevance to the case at hand) the program elicits opinion for the pre-test probability.
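
    A minimal Monte Carlo sketch of the interval idea (not the authors' program): represent uncertainty in sensitivity and specificity by Beta posteriors built from assumed validation counts, represent base-rate uncertainty by another Beta, push each draw through Bayes' theorem, and take percentiles of the resulting post-test probabilities. All counts below are illustrative:

      import numpy as np

      rng = np.random.default_rng(4)
      draws = 100_000

      # Illustrative validation counts: 45/50 impaired cases test positive, 90/100 controls test negative.
      sens = rng.beta(45 + 1, 5 + 1, draws)      # sensitivity, Beta posterior with a uniform prior
      spec = rng.beta(90 + 1, 10 + 1, draws)     # specificity, Beta posterior with a uniform prior
      base = rng.beta(20 + 1, 80 + 1, draws)     # base rate of the disorder (illustrative counts)

      # Bayes' theorem: post-test probability of the disorder given a positive test result.
      ppv = sens * base / (sens * base + (1 - spec) * (1 - base))

      print("point estimate (median): %.3f" % np.median(ppv))
      print("95%% interval: %.3f - %.3f" % tuple(np.percentile(ppv, [2.5, 97.5])))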

  8. Parameter Estimation for Binary Neutron-star Coalescences with Realistic Noise during the Advanced LIGO Era

    NASA Astrophysics Data System (ADS)

    Berry, Christopher P. L.; Mandel, Ilya; Middleton, Hannah; Singer, Leo P.; Urban, Alex L.; Vecchio, Alberto; Vitale, Salvatore; Cannon, Kipp; Farr, Ben; Farr, Will M.; Graff, Philip B.; Hanna, Chad; Haster, Carl-Johan; Mohapatra, Satya; Pankow, Chris; Price, Larry R.; Sidery, Trevor; Veitch, John

    2015-05-01

    Advanced ground-based gravitational-wave (GW) detectors begin operation imminently. Their intended goal is not only to make the first direct detection of GWs, but also to make inferences about the source systems. Binary neutron-star mergers are among the most promising sources. We investigate the performance of the parameter-estimation (PE) pipeline that will be used during the first observing run of the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in 2015: we concentrate on the ability to reconstruct the source location on the sky, but also consider the ability to measure masses and the distance. Accurate, rapid sky localization is necessary to alert electromagnetic (EM) observatories so that they can perform follow-up searches for counterpart transient events. We consider PE accuracy in the presence of non-stationary, non-Gaussian noise. We find that the character of the noise makes negligible difference to the PE performance at a given signal-to-noise ratio. The source luminosity distance can only be poorly constrained, since the median 90% (50%) credible interval scaled with respect to the true distance is 0.85 (0.38). However, the chirp mass is well measured. Our chirp-mass estimates are subject to systematic error because we used gravitational-waveform templates without component spin to carry out inference on signals with moderate spins, but the total error is typically less than 10^-3 M⊙. The median 90% (50%) credible region for sky localization is ~600 deg² (~150 deg²), with 3% (30%) of detected events localized within 100 deg². Early aLIGO, with only two detectors, will have a sky-localization accuracy for binary neutron stars of hundreds of square degrees; this makes EM follow-up challenging, but not impossible.

  9. Competing events influence estimated survival probability: when is Kaplan-Meier analysis appropriate?

    PubMed

    Biau, David Jean; Latouche, Aurélien; Porcher, Raphaël

    2007-09-01

    The Kaplan-Meier estimator is the current method for estimating the probability of an event to occur with time in orthopaedics. However, the Kaplan-Meier estimator was designed to estimate the probability of an event that eventually will occur for all patients, ie, death, and this does not hold for other outcomes. For example, not all patients will experience hip arthroplasty loosening because some may die first, and some may have their implant removed to treat infection or recurrent hip dislocation. Such events that preclude the observation of the event of interest are called competing events. We suggest the Kaplan-Meier estimator is inappropriate in the presence of competing events and show that it overestimates the probability of the event of interest to occur with time. The cumulative incidence estimator is an alternative approach to Kaplan-Meier in situations where competing risks are likely. Three common situations include revision for implant loosening in the long-term followup of arthroplasties or implant failure in the context of limb-salvage surgery or femoral neck fracture.
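
    A self-contained sketch of the comparison (illustrative exponential event times, not the authors' data): the cumulative incidence function accumulates cause-specific events weighted by the overall survival, whereas one minus the Kaplan-Meier estimate that censors competing events is systematically larger:

      import numpy as np

      def cumulative_incidence(times, causes, cause_of_interest, t_eval):
          """Aalen-Johansen cumulative incidence for one event type, plus the naive 1-KM
          obtained by treating competing events as censoring. causes: 0 = censored, 1, 2 = event types."""
          order = np.argsort(times)
          times, causes = times[order], causes[order]
          n_at_risk = len(times)
          surv = 1.0          # overall Kaplan-Meier survival just before the current time
          cif = 0.0
          km_naive = 1.0      # KM treating competing events as censoring
          for t, c in zip(times, causes):
              if t > t_eval:
                  break
              if c != 0:
                  if c == cause_of_interest:
                      cif += surv * (1 / n_at_risk)
                      km_naive *= 1 - 1 / n_at_risk
                  surv *= 1 - 1 / n_at_risk      # any event removes the subject from the risk set
              n_at_risk -= 1
          return cif, 1 - km_naive

      rng = np.random.default_rng(5)
      n = 5000
      t1 = rng.exponential(10, n)    # time to revision for loosening (event of interest)
      t2 = rng.exponential(15, n)    # time to death or removal for another reason (competing event)
      c = rng.exponential(25, n)     # censoring time
      times = np.minimum.reduce([t1, t2, c])
      causes = np.where(times == c, 0, np.where(times == t1, 1, 2))
      cif, one_minus_km = cumulative_incidence(times, causes, 1, t_eval=10.0)
      print(f"cumulative incidence {cif:.3f} vs 1-KM {one_minus_km:.3f} (KM overestimates)")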

  10. Estimating Promotion Probabilities of Navy Officers Based on Individual’s Attributes and Other Global Effects

    DTIC Science & Technology

    2012-09-01

    ...incorporates macroeconomic and policy-level information. In the first step, the conditional probabilities of staying in or leaving the Navy are estimated... The approach accommodates time-dependent information, cohort information, and censoring problems with the data, as well as incorporating macroeconomic and policy-level information...

  11. Mediators of the Availability Heuristic in Probability Estimates of Future Events.

    ERIC Educational Resources Information Center

    Levi, Ariel S.; Pryor, John B.

    Individuals often estimate the probability of future events by the ease with which they can recall or cognitively construct relevant instances. Previous research has not precisely identified the cognitive processes mediating this "availability heuristic." Two potential mediators (imagery of the event, perceived reasons or causes for the…

  12. Estimating the probability of identity among genotypes in natural populations: cautions and guidelines.

    PubMed

    Waits, L P; Luikart, G; Taberlet, P

    2001-01-01

    Individual identification using DNA fingerprinting methods is emerging as a critical tool in conservation genetics and molecular ecology. Statistical methods that estimate the probability of sampling identical genotypes using theoretical equations generally assume random associations between alleles within and among loci. These calculations are probably inaccurate for many animal and plant populations due to population substructure. We evaluated the accuracy of probability of identity (P(ID)) estimation by comparing the observed and expected P(ID), using large nuclear DNA microsatellite data sets from three endangered species: the grey wolf (Canis lupus), the brown bear (Ursus arctos), and the Australian northern hairy-nosed wombat (Lasiorhinus krefftii). The theoretical estimates of P(ID) were consistently lower than the observed P(ID), and can differ by as much as three orders of magnitude. To help researchers and managers avoid potential problems associated with this bias, we introduce an equation for P(ID) between sibs. This equation provides an estimator that can be used as a conservative upper bound for the probability of observing identical multilocus genotypes between two individuals sampled from a population. We suggest computing the actual observed P(ID) when possible and give general guidelines for the number of codominant and dominant marker loci required to achieve a reasonably low P(ID) (e.g. 0.01-0.0001).
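
    A short sketch of the quantities involved, using the standard single-locus expressions multiplied across loci under an assumption of independence; the sibling formula is the commonly cited conservative upper bound, and the allele frequencies below are illustrative:

      import numpy as np
      from itertools import combinations

      def pid_unrelated(p):
          """Single-locus probability that two unrelated individuals share a genotype,
          assuming Hardy-Weinberg proportions and random association of alleles."""
          p = np.asarray(p, dtype=float)
          return np.sum(p ** 4) + sum((2 * p[i] * p[j]) ** 2 for i, j in combinations(range(len(p)), 2))

      def pid_sibs(p):
          """Single-locus probability of identity between full siblings (conservative upper bound)."""
          p = np.asarray(p, dtype=float)
          s2, s4 = np.sum(p ** 2), np.sum(p ** 4)
          return 0.25 + 0.5 * s2 + 0.5 * s2 ** 2 - 0.25 * s4

      # Illustrative allele frequencies for three microsatellite loci.
      loci = [np.array([0.4, 0.3, 0.2, 0.1]),
              np.array([0.5, 0.25, 0.25]),
              np.array([0.3, 0.3, 0.2, 0.1, 0.1])]

      print("multilocus P(ID), unrelated:", np.prod([pid_unrelated(p) for p in loci]))
      print("multilocus P(ID), sibs     :", np.prod([pid_sibs(p) for p in loci]))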

  13. Estimating the Probability of Being the Best System: A Generalized Method and Nonparametric Hypothesis Test

    DTIC Science & Technology

    2013-03-01

    ...to estimate these unknown multinomial success probabilities for each of the systems [17]. Bechhofer and Sobel [18] made use of multinomial

  14. Transition probability estimates for non-Markov multi-state models.

    PubMed

    Titman, Andrew C

    2015-12-01

    Non-parametric estimation of the transition probabilities in multi-state models is considered for non-Markov processes. Firstly, a generalization of the estimator of Pepe et al. (1991, Statistics in Medicine) is given for a class of progressive multi-state models based on the difference between Kaplan-Meier estimators. Secondly, a general estimator for progressive or non-progressive models is proposed based upon constructed univariate survival or competing risks processes which retain the Markov property. The properties of the estimators and their associated standard errors are investigated through simulation. The estimators are demonstrated on datasets relating to survival and recurrence in patients with colon cancer and prothrombin levels in liver cirrhosis patients.

  15. Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem

    USGS Publications Warehouse

    O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.

    2006-01-01

    Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.

  16. Estimating Transitional Probabilities with Cross-Sectional Data to Assess Smoking Behavior Progression: A Validation Analysis

    PubMed Central

    Chen, Xinguang; Lin, Feng

    2013-01-01

    Background and objective New analytical tools are needed to advance tobacco research, tobacco control planning and tobacco use prevention practice. In this study, we validated a method to extract information from cross-sectional survey for quantifying population dynamics of adolescent smoking behavior progression. Methods With a 3-stage 7-path model, probabilities of smoking behavior progression were estimated employing the Probabilistic Discrete Event System (PDES) method and the cross-sectional data from 1997-2006 National Survey on Drug Use and Health (NSDUH). Validity of the PDES method was assessed using data from the National Longitudinal Survey of Youth 1997 and trends in smoking transition covering the period during which funding for tobacco control was cut substantively in 2003 in the United States. Results Probabilities for all seven smoking progression paths were successfully estimated with the PDES method and the NSDUH data. The absolute difference in the estimated probabilities between the two approaches varied from 0.002 to 0.076 (p>0.05 for all) and were highly correlated with each other (R2=0.998, p<0.01). Changes in the estimated transitional probabilities across the 1997-2006 reflected the 2003 funding cut for tobacco control. Conclusions The PDES method has validity in quantifying population dynamics of smoking behavior progression with cross-sectional survey data. The estimated transitional probabilities add new evidence supporting more advanced tobacco research, tobacco control planning and tobacco use prevention practice. This method can be easily extended to study other health risk behaviors. PMID:25279247

  17. Estimates of annual survival probabilities for adult Florida manatees (Trichechus manatus latirostris)

    USGS Publications Warehouse

    Langtimm, C.A.; O'Shea, T.J.; Pradel, R.; Beck, C.A.

    1998-01-01

    The population dynamics of large, long-lived mammals are particularly sensitive to changes in adult survival. Understanding factors affecting survival patterns is therefore critical for developing and testing theories of population dynamics and for developing management strategies aimed at preventing declines or extinction in such taxa. Few studies have used modern analytical approaches for analyzing variation and testing hypotheses about survival probabilities in large mammals. This paper reports a detailed analysis of annual adult survival in the Florida manatee (Trichechus manatus latirostris), an endangered marine mammal, based on a mark-recapture approach. Natural and boat-inflicted scars distinctively 'marked' individual manatees that were cataloged in a computer-based photographic system. Photo-documented resightings provided 'recaptures.' Using open population models, annual adult-survival probabilities were estimated for manatees observed in winter in three areas of Florida: Blue Spring, Crystal River, and the Atlantic coast. After using goodness-of-fit tests in Program RELEASE to search for violations of the assumptions of mark-recapture analysis, survival and sighting probabilities were modeled under several different biological hypotheses with Program SURGE. Estimates of mean annual probability of sighting varied from 0.948 for Blue Spring to 0.737 for Crystal River and 0.507 for the Atlantic coast. At Crystal River and Blue Spring, annual survival probabilities were best estimated as constant over the study period at 0.96 (95% CI = 0.951-0.975 and 0.900-0.985, respectively). On the Atlantic coast, where manatees are impacted more by human activities, annual survival probabilities had a significantly lower mean estimate of 0.91 (95% CI = 0.887-0.926) and varied unpredictably over the study period. For each study area, survival did not differ between sexes and was independent of relative adult age. The high constant adult-survival probabilities estimated

  18. PIGS: improved estimates of identity-by-descent probabilities by probabilistic IBD graph sampling

    PubMed Central

    2015-01-01

    Identifying segments in the genome of different individuals that are identical-by-descent (IBD) is a fundamental element of genetics. IBD data is used for numerous applications including demographic inference, heritability estimation, and mapping disease loci. Simultaneous detection of IBD over multiple haplotypes has proven to be computationally difficult. To overcome this, many state of the art methods estimate the probability of IBD between each pair of haplotypes separately. While computationally efficient, these methods fail to leverage the clique structure of IBD resulting in less powerful IBD identification, especially for small IBD segments. We develop a hybrid approach (PIGS), which combines the computational efficiency of pairwise methods with the power of multiway methods. It leverages the IBD graph structure to compute the probability of IBD conditional on all pairwise estimates simultaneously. We show via extensive simulations and analysis of real data that our method produces a substantial increase in the number of identified small IBD segments. PMID:25860540

  19. Estimating site occupancy rates when detection probabilities are less than one

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.

    2002-01-01

    Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. The model provides a flexible framework enabling covariate information to be included and allowing for missing observations; simulation results suggest the estimator performs well when detection probabilities are moderate (>0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
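
    A minimal sketch of the likelihood for the single-season case with constant occupancy (psi) and detection (p) over K repeat visits (synthetic detection data; this is the zero-inflated-binomial form of the model, not the authors' covariate version): sites with at least one detection are known to be occupied, while all-zero histories mix "occupied but missed" with "truly unoccupied":

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(6)
      n_sites, K = 32, 5
      psi_true, p_true = 0.6, 0.4
      z = rng.random(n_sites) < psi_true                     # latent occupancy state of each site
      y = rng.binomial(K, p_true, n_sites) * z               # detections out of K visits

      def neg_log_lik(theta):
          psi, p = expit(theta)                              # keep both parameters in (0, 1)
          lik_occupied = psi * p ** y * (1 - p) ** (K - y)
          # All-zero sites may be occupied-but-missed or truly unoccupied.
          lik = np.where(y > 0, lik_occupied, lik_occupied + (1 - psi))
          return -np.sum(np.log(lik))

      res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
      psi_hat, p_hat = expit(res.x)
      print(f"naive occupancy {np.mean(y > 0):.2f}, estimated psi {psi_hat:.2f}, p {p_hat:.2f}")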

  20. On the Estimation of Detection Probabilities for Sampling Stream-Dwelling Fishes.

    SciTech Connect

    Peterson, James T.

    1999-11-01

    To examine the adequacy of fish probability of detection estimates, I examined distributional properties of survey and monitoring data for bull trout (Salvelinus confluentus), brook trout (Salvelinus fontinalis), westslope cutthroat trout (Oncorhynchus clarki lewisi), chinook salmon parr (Oncorhynchus tshawytscha), and steelhead/redband trout (Oncorhynchus mykiss spp.), from 178 streams in the Interior Columbia River Basin. Negative binomial dispersion parameters varied considerably among species and streams, but were significantly (P<0.05) positively related to fish density. Across streams, the variances in fish abundances differed greatly among species and indicated that the data for all species were overdispersed with respect to the Poisson (i.e., the variances exceeded the means). This significantly affected Poisson probability of detection estimates, which were the highest across species and were, on average, 3.82, 2.66, and 3.47 times greater than baseline values. Required sample sizes for species detection at the 95% confidence level were also lowest for the Poisson, which underestimated sample size requirements an average of 72% across species. Negative binomial and Poisson-gamma probability of detection and sample size estimates were more accurate than the Poisson and generally less than 10% from baseline values. My results indicate the Poisson and binomial assumptions often are violated, which results in probability of detection estimates that are biased high and sample size estimates that are biased low. To increase the accuracy of these estimates, I recommend that future studies use predictive distributions that can incorporate multiple sources of uncertainty or excess variance and that all distributional assumptions be explicitly tested.

  1. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
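
    A sketch of the Gutenberg-Richter clustering calculation referred to above, written as a Reasenberg-Jones-type rate integrated over the forecast window; the parameter values are commonly quoted generic California values and are assumptions here, not numbers taken from the paper, although the result is of the same order as the 0.0009 quoted above:

      import numpy as np
      from scipy.integrate import quad

      # Generic Reasenberg-Jones-type parameters (assumed, illustrative "generic California" values).
      a, b, p, c = -1.67, 0.91, 1.08, 0.05     # c in days

      def rate(t, M_main, M_target):
          """Expected rate (per day) of events with magnitude >= M_target at time t (days) after a
          magnitude M_main event, under a modified-Omori / Gutenberg-Richter clustering model."""
          return 10 ** (a + b * (M_main - M_target)) * (t + c) ** (-p)

      def prob_at_least_one(M_main, M_target, t1, t2):
          expected, _ = quad(rate, t1, t2, args=(M_main, M_target))
          return 1 - np.exp(-expected)

      # Probability that an M >= 7 event follows an M 4.8 event within three days:
      print(f"{prob_at_least_one(4.8, 7.0, 0.0, 3.0):.5f}")   # ~1e-3 with these assumed parameters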

  2. A maximum a posteriori probability time-delay estimation for seismic signals

    NASA Astrophysics Data System (ADS)

    Carrier, A.; Got, J.-L.

    2014-09-01

    Cross-correlation and cross-spectral time delays often exhibit strong outliers due to ambiguities or cycle jumps in the correlation function. Their number increases when the signal-to-noise ratio, signal similarity or spectral bandwidth decreases. Such outliers heavily determine the time-delay probability density function and the results of further computations (e.g. double-difference location and tomography) using these time delays. In the present research we express cross-correlation as a function of the squared difference between signal amplitudes and show that the two are closely related. We use this difference as a cost function whose minimum is reached when the signals are aligned. Ambiguities in this function may be removed by using a priori information. We propose using the traveltime difference as a priori time-delay information. By modelling the probability density function of the traveltime difference with a Cauchy distribution and the probability density function of the data (differences of seismic signal amplitudes) with a Laplace distribution, we obtain the time-delay a posteriori probability density function explicitly. The location of the maximum of this a posteriori probability density function is the maximum a posteriori time-delay estimate for earthquake signals. Using this estimate to calculate time delays for earthquakes on the south flank of Kilauea statistically improved the cross-correlation time-delay estimation for these data and resulted in successful double-difference relocation for an increased number of earthquakes. This robust time-delay estimation improves the spatiotemporal resolution of seismicity rates on the south flank of Kilauea.
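
    A toy grid-search version of the estimator described above (synthetic signals; the scales and prior centre are illustrative): the misfit between the shifted waveforms is scored with a Laplace (L1) log-likelihood, a Cauchy log-prior centred on the predicted traveltime difference is added, and the delay maximizing the sum is the MAP estimate:

      import numpy as np

      rng = np.random.default_rng(7)
      dt = 0.01                                          # sampling interval (s)
      t = np.arange(0, 4, dt)
      wavelet = np.exp(-((t - 1.0) / 0.05) ** 2) * np.sin(2 * np.pi * 8 * t)
      true_delay = 0.23
      s1 = wavelet + 0.2 * rng.normal(size=t.size)
      s2 = np.interp(t - true_delay, t, wavelet) + 0.2 * rng.normal(size=t.size)

      prior_delay, gamma = 0.20, 0.05                    # traveltime-difference prediction and its scale
      b = 0.2                                            # Laplace scale of the amplitude residuals

      delays = np.arange(-0.5, 0.5, dt)
      log_post = np.empty(delays.size)
      for k, tau in enumerate(delays):
          resid = np.interp(t - tau, t, s1) - s2         # s1 shifted by tau, compared against s2
          log_lik = -np.sum(np.abs(resid)) / b           # Laplace (L1) log-likelihood
          log_prior = -np.log(1 + ((tau - prior_delay) / gamma) ** 2)   # Cauchy prior on the delay
          log_post[k] = log_lik + log_prior

      print("MAP time delay:", delays[np.argmax(log_post)])   # close to the true 0.23 s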

  3. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  4. Optimal estimation for regression models on τ-year survival probability.

    PubMed

    Kwak, Minjung; Kim, Jinseog; Jung, Sin-Ho

    2015-01-01

    A logistic regression method can be applied to regress the τ-year survival probability on covariates if there are no censored observations before time τ. But if some observations are incomplete due to censoring before time τ, then logistic regression cannot be applied. Jung (1996) proposed modifying the score function for logistic regression to accommodate the right-censored observations. His modified score function, motivated by consistent estimation of the regression parameters, reduces to the regular logistic score function if no observations are censored before time τ. In this article, we propose a modification of Jung's estimating function to obtain an estimator of the regression parameters that is optimal in addition to consistent. We prove that the optimal estimator is more efficient than Jung's estimator. This theoretical comparison is illustrated with a real example data analysis and simulations.

  5. Using counts to simultaneously estimate abundance and detection probabilities in a salamander community

    USGS Publications Warehouse

    Dodd, C.K.; Dorazio, R.M.

    2004-01-01

    A critical variable in both ecological and conservation field studies is determining how many individuals of a species are present within a defined sampling area. Labor-intensive techniques such as capture-mark-recapture and removal sampling may provide estimates of abundance, but there are many logistical constraints to their widespread application. Many studies on terrestrial and aquatic salamanders use counts as an index of abundance, assuming that detection remains constant while sampling. If this constancy is violated, determination of detection probabilities is critical to the accurate estimation of abundance. Recently, a model was developed that provides a statistical approach that allows abundance and detection to be estimated simultaneously from spatially and temporally replicated counts. We adapted this model to estimate these parameters for salamanders sampled over a six-year period in area-constrained plots in Great Smoky Mountains National Park. Estimates of salamander abundance varied among years, but annual changes in abundance did not vary uniformly among species. Except for one species, abundance estimates were not correlated with site covariates (elevation/soil and water pH, conductivity, air and water temperature). The uncertainty in the estimates was so large as to make correlations ineffectual in predicting which covariates might influence abundance. Detection probabilities also varied among species and sometimes among years for the six species examined. We found such a high degree of variation in our counts and in estimates of detection among species, sites, and years as to cast doubt upon the appropriateness of using count data to monitor population trends using a small number of area-constrained survey plots. Still, the model provided reasonable estimates of abundance that could make it useful in estimating population size from count surveys.

  6. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
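
    A stripped-down illustration of comparing candidate recurrence models for a dated sequence is sketched below: an exponential (Poisson) model versus a lognormal alternative, compared by AIC on hypothetical inter-event times. It omits the paper's treatment of age-dating uncertainty and open intervals before the first and after the last event.

```python
import numpy as np
from scipy.stats import expon, lognorm

# Hypothetical inter-event times (kyr) between dated mass-transport deposits
dt = np.array([12.0, 35.0, 8.0, 51.0, 22.0, 17.0, 40.0])

def aic(loglik, k):
    return 2 * k - 2 * loglik

# Exponential (Poisson process) model: MLE of the mean return time
mean_rt = dt.mean()
ll_exp = expon.logpdf(dt, scale=mean_rt).sum()

# Lognormal (quasi-periodic) alternative
shape, loc, scale = lognorm.fit(dt, floc=0)
ll_ln = lognorm.logpdf(dt, shape, loc=loc, scale=scale).sum()

print("exponential: mean return time %.1f kyr, AIC %.2f" % (mean_rt, aic(ll_exp, 1)))
print("lognormal:   AIC %.2f" % aic(ll_ln, 2))
```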

  7. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage predictions. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operating current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
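
    A heavily simplified, first-order Thevenin (single RC branch) sketch of terminal-voltage prediction is given below for orientation. The parameter values and OCV curve are hypothetical, and this is not the paper's nth-order model or its neural-network SOE estimator.

```python
import numpy as np

def simulate_terminal_voltage(current, dt, ocv_of_soc, capacity_ah,
                              r0=0.05, r1=0.02, c1=2000.0, soc0=0.9):
    """First-order Thevenin (RC) battery model: V = OCV(SOC) - I*R0 - V1.

    current: discharge-positive current samples (A); dt: time step (s).
    ocv_of_soc: callable mapping SOC in [0, 1] to open-circuit voltage (V).
    Parameter values are illustrative placeholders, not fitted values.
    """
    soc, v1 = soc0, 0.0
    voltages = []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)            # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))              # RC branch dynamics
        voltages.append(ocv_of_soc(soc) - i * r0 - v1)
    return np.array(voltages)

# Hypothetical linear OCV curve and a constant 2 A discharge for one hour
ocv = lambda s: 3.0 + 1.2 * s
v = simulate_terminal_voltage(np.full(3600, 2.0), dt=1.0,
                              ocv_of_soc=ocv, capacity_ah=2.5)
print(v[0], v[-1])
```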

  8. Recent developments on the probable maximum precipitation (PMP) estimation in China

    NASA Astrophysics Data System (ADS)

    Zhan, Daojiang; Zhou, Jinshang

    1984-02-01

    This paper deals with the regional and seasonal characteristics of the rain storms in China that produce the most intense rainfall, and it summarizes the techniques and practices involved in estimating the probable maximum precipitation (PMP). Because of inadequate streamflow data and the abundance of heavy storms in China, it is difficult and dubious to extrapolate a frequency curve to the long return periods required for the spillway of a major structure. In addition, there are often densely populated areas downstream of reservoirs. Thus, in the design criteria for earth and/or rockfill (embankment) dams for reservoirs of major significance, and also for important small dams whose failure could result in fatalities as well as catastrophic damage, the probable maximum precipitation and probable maximum flood should be used. Accordingly, generalized charts of 24-hr point PMP have been developed.

  9. Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous

    USGS Publications Warehouse

    Cohen, Emily B.; Hostelter, Jeffrey A.; Royle, J. Andrew; Marra, Peter P.

    2014-01-01

    Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate

  10. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis

    PubMed Central

    Chiba, Tomoaki; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group’s sales beat GM’s sales, which is a reasonable scenario. PMID:28076383

  11. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.

    PubMed

    Chiba, Tomoaki; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group's sales beat GM's sales, which is a reasonable scenario.

  12. How should detection probability be incorporated into estimates of relative abundance?

    USGS Publications Warehouse

    MacKenzie, D.I.; Kendall, W.L.

    2002-01-01

    Determination of the relative abundance of two populations, separated by time or space, is of interest in many ecological situations. We focus on two estimators of relative abundance, which assume that the probability that an individual is detected at least once in the survey is either equal or unequal for the two populations. We present three methods for incorporating the collected information into our inference. The first method, proposed previously, is a traditional hypothesis test for evidence that detection probabilities are unequal. However, we feel that, a priori, it is more likely that detection probabilities are actually different; hence, the burden of proof should be shifted, requiring evidence that detection probabilities are practically equivalent. The second method we present, equivalence testing, is one approach to doing so. Third, we suggest that model averaging could be used by combining the two estimators according to derived model weights. These differing approaches are applied to a mark-recapture experiment on Nuttall's cottontail rabbit (Sylvilagus nuttallii) conducted in central Oregon during 1974 and 1975, which has been previously analyzed by other authors.

  13. PIGS: improved estimates of identity-by-descent probabilities by probabilistic IBD graph sampling.

    PubMed

    Park, Danny S; Baran, Yael; Hormozdiari, Farhad; Eng, Celeste; Torgerson, Dara G; Burchard, Esteban G; Zaitlen, Noah

    2015-01-01

    Identifying segments in the genome of different individuals that are identical-by-descent (IBD) is a fundamental element of genetics. IBD data are used for numerous applications including demographic inference, heritability estimation, and mapping disease loci. Simultaneous detection of IBD over multiple haplotypes has proven to be computationally difficult. To overcome this, many state-of-the-art methods estimate the probability of IBD between each pair of haplotypes separately. While computationally efficient, these methods fail to leverage the clique structure of IBD, resulting in less powerful IBD identification, especially for small IBD segments.

  14. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of earthquakes of different magnitudes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and uses best-estimate values of the probability distribution of different magnitude earthquakes recurring on a fault, taken from literature sources. Our study aims to apply this model to the Taipei metropolitan area, with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77%, and within 300 years at 21.22%. These estimates increase significantly for a magnitude 6 earthquake: the chance of one occurring within the next 20 years is estimated at 3.61%, within 79 years at 13.54%, and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is comparable to the risk of suffering from heart disease or other health ailments.
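
    For orientation, a plain homogeneous-Poisson occurrence probability, P = 1 - exp(-t/T), is computed below for the stated 543-year return period. The study's revised model incorporates a probability distribution over recurring magnitudes, so its quoted values need not coincide with this simple formula.

```python
import math

def poisson_occurrence_probability(return_period_years, window_years):
    """P(at least one event in the window) for a homogeneous Poisson process."""
    return 1.0 - math.exp(-window_years / return_period_years)

for window in (20, 79, 300):
    p = poisson_occurrence_probability(543.0, window)
    print(f"{window:3d} yr window: {100 * p:5.2f} %")
```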

  15. a Parametric Study of Eddy Current Response for Probability of Detection Estimation

    NASA Astrophysics Data System (ADS)

    Hoppe, W. C.

    2010-02-01

    In the study reported here, historical Probability of Detection (POD) data for eddy current inspections were analyzed using an extension of the "a-hat versus a" model in order to better account for known crack variables and thereby better separate system and crack factors influencing the POD parameters. Intriguing insights have been gained in the process suggesting a simple model for POD estimation. The parametric model will be presented including results of the study and suggestions for further research.

  16. Implementation of Subjective Probability Estimates in Army Intelligence Procedures: A Critical Review of Research Findings

    DTIC Science & Technology

    1980-03-01

    H. Phelps, Stanley M. Halpin, Edgar M. Johnson, and Franklin L. Moses; Human Factors Technical Area, U.S. Army Research Institute for the Behavioral and Social Sciences. ...explored by relating the psychological research on the use of subjective probability estimates with the need of Army intelligence analysts to

  17. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. © 2006 American Statistical Association and the International Biometric Society.
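
    The individual-heterogeneity component of a beta-binomial sighting model can be sketched as below: resighting counts over k occasions are fit to a beta-binomial by maximum likelihood, giving a mean sighting probability and a heterogeneity measure. The counts are hypothetical, and this is not the full BBE abundance estimator.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def beta_binom_logpmf(y, k, a, b):
    """log P(y sightings out of k occasions) under a Beta(a, b) mixture on p."""
    return (gammaln(k + 1) - gammaln(y + 1) - gammaln(k - y + 1)
            + betaln(y + a, k - y + b) - betaln(a, b))

# Hypothetical sighting counts of marked animals over k = 5 resighting occasions
k = 5
y = np.array([0, 1, 1, 2, 3, 3, 4, 5, 2, 0, 1, 4])

def nll(log_params):
    a, b = np.exp(log_params)
    return -beta_binom_logpmf(y, k, a, b).sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print("mean sighting probability:", a_hat / (a_hat + b_hat))
print("overdispersion (a + b, smaller = more heterogeneity):", a_hat + b_hat)
```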

  18. Maximum a posteriori probability estimation for localizing damage using ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Flynn, Eric B.; Todd, Michael D.; Wilcox, Paul D.; Drinkwater, Bruce W.; Croxford, Anthony J.

    2011-04-01

    Presented is an approach to damage localization for guided wave structural health monitoring (GWSHM) in plate-like structures. In this mode of SHM, transducers excite and sense guided waves in order to detect and characterize the presence of damage. The premise of the presented localization approach is simple: use as the estimated damage location the point on the structure with the maximum a posteriori probability (MAP) of being the location of damage (i.e., the most probable location given a set of sensor measurements). This is accomplished by constructing a minimally-informed statistical model of the GWSHM process. Parameters of the model which are unknown, such as scattered wave amplitude, are assigned non-informative Bayesian prior distributions and averaged out of the a posteriori probability calculation. Using an ensemble of measurements from an instrumented plate with stiffening stringers, the performance of the MAP estimate is compared to that of what were found to be the two most effective previously reported algorithms. The MAP estimate proved superior in nearly all test cases and was particularly effective in localizing damage using very sparse arrays of as few as three transducers.

  19. Estimating state-transition probabilities for unobservable states using capture-recapture/resighting data

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.

    2002-01-01

    Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.

  20. Calculation of the number of Monte Carlo histories for a planetary protection probability of impact estimation

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack

    2016-07-01

    Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean for a Poisson distribution. F. Garwood [Garwood, F. (1936), "Fiducial limits for the Poisson distribution," Biometrika 28, 437-442] published an appropriate method that uses the inverse of the chi-squared function (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n). The resulting formula for the upper and lower limits of the mean μ with two-tailed probability 1-α depends on the LOC α and an estimated number of "successes" n. In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
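
    Assuming the standard Garwood chi-squared limits are what is meant, a sketch of the one-sided upper limit and the implied number of histories is given below. The specification value, hit allowance, and LOC used are illustrative only, not the abstract's values.

```python
from scipy.stats import chi2

def poisson_upper_limit(n_hits, confidence=0.999):
    """One-sided upper confidence limit on a Poisson mean given n observed hits
    (Garwood 1936): mu_upper = chi2.ppf(confidence, 2 * n + 2) / 2."""
    return chi2.ppf(confidence, 2 * n_hits + 2) / 2.0

def histories_needed(max_p_impact, expected_max_hits, confidence=0.999):
    """Number of MC histories so that, even with expected_max_hits hits, the
    upper limit on P_I stays below the specification value max_p_impact."""
    return poisson_upper_limit(expected_max_hits, confidence) / max_p_impact

# Example: spec P_I <= 1e-4, allow for up to 3 hits, 99.9% LOC (all illustrative)
print(int(round(histories_needed(1e-4, 3))))
```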

  1. Alternative estimate of source distribution in microbial source tracking using posterior probabilities.

    PubMed

    Greenberg, Joshua; Price, Bertram; Ware, Adam

    2010-04-01

    Microbial source tracking (MST) is a procedure used to determine the relative contributions of humans and animals to fecal microbial contamination of surface waters in a given watershed. Studies of MST methodology have focused on optimizing sampling, laboratory, and statistical analysis methods in order to improve the reliability of determining which sources contributed most to surface water fecal contamination. The usual approach for estimating a source distribution of microbial contamination is to classify water sample microbial isolates into discrete source categories and calculate the proportion of these isolates in each source category. The set of proportions is an estimate of the contaminant source distribution. In this paper we propose and compare an alternative method for estimating a source distribution: averaging posterior probabilities of source identity across isolates. We conducted a Monte Carlo simulation covering a wide variety of watershed scenarios to compare the two methods. The results show that averaging source posterior probabilities across isolates leads to more accurate source distribution estimates than the proportions obtained from classification.
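
    The two estimators being compared can be shown side by side on hypothetical posterior probabilities, as in the sketch below.

```python
import numpy as np

# Hypothetical posterior probabilities of source identity for 6 water-sample
# isolates over three sources (human, cattle, wildlife); rows sum to 1.
posteriors = np.array([
    [0.55, 0.30, 0.15],
    [0.48, 0.40, 0.12],
    [0.20, 0.70, 0.10],
    [0.35, 0.33, 0.32],
    [0.60, 0.25, 0.15],
    [0.15, 0.45, 0.40],
])

# Classification approach: assign each isolate to its most probable source,
# then report the proportion of isolates in each source category.
labels = posteriors.argmax(axis=1)
proportions = np.bincount(labels, minlength=3) / len(labels)

# Alternative approach described in the abstract: average posteriors across isolates.
averaged = posteriors.mean(axis=0)

print("classification proportions:", proportions)
print("averaged posteriors:       ", averaged)
```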

  2. Estimates of EPSP amplitude based on changes in motoneuron discharge rate and probability.

    PubMed

    Powers, Randall K; Türker, K S

    2010-10-01

    When motor units are discharging tonically, transient excitatory synaptic inputs produce an increase in the probability of spike occurrence and also increase the instantaneous discharge rate. Several researchers have proposed that these induced changes in discharge rate and probability can be used to estimate the amplitude of the underlying excitatory post-synaptic potential (EPSP). We tested two different methods of estimating EPSP amplitude by comparing the amplitude of simulated EPSPs with their effects on the discharge of rat hypoglossal motoneurons recorded in an in vitro brainstem slice preparation. The first estimation method (simplified-trajectory method) is based on the assumptions that the membrane potential trajectory between spikes can be approximated by a 10 mV post-spike hyperpolarization followed by a linear rise to the next spike and that EPSPs sum linearly with this trajectory. We hypothesized that this estimation method would not be accurate due to interspike variations in membrane conductance and firing threshold that are not included in the model and that an alternative method based on estimating the effective distance to threshold would provide more accurate estimates of EPSP amplitude. This second method (distance-to-threshold method) uses interspike interval statistics to estimate the effective distance to threshold throughout the interspike interval and incorporates this distance-to-threshold trajectory into a threshold-crossing model. We found that the first method systematically overestimated the amplitude of small (<5 mV) EPSPs and underestimated the amplitude of large (>5 mV) EPSPs. For large EPSPs, the degree of underestimation increased with increasing background discharge rate. Estimates based on the second method were more accurate for small EPSPs than those based on the first model, but estimation errors were still large for large EPSPs. These errors were likely due to two factors: (1) the distance to threshold can only be

  3. A flexible parametric approach for estimating continuous-time inverse probability of treatment and censoring weights.

    PubMed

    Saarela, Olli; Liu, Zhihui Amy

    2016-10-15

    Marginal structural Cox models are used for quantifying marginal treatment effects on outcome event hazard function. Such models are estimated using inverse probability of treatment and censoring (IPTC) weighting, which properly accounts for the impact of time-dependent confounders, avoiding conditioning on factors on the causal pathway. To estimate the IPTC weights, the treatment assignment mechanism is conventionally modeled in discrete time. While this is natural in situations where treatment information is recorded at scheduled follow-up visits, in other contexts, the events specifying the treatment history can be modeled in continuous time using the tools of event history analysis. This is particularly the case for treatment procedures, such as surgeries. In this paper, we propose a novel approach for flexible parametric estimation of continuous-time IPTC weights and illustrate it in assessing the relationship between metastasectomy and mortality in metastatic renal cell carcinoma patients. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Estimation of probable maximum precipitation for catchments in eastern India by a generalized method

    NASA Astrophysics Data System (ADS)

    Rakhecha, P. R.; Mandal, B. N.; Kulkarni, A. K.; Deshpande, N. R.

    1995-03-01

    A generalized method to estimate the probable maximum precipitation (PMP) has been developed for catchments in eastern India (80° E, 18° N) by pooling together all the major rainstorms that have occurred in this area. The areal raindepths of these storms are normalized for factors such as storm dew point temperature, distance of the storm from the coast, topographic effects and any intervening mountain barriers between the storm area and the moisture source. The normalized values are then applied, with appropriate adjustment factors in estimating PMP raindepths, to the Subarnarekha river catchment (up to the Chandil dam site) with an area of 5663 km2. The PMP raindepths for 1, 2 and 3 days were found to be roughly 53 cm, 78 cm and 98 cm, respectively. It is expected that the application of the generalized method proposed here will give more reliable estimates of PMP for different duration rainfall events.

  5. Tips for Teachers of Evidence-based Medicine: Clinical Prediction Rules (CPRs) and Estimating Pretest Probability

    PubMed Central

    McGinn, Thomas; Jervis, Ramiro; Wisnivesky, Juan; Keitz, Sheri

    2008-01-01

    Background Clinical prediction rules (CPR) are tools that clinicians can use to predict the most likely diagnosis, prognosis, or response to treatment in a patient based on individual characteristics. CPRs attempt to standardize, simplify, and increase the accuracy of clinicians’ diagnostic and prognostic assessments. The teaching tips series is designed to give teachers advice and materials they can use to attain specific educational objectives. Educational Objectives In this article, we present 3 teaching tips aimed at helping clinical learners use clinical prediction rules and more accurately assess pretest probability in everyday practice. The first tip is designed to demonstrate variability in physician estimation of pretest probability. The second tip demonstrates how the estimate of pretest probability influences the interpretation of diagnostic tests and patient management. The third tip exposes learners to various examples and different types of Clinical Prediction Rules (CPR) and how to apply them in practice. Pilot Testing We field tested all 3 tips with 16 learners, a mix of interns and senior residents. Teacher preparatory time was approximately 2 hours. The field test utilized a board and a data projector; 3 handouts were prepared. The tips were felt to be clear and the educational objectives reached. Potential teaching pitfalls were identified. Conclusion Teaching with these tips will help physicians appreciate the importance of applying evidence to their everyday decisions. In 2 or 3 short teaching sessions, clinicians can also become familiar with the use of CPRs in applying evidence consistently in everyday practice. PMID:18491194

  6. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
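
    A toy fault-tree combination, with entirely hypothetical component fragilities and scenario rates, illustrates the primary-plus-backup calculation described here; it is not the peer-reviewed component library or the authors' trees.

```python
# A minimal fault-tree sketch (hypothetical numbers): the probability that both
# a primary and a backup facility fail in the same earthquake, combining
# component fragilities with OR gates (a facility fails if any critical
# component fails) and an AND gate across facilities, assuming independence
# given the shaking scenario.

def or_gate(probabilities):
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Annual probability of each shaking scenario and conditional component
# failure probabilities (structure, power, cooling) at each facility.
scenarios = [
    {"annual_p": 0.01,  "primary": [0.02, 0.05, 0.03], "backup": [0.01, 0.04, 0.02]},
    {"annual_p": 0.002, "primary": [0.15, 0.30, 0.20], "backup": [0.10, 0.25, 0.15]},
]

annual_p_both_down = sum(
    s["annual_p"] * or_gate(s["primary"]) * or_gate(s["backup"])
    for s in scenarios
)
print(f"annual probability both facilities are inoperative: {annual_p_both_down:.2e}")
```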

  7. A generalised technique for the estimation of probable maximum precipitation in India

    NASA Astrophysics Data System (ADS)

    Rakhecha, P. R.; Kennedy, M. R.

    1985-06-01

    In this paper a version of a generalised method of estimating probable maximum precipitation (PMP) is applied to the catchments of four large dams in India. The value of a secure dam is high both in terms of human life and in economic terms. Reliable estimates of PMP are required in estimating the design flood for spillways of large earth and rockfill dams. Estimates of PMP obtained using the traditional method of moisture maximisation and storm transposition can be unreliable as highly efficient rain storms may not be represented in the rainfall records of an area. Generalised methods of (calculating) PMP are used to obtain reliable estimates of PMP and also to give estimates which are consistent over a region. This is done by pooling together all the rainfall data from a very large area. The rainfall depths are normalised for such factors as storm dew-point temperature, distance of the storm from the coast, topographic effects and any intervening mountain barriers between the rainfall area and the moisture source. These normalised values can then be applied to any individual catchment, with the appropriate adjustment factors.

  8. Empirical comparison of uniform and non-uniform probability sampling for estimating numbers of red-cockaded woodpecker colonies

    USGS Publications Warehouse

    Geissler, P.H.; Moyer, L.M.

    1983-01-01

    Four sampling and estimation methods for estimating the number of red-cockaded woodpecker colonies on National Forests in the Southeast were compared, using samples chosen from simulated populations based on the observed sample. The methods included (1) simple random sampling without replacement using a mean per sampling unit estimator, (2) simple random sampling without replacement with a ratio per pine area estimator, (3) probability proportional to 'size' sampling with replacement, and (4) probability proportional to 'size' without replacement using Murthy's estimator. The survey sample of 274 National Forest compartments (1000 acres each) constituted a superpopulation from which simulated stratum populations were selected with probability inversely proportional to the original probability of selection. Compartments were originally sampled with probabilities proportional to the probabilities that the compartments contained woodpeckers ('size'). These probabilities were estimated with a discriminant analysis based on tree species and tree age. The ratio estimator would have been the best estimator for this survey based on the mean square error. However, if more accurate predictions of woodpecker presence had been available, Murthy's estimator would have been the best. A subroutine to calculate Murthy's estimates is included; it is computationally feasible to analyze up to 10 samples per stratum.

  9. A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.

    PubMed

    Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie

    2017-01-01

    Osteosarcomas (OSs) represent a huge challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that not only tumor-associated factors but also host-associated factors, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probability of metastases for OS patients. The clinical predictive model is described by the following equations: probability of developing metastases = e^x/(1 + e^x), x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), where e is the base of the natural logarithm and the assignment to each of the 2 variables is 1 if the ratio >1 (otherwise 0). The calculated area under the receiver-operating characteristic curve of 0.793 indicated good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities that we generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which accounts for the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve outcomes.
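
    The published equation can be applied directly. The sketch below encodes the indicator coding stated in the abstract (each variable is 1 if the corresponding ratio exceeds 1, else 0); the input values shown are illustrative only.

```python
import math

def metastasis_pretest_probability(monocyte_ratio, nlr_ratio):
    """Pretest probability of metastasis from the published logistic equation:
    x = -2.150 + 1.680*I(monocyte ratio > 1) + 1.533*I(NLR ratio > 1),
    probability = e^x / (1 + e^x)."""
    x = -2.150 + 1.680 * (monocyte_ratio > 1) + 1.533 * (nlr_ratio > 1)
    return math.exp(x) / (1.0 + math.exp(x))

# Worked examples for the four possible indicator combinations
for mono, nlr in [(0.8, 0.9), (1.2, 0.9), (0.8, 1.3), (1.2, 1.3)]:
    print(mono, nlr, round(metastasis_pretest_probability(mono, nlr), 3))
```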

  10. A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma

    PubMed Central

    Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie

    2017-01-01

    Osteosarcomas (OSs) represent a huge challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that not only tumor-associated factors but also host-associated factors, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probability of metastases for OS patients. The clinical predictive model is described by the following equations: probability of developing metastases = e^x/(1 + e^x), x = −2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), where e is the base of the natural logarithm and the assignment to each of the 2 variables is 1 if the ratio >1 (otherwise 0). The calculated area under the receiver-operating characteristic curve of 0.793 indicated good accuracy of this model (95% CI, 0.740–0.845). The predicted probabilities that we generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684–0.803). The present model, which accounts for the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve outcomes. PMID:28099353

  11. Estimating survival and breeding probability for pond-breeding amphibians: a modified robust design

    USGS Publications Warehouse

    Bailey, L.L.; Kendall, W.L.; Church, D.R.; Wilbur, H.M.

    2004-01-01

    Many studies of pond-breeding amphibians involve sampling individuals during migration to and from breeding habitats. Interpreting population processes and dynamics from these studies is difficult because (1) only a proportion of the population is observable each season, while an unknown proportion remains unobservable (e.g., non-breeding adults) and (2) not all observable animals are captured. Imperfect capture probability can be easily accommodated in capture-recapture models, but temporary transitions between observable and unobservable states, often referred to as temporary emigration, is known to cause problems in both open- and closed-population models. We develop a multistate mark-recapture (MSMR) model, using an open-robust design that permits one entry and one exit from the study area per season. Our method extends previous temporary emigration models (MSMR with an unobservable state) in two ways. First, we relax the assumption of demographic closure (no mortality) between consecutive (secondary) samples, allowing estimation of within-pond survival. Also, we add the flexibility to express survival probability of unobservable individuals (e.g., 'non-breeders') as a function of the survival probability of observable animals while in the same, terrestrial habitat. This allows for potentially different annual survival probabilities for observable and unobservable animals. We apply our model to a relictual population of eastern tiger salamanders (Ambystoma tigrinum tigrinum). Despite small sample sizes, demographic parameters were estimated with reasonable precision. We tested several a priori biological hypotheses and found evidence for seasonal differences in pond survival. Our methods could be applied to a variety of pond-breeding species and other taxa where individuals are captured entering or exiting a common area (e.g., spawning or roosting area, hibernacula).

  12. Estimating occurrence and detection probabilities for stream-breeding salamanders in the Gulf Coastal Plain

    USGS Publications Warehouse

    Lamb, Jennifer Y.; Waddle, J. Hardin; Qualls, Carl P.

    2017-01-01

    Large gaps exist in our knowledge of the ecology of stream-breeding plethodontid salamanders in the Gulf Coastal Plain. Data describing where these salamanders are likely to occur along environmental gradients, as well as their likelihood of detection, are important for the prevention and management of amphibian declines. We used presence/absence data from leaf litter bag surveys and a hierarchical Bayesian multispecies single-season occupancy model to estimate the occurrence of five species of plethodontids across reaches in headwater streams in the Gulf Coastal Plain. Average detection probabilities were high (range = 0.432–0.942) and unaffected by sampling covariates specific to the use of litter bags (i.e., bag submergence, sampling season, in-stream cover). Estimates of occurrence probabilities differed substantially between species (range = 0.092–0.703) and were influenced by the size of the upstream drainage area and by the maximum proportion of the reach that dried. The effects of these two factors were not equivalent across species. Our results demonstrate that hierarchical multispecies models successfully estimate occurrence parameters for both rare and common stream-breeding plethodontids. The resulting models clarify how species are distributed within stream networks, and they provide baseline values that will be useful in evaluating the conservation statuses of plethodontid species within lotic systems in the Gulf Coastal Plain.

  13. Estimating superpopulation size and annual probability of breeding for pond-breeding salamanders

    USGS Publications Warehouse

    Kinkead, K.E.; Otis, D.L.

    2007-01-01

    It has long been accepted that amphibians can skip breeding in any given year, and environmental conditions act as a cue for breeding. In this paper, we quantify temporary emigration or nonbreeding probability for mole and spotted salamanders (Ambystoma talpoideum and A. maculatum). We estimated that 70% of mole salamanders may skip breeding during an average rainfall year and 90% may skip during a drought year. Spotted salamanders may be more likely to breed, with only 17% avoiding the breeding pond during an average rainfall year. We illustrate how superpopulations can be estimated using temporary emigration probability estimates. The superpopulation is the total number of salamanders associated with a given breeding pond. Although most salamanders stay within a certain distance of a breeding pond for the majority of their life spans, it is difficult to determine true overall population sizes for a given site if animals are only captured during a brief time frame each year with some animals unavailable for capture at any time during a given year. © 2007 by The Herpetologists' League, Inc.

  14. Modeling and estimation of stage-specific daily survival probabilities of nests

    USGS Publications Warehouse

    Stanley, T.R.

    2000-01-01

    In studies of avian nesting success, it is often of interest to estimate stage-specific daily survival probabilities of nests. When data can be partitioned by nesting stage (e.g., incubation stage, nestling stage), piecewise application of the Mayfield method or Johnson's method is appropriate. However, when the data contain nests where the transition from one stage to the next occurred during the interval between visits, piecewise approaches are inappropriate. In this paper, I present a model that allows joint estimation of stage-specific daily survival probabilities even when the time of transition between stages is unknown. The model allows interval lengths between visits to nests to vary, and the exact time of failure of nests does not need to be known. The performance of the model at various sample sizes and interval lengths between visits was investigated using Monte Carlo simulations, and it was found that the model performed quite well: bias was small and confidence-interval coverage was at the nominal 95% rate. A SAS program for obtaining maximum likelihood estimates of parameters, and their standard errors, is provided in the Appendix.

  15. A logistic regression equation for estimating the probability of a stream in Vermont having intermittent flow

    USGS Publications Warehouse

    Olson, Scott A.; Brouillette, Michael C.

    2006-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.
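
    A generic sketch of fitting such a logistic model and applying the 0.5 cutpoint is given below on simulated basin characteristics. The covariates and coefficients are hypothetical and are not the published Vermont equation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 682

# Hypothetical basin characteristics (not the published Vermont predictors'
# fitted values): drainage area (mi^2), site elevation (ft), relief/perimeter
# ratio, and percent well-drained soils.
X = np.column_stack([
    rng.uniform(0.05, 5.0, n),
    rng.uniform(300, 3000, n),
    rng.uniform(0.01, 0.2, n),
    rng.uniform(0, 100, n),
])
# Simulated intermittent (1) / perennial (0) status, for illustration only
logit = 1.5 - 1.2 * X[:, 0] + 0.001 * X[:, 1] - 5.0 * X[:, 2] - 0.02 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
prob_intermittent = model.predict_proba(X)[:, 1]

# Apply the 0.5 cutpoint: above it the site is called intermittent
predicted = (prob_intermittent >= 0.5).astype(int)
print("percent correctly classified:", 100 * np.mean(predicted == y))
```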

  16. Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation.

    PubMed

    Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M

    2014-12-01

    We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.

  17. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between the neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies in terms of Theta. Techniques for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration are developed.

  18. Estimating the probability of allelic drop-out of STR alleles in forensic genetics.

    PubMed

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt; Morling, Niels

    2009-09-01

    In crime cases with available DNA evidence, the amount of DNA is often sparse due to the setting of the crime. In such cases, allelic drop-out of one or more true alleles in STR typing is possible. We present a statistical model for estimating the per locus and overall probability of allelic drop-out using the results of all STR loci in the case sample as reference. The methodology of logistic regression is appropriate for this analysis, and we demonstrate how to incorporate this in a forensic genetic framework.

  19. Southern California regional earthquake probability estimated from continuous GPS geodetic data

    NASA Astrophysics Data System (ADS)

    Anderson, G.

    2002-12-01

    Current seismic hazard estimates are primarily based on seismic and geologic data, but geodetic measurements from large, dense arrays such as the Southern California Integrated GPS Network (SCIGN) can also be used to estimate earthquake probabilities and seismic hazard. Geodetically-derived earthquake probability estimates are particularly important in regions with poorly-constrained fault slip rates. In addition, they are useful because such estimates come with well-determined error bounds. Long-term planning is underway to incorporate geodetic data in the next generation of United States national seismic hazard maps, and techniques for doing so need further development. I present a new method for estimating the expected rates of earthquakes using strain rates derived from geodetic station velocities. I compute the strain rates using a new technique devised by Y. Hsu and M. Simons [Y. Hsu and M. Simons, pers. comm.], which computes the horizontal strain rate tensor (ε̇) at each node of a pre-defined regular grid, using all geodetic velocities in the data set weighted by distance and estimated uncertainty. In addition, they use a novel weighting to handle the effects of station distribution: they divide the region covered by the geodetic network into Voronoi cells using the station locations and weight each station's contribution to ε̇ by the area of the Voronoi cell centered at that station. I convert ε̇ into the equivalent seismic moment rate density (Ṁ) using the method of Savage and Simpson [1997] and maximum seismogenic depths estimated from regional seismicity; Ṁ gives the expected rate of seismic moment release in a region, based on the geodetic strain rates. Assuming the seismicity in the given region follows a Gutenberg-Richter relationship, I convert Ṁ to an expected rate of earthquakes of a given magnitude. I will present results of a study applying this method to data from the SCIGN array to estimate
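
    A sketch of the geodetic moment-rate step is given below, using the commonly cited form of the Savage and Simpson [1997] relation and the standard moment-magnitude scaling; the grid-cell size, seismogenic depth, and strain rates are hypothetical, and this is not the abstract's full probability calculation.

```python
def geodetic_moment_rate(eps1_dot, eps2_dot, area_m2, seismogenic_depth_m,
                         shear_modulus=3.0e10):
    """Seismic moment rate from principal horizontal strain rates (1/yr),
    using the commonly cited Savage and Simpson (1997) relation (assumed form):
    M0_dot = 2 * mu * H * A * max(|e1|, |e2|, |e1 + e2|)."""
    strain_term = max(abs(eps1_dot), abs(eps2_dot), abs(eps1_dot + eps2_dot))
    return 2.0 * shear_modulus * seismogenic_depth_m * area_m2 * strain_term

def moment_of_magnitude(mw):
    """Scalar seismic moment (N*m) for moment magnitude Mw (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.05)

# Hypothetical cell: 50 km x 50 km, 15 km seismogenic depth, ~0.1 microstrain/yr
m0_dot = geodetic_moment_rate(8e-8, -2e-8, 50e3 * 50e3, 15e3)
print("moment rate (N*m/yr):", m0_dot)
print("equivalent M6.5 earthquakes per century:",
      100.0 * m0_dot / moment_of_magnitude(6.5))
```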

  20. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion that deforms the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew 1, moving through a primary or standard translation path while transferring large-volume equipment, impacts stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainties in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a

  1. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003–04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1–58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2–32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  2. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2010-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  3. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil

    2016-04-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, an innovative statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use, accurate and can be applied to any region and river. Varouchakis, E. A., Giannakis, G. V., Lilli, M. A., Ioannidou, E., Nikolaidis, N. P., and Karatzas, G. P.: Development of a statistical tool for the estimation of riverbank erosion probability, SOIL (EGU), in print, 2016.
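
    As a rough illustration of the locally weighted variant named above, the sketch below fits a logistic model around each query location with Gaussian kernel weights. The synthetic predictors (standing in for geomorphological/hydrological variables), the bandwidth, and the data are assumptions, not the authors' setup.

```python
# Sketch of locally weighted logistic regression for erosion presence/absence:
# for each query point, fit a logistic model with sample weights that decay
# with distance from the query (Gaussian kernel, bandwidth tau).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # e.g. standardized bank slope, flow shear
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

def lwlr_predict(x_query, X, y, tau=1.0):
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * tau ** 2))      # kernel weights centered on the query
    model = LogisticRegression().fit(X, y, sample_weight=w)
    return model.predict_proba(x_query.reshape(1, -1))[0, 1]

print("P(erosion) at query:", lwlr_predict(np.array([0.8, -0.2]), X, y))
```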

  4. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
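
    A highly simplified way to reproduce the kind of single-core sampling probability quoted above is a Monte Carlo draw of needle delivery errors: treat the tumor as a sphere centered on the target, draw isotropic 3D Gaussian errors with 3.5 mm RMS magnitude, and count hits. Ignoring the 18 mm core length, registration error, and irregular tumor shapes are simplifications relative to the paper.

```python
# Monte Carlo sketch: probability that a single biopsy sample lands inside a
# spherical tumor of radius r, given isotropic 3D Gaussian delivery error with
# RMS magnitude 3.5 mm. A simplified illustration, not the paper's method.
import numpy as np

def sampling_probability(tumor_radius_mm, rms_error_mm=3.5, n=200_000, seed=1):
    sigma = rms_error_mm / np.sqrt(3.0)     # per-axis std so that E[|e|^2] = rms^2
    e = np.random.default_rng(seed).normal(scale=sigma, size=(n, 3))
    hits = np.linalg.norm(e, axis=1) <= tumor_radius_mm
    return hits.mean()

for r in (2.0, 4.0, 6.0):
    print(f"tumor radius {r} mm -> P(sample) ~ {sampling_probability(r):.2f}")
```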

  5. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage.

  6. Bayesian estimation of the probability of asbestos exposure from lung fiber counts.

    PubMed

    Weichenthal, Scott; Joseph, Lawrence; Bélisle, Patrick; Dufresne, André

    2010-06-01

    Asbestos exposure is a well-known risk factor for various lung diseases, and when they occur, workmen's compensation boards need to make decisions concerning the probability that the cause is work-related. In the absence of a definitive work history, measures of short and long asbestos fibers as well as counts of asbestos bodies in the lung can be used as diagnostic tests for asbestos exposure. Typically, data from one or more lung samples are available to estimate the probability of asbestos exposure, often by comparing the values with those from a reference nonexposed population. As there is no gold standard measure, we explore a variety of latent class models that take into account the mixed discrete/continuous nature of the data, that each subject may provide data from more than one lung sample, and that the within-subject results across different samples may be correlated. Our methods can be useful to compensation boards in providing individual-level probabilities of exposure based on available data, to researchers who are studying the test properties for the various measures used in this area, and more generally, to other test situations with similar data structure.
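
    As a toy illustration of the Bayes-rule step underlying such models (not the latent class model of the paper, which handles mixed discrete/continuous data and within-subject correlation), the sketch below computes a posterior probability of exposure from a single asbestos-body count; the Poisson rates and prior are made-up assumptions.

```python
# Toy Bayes-rule calculation: posterior probability of asbestos exposure given
# an asbestos-body count, assuming Poisson counts for exposed vs. reference
# lungs. Rates and prior are invented for illustration only.
from scipy.stats import poisson

prior_exposed = 0.3                    # assumed prior probability of exposure
rate_exposed, rate_ref = 12.0, 1.5     # assumed mean counts per sample

def posterior_exposed(count):
    like_e = poisson.pmf(count, rate_exposed) * prior_exposed
    like_n = poisson.pmf(count, rate_ref) * (1.0 - prior_exposed)
    return like_e / (like_e + like_n)

for c in (0, 3, 8):
    print(f"count={c}: P(exposed | count) = {posterior_exposed(c):.3f}")
```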

  7. Towards Practical, Real-Time Estimation of Spatial Aftershock Probabilities: A Feasibility Study in Earthquake Hazard

    NASA Astrophysics Data System (ADS)

    Morrow, P.; McCloskey, J.; Steacy, S.

    2001-12-01

    It is now widely accepted that the goal of deterministic earthquake prediction is unattainable in the short term and may even be forbidden by nonlinearity in the generating dynamics. This nonlinearity does not, however, preclude the estimation of earthquake probability and, in particular, how this probability might change in space and time; earthquake hazard estimation might be possible in the absence of earthquake prediction. Recently, there has been a major development in the understanding of stress triggering of earthquakes which allows accurate calculation of the spatial variation of aftershock probability following any large earthquake. Over the past few years this Coulomb stress technique (CST) has been the subject of intensive study in the geophysics literature and has been extremely successful in explaining the spatial distribution of aftershocks following several major earthquakes. The power of current micro-computers, the great number of local, telemetered seismic networks, the rapid acquisition of data from satellites coupled with the speed of modern telecommunications and data transfer all mean that it may be possible that these new techniques could be applied in a forward sense. In other words, it is theoretically possible today to make predictions of the likely spatial distribution of aftershocks in near-real-time following a large earthquake. Approximate versions of such predictions could be available within, say, 0.1 days after the mainshock and might be continually refined and updated over the next 100 days. The European Commission has recently provided funding for a project to assess the extent to which it is currently possible to move CST predictions into a practically useful time frame so that low-confidence estimates of aftershock probability might be made within a few hours of an event and improved in near-real-time, as data of better quality become available over the following days to tens of days. Specifically, the project aims to assess the

  8. Probability density estimation using isocontours and isosurfaces: applications to information-theoretic image registration.

    PubMed

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2009-03-01

    We present a new, geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels, and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc. under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume datasets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows) which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation.
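
    The closed-form area computation is not reproduced here, but the underlying idea can be approximated numerically: treat the image as a continuous (interpolated) surface and estimate the area lying between nearby intensity levels by densely sampling that surface. The sketch below does this by bilinear upsampling of a synthetic image; it is only a crude stand-in for the authors' analytic approach.

```python
# Crude numerical stand-in for the isocontour idea: bilinearly interpolate the
# image to a fine grid and histogram the interpolated intensities, which
# approximates the area between nearby isocontours of the image surface.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
image = rng.random((64, 64))                       # synthetic test image

def interpolated_density(img, n_bins=64, upsample=8):
    fine = zoom(img, upsample, order=1)            # bilinear interpolation
    hist, edges = np.histogram(fine.ravel(), bins=n_bins, density=True)
    return hist, edges

pdf, edges = interpolated_density(image)
print("integral of estimated pdf:", np.sum(pdf * np.diff(edges)))  # ~1.0
```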

  9. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  10. Probability estimation with machine learning methods for dichotomous and multicategory outcome: applications.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Diener, Hans-Christian; Holste, Theresa; Weimar, Christian; König, Inke R; Ziegler, Andreas

    2014-07-01

    Machine learning methods are applied to three different large datasets, all dealing with probability estimation problems for dichotomous or multicategory data. Specifically, we investigate k-nearest neighbors, bagged nearest neighbors, random forests for probability estimation trees, and support vector machines with the kernels of Bessel, linear, Laplacian, and radial basis type. Comparisons are made with logistic regression. The dataset from the German Stroke Study Collaboration with dichotomous and three-category outcome variables allows, in particular, for temporal and external validation. The other two datasets are freely available from the UCI learning repository and provide dichotomous outcome variables. One of them, the Cleveland Clinic Foundation Heart Disease dataset, uses data from one clinic for training and from three clinics for external validation, while the other, the thyroid disease dataset, allows for temporal validation by separating data into training and test data by date of recruitment into study. For dichotomous outcome variables, we use receiver operating characteristics, areas under the curve values with bootstrapped 95% confidence intervals, and Hosmer-Lemeshow-type figures as comparison criteria. For dichotomous and multicategory outcomes, we calculated bootstrap Brier scores with 95% confidence intervals and also compared them through bootstrapping. In a supplement, we provide R code for performing the analyses and for random forest analyses in Random Jungle, version 2.1.0. The learning machines show promising performance over all constructed models. They are simple to apply and serve as an alternative approach to logistic or multinomial logistic regression analysis.
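
    A minimal version of the comparison described above, on synthetic data rather than the clinical datasets, is sketched below: probability estimates from a random forest and from logistic regression are scored with the Brier score.

```python
# Sketch: compare probability estimates from a random forest and logistic
# regression via the Brier score on a synthetic dichotomous outcome.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

for name, model in [("random forest", rf), ("logistic regression", lr)]:
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: Brier score = {brier_score_loss(y_te, p):.3f}")
```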

  11. Fast method for the estimation of impact probability of near-Earth objects

    NASA Astrophysics Data System (ADS)

    Vavilov, D.; Medvedev, Y.

    2014-07-01

    We propose a method to estimate the probability of collision of a celestial body with the Earth (or another major planet) at a given time moment t. Let there be a set of observations of a small body. At the initial time moment T_0, a nominal orbit is defined by the least squares method. In our method, a unique coordinate system is used. It is assumed that the errors of the observations are related linearly to the errors of the coordinates and velocities and that the distribution law of the observation errors is normal. The unique frame is defined as follows. First of all, we fix an osculating ellipse of the body's orbit at the time moment t. The mean anomaly M in this osculating ellipse is one coordinate of the introduced system. The spatial coordinate ξ is perpendicular to the plane which contains the fixed ellipse. η is a spatial coordinate, too, and the axes satisfy the right-hand rule. The origin of ξ and η corresponds to the given point M on the ellipse. The components of the velocity are the corresponding derivatives \dot{\xi}, \dot{\eta}, \dot{M}. To calculate the probability of collision, we numerically integrate the equations of the asteroid's motion, taking perturbations into account, and calculate a normal matrix N. The probability is determined as P = \frac{|\det N|^{1/2}}{(2\pi)^3} \int_\Omega e^{-\frac{1}{2} x^{T} N x}\, dx, where x denotes the six-dimensional vector of coordinates and velocities, Ω is the region occupied by the Earth, and the superscript T denotes the matrix transpose operation. To take into account the gravitational attraction of the Earth, the radius of the Earth is increased by a factor of \sqrt{1 + v_s^2 / v_{rel}^2}, where v_s is the escape velocity and v_{rel} is the small body's velocity relative to the Earth. The 6-dimensional integral is integrated analytically over the velocity components on (-∞, +∞), which yields a 3×3 matrix N_1 and reduces the 6-dimensional integral to a 3-dimensional one. To calculate it quickly we do the following. We introduce
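
    Because the integral above is simply the probability mass of a Gaussian over the (gravitationally enlarged) Earth region, a brute-force Monte Carlo check is straightforward. The sketch below does this in the reduced 3D form; the precision matrix N_1, the mean miss vector, and the velocities are assumed values, not taken from the paper.

```python
# Monte Carlo sketch of the 3D collision integral: probability mass of a
# Gaussian position uncertainty (precision N1, mean offset mu) falling inside
# the gravitationally enlarged Earth sphere. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
R_earth = 6371.0                                      # km
v_s, v_rel = 11.2, 15.0                               # km/s, assumed values
R_eff = R_earth * np.sqrt(1.0 + (v_s / v_rel) ** 2)   # gravitational focusing

mu = np.array([15000.0, 5000.0, 0.0])                 # assumed mean miss vector (km)
N1 = np.diag([1e-8, 1e-8, 1e-8])                      # assumed precision matrix (1/km^2)
cov = np.linalg.inv(N1)

samples = rng.multivariate_normal(mu, cov, size=1_000_000)
p_impact = np.mean(np.linalg.norm(samples, axis=1) <= R_eff)
print("estimated impact probability:", p_impact)
```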

  12. Estimating Probabilities of Peptide Database Identifications to LC-FTICR-MS Observations

    SciTech Connect

    Anderson, Kevin K.; Monroe, Matthew E.; Daly, Don S.

    2006-02-24

    One of the grand challenges in the post-genomic era is proteomics, the characterization of the proteins expressed in a cell under specific conditions. A promising technology for high-throughput proteomics is mass spectrometry, specifically liquid chromatography coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS). The accuracy and certainty of the determinations of peptide identities and abundances provided by LC-FTICR-MS are an important and necessary component of systems biology research. Methods: After a tryptically digested protein mixture is analyzed by LC-FTICR-MS, the observed masses and normalized elution times of the detected features are statistically matched to the theoretical masses and elution times of known peptides listed in a large database. The probability of matching is estimated for each peptide in the reference database using statistical classification methods assuming bivariate Gaussian probability distributions on the uncertainties in the masses and the normalized elution times. A database of 69,220 features from 32 LC-FTICR-MS analyses of a tryptically digested bovine serum albumin (BSA) sample was matched to a database populated with 97% false positive peptides. The percentage of high confidence identifications was found to be consistent with other database search procedures. BSA database peptides were identified with high confidence on average in 14.1 of the 32 analyses. False positives were identified on average in just 2.7 analyses. Using a priori probabilities that contrast peptides from expected and unexpected proteins was shown to perform better in identifying target peptides than using equally likely a priori probabilities. This is because a large percentage of the target peptides were similar to unexpected peptides that were included as false positives. The use of triplicate analyses with a ''2 out of 3'' reporting rule was shown to have excellent rejection of false positives.
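
    The scoring idea can be illustrated with a small sketch: one observed (mass, normalized elution time) feature is compared against a few candidate database peptides using a bivariate Gaussian likelihood on the measurement errors, then normalized to match probabilities under equal priors. The candidate values and uncertainties below are invented for illustration.

```python
# Sketch of the matching step: bivariate Gaussian likelihood of the observed
# (mass, NET) errors for each candidate peptide, normalized to probabilities.
import numpy as np
from scipy.stats import multivariate_normal

observed = np.array([1479.7520, 0.412])            # observed mass (Da), NET
candidates = np.array([[1479.7560, 0.405],         # theoretical (mass, NET) values
                       [1479.7300, 0.520],
                       [1480.7510, 0.410]])

cov = np.diag([0.005 ** 2, 0.01 ** 2])             # assumed mass and NET variances
likelihood = multivariate_normal(mean=[0.0, 0.0], cov=cov).pdf(candidates - observed)
posterior = likelihood / likelihood.sum()          # equal priors over candidates
for cand, p in zip(candidates, posterior):
    print(f"peptide at mass {cand[0]:.4f}, NET {cand[1]:.3f}: P(match) = {p:.3f}")
```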

  13. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation that is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements but usually binning is implemented with multi-dimensional arrays which leads to memory requirements which scale exponentially with the number of dimensions. Hence, neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence, this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
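
    A minimal sketch of the hash-table binning idea (not the BASH implementation itself, which is in C++) is given below: a multidimensional histogram stored in a dictionary keyed by bin-index tuples, so that memory grows with the number of occupied bins rather than with the full grid.

```python
# Multidimensional histogram stored in a hash table: memory scales with the
# number of occupied bins, not with bins_per_dim ** n_dims.
import numpy as np
from collections import defaultdict

def bash_histogram(data, bin_width):
    counts = defaultdict(int)
    for point in data:
        key = tuple(np.floor(point / bin_width).astype(int))
        counts[key] += 1
    return counts

def density(counts, point, bin_width, n_total):
    key = tuple(np.floor(point / bin_width).astype(int))
    volume = bin_width ** len(point)
    return counts.get(key, 0) / (n_total * volume)

data = np.random.default_rng(0).normal(size=(100_000, 5))    # 5-dimensional sample
counts = bash_histogram(data, bin_width=0.5)
print("occupied bins:", len(counts), "vs", int((12 / 0.5) ** 5), "for a dense [-6, 6] grid")
print("density near origin:", density(counts, np.zeros(5), 0.5, len(data)))
```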

  14. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A wide variety of methods has been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimates within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters on to the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  15. Estimation of Transitional Probabilities of Discrete Event Systems from Cross-Sectional Survey and its Application in Tobacco Control

    PubMed Central

    Lin, Feng; Chen, Xinguang

    2009-01-01

    In order to find better strategies for tobacco control, it is often critical to know the transitional probabilities among various stages of tobacco use. Traditionally, such probabilities are estimated by analyzing data from longitudinal surveys that are often time-consuming and expensive to conduct. Since cross-sectional surveys are much easier to conduct, it will be much more practical and useful to estimate transitional probabilities from cross-sectional survey data if possible. However, no previous research has attempted to do this. In this paper, we propose a method to estimate transitional probabilities from cross-sectional survey data. The method is novel and is based on a discrete event system framework. In particular, we introduce state probabilities and transitional probabilities to conventional discrete event system models. We derive various equations that can be used to estimate the transitional probabilities. We test the method using cross-sectional data of the National Survey on Drug Use and Health. The estimated transitional probabilities can be used in predicting the future smoking behavior for decision-making, planning and evaluation of various tobacco control programs. The method also allows a sensitivity analysis that can be used to find the most effective way of tobacco control. Since there are much more cross-sectional survey data in existence than longitudinal ones, the impact of this new method is expected to be significant. PMID:20161437
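
    The full discrete event system model is beyond an abstract-level sketch, but the core idea of recovering a transition probability from a single cross-section can be shown with a toy one-way "never user" to "ever user" model, assuming adjacent age groups behave like successive time points of one cohort; the prevalences below are made up.

```python
# Toy sketch: with a one-way never-user -> ever-user transition and the
# assumption that adjacent age groups stand in for successive time points,
# the initiation probability follows from never-user proportions in one
# cross-sectional survey. The proportions are invented for illustration.
never_user_by_age = {14: 0.82, 15: 0.74, 16: 0.65}   # assumed proportions

for age in (14, 15):
    p_init = 1.0 - never_user_by_age[age + 1] / never_user_by_age[age]
    print(f"P(initiate between age {age} and {age + 1}) ~ {p_init:.3f}")
```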

  16. ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning

    NASA Astrophysics Data System (ADS)

    Sadeh, I.; Abdalla, F. B.; Lahav, O.

    2016-10-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.

  17. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability to reliably estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  18. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena.

  19. Estimating the probability for a protein to have a new fold: A statistical computational model

    PubMed Central

    Portugaly, Elon; Linial, Michal

    2000-01-01

    Structural genomics aims to solve a large number of protein structures that represent the protein space. Currently an exhaustive solution for all structures seems prohibitively expensive, so the challenge is to define a relatively small set of proteins with new, currently unknown folds. This paper presents a method that assigns each protein with a probability of having an unsolved fold. The method makes extensive use of protomap, a sequence-based classification, and scop, a structure-based classification. According to protomap, the protein space encodes the relationship among proteins as a graph whose vertices correspond to 13,354 clusters of proteins. A representative fold for a cluster with at least one solved protein is determined after superposition of all scop (release 1.37) folds onto protomap clusters. Distances within the protomap graph are computed from each representative fold to the neighboring folds. The distribution of these distances is used to create a statistical model for distances among those folds that are already known and those that have yet to be discovered. The distribution of distances for solved/unsolved proteins is significantly different. This difference makes it possible to use Bayes' rule to derive a statistical estimate that any protein has a yet undetermined fold. Proteins that score the highest probability to represent a new fold constitute the target list for structural determination. Our predicted probabilities for unsolved proteins correlate very well with the proportion of new folds among recently solved structures (new scop 1.39 records) that are disjoint from our original training set. PMID:10792051

  20. Fall risk probability estimation based on supervised feature learning using public fall datasets.

    PubMed

    Koshmak, Gregory A; Linden, Maria; Loutfi, Amy

    2016-08-01

    Risk of falling is considered among the major threats to the elderly population and has therefore started to play an important role in modern healthcare. With recent developments in sensor technology, the number of studies dedicated to reliable fall detection systems has increased drastically. However, there is still a lack of a universal approach to the evaluation of the developed algorithms. In the following study we make an attempt to find publicly available fall datasets and analyze similarities among them using supervised learning. After performing a similarity assessment based on multidimensional scaling, we indicate the most representative feature vector corresponding to each specific dataset. This vector, obtained from real-life data, is subsequently used to estimate fall risk probabilities for a statistical fall detection model. Finally, we conclude with some observations regarding the similarity assessment results and provide suggestions towards an efficient approach for the evaluation of fall detection studies.

  1. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  2. Detection probabilities and site occupancy estimates for amphibians at Okefenokee National Wildlife Refuge

    USGS Publications Warehouse

    Smith, L.L.; Barichivich, W.J.; Staiger, J.S.; Smith, Kimberly G.; Dodd, C.K.

    2006-01-01

    We conducted an amphibian inventory at Okefenokee National Wildlife Refuge from August 2000 to June 2002 as part of the U.S. Department of the Interior's national Amphibian Research and Monitoring Initiative. Nineteen species of amphibians (15 anurans and 4 caudates) were documented within the Refuge, including one protected species, the Gopher Frog Rana capito. We also collected 1 y of monitoring data for amphibian populations and incorporated the results into the inventory. Detection probabilities and site occupancy estimates for four species, the Pinewoods Treefrog (Hyla femoralis), Pig Frog (Rana grylio), Southern Leopard Frog (R. sphenocephala) and Carpenter Frog (R. virgatipes) are presented here. Detection probabilities observed in this study indicate that spring and summer surveys offer the best opportunity to detect these species in the Refuge. Results of the inventory suggest that substantial changes may have occurred in the amphibian fauna within and adjacent to the swamp. However, monitoring the amphibian community of Okefenokee Swamp will prove difficult because of the logistical challenges associated with a rigorous statistical assessment of status and trends.

  3. Estimating Landholders’ Probability of Participating in a Stewardship Program, and the Implications for Spatial Conservation Priorities

    PubMed Central

    Adams, Vanessa M.; Pressey, Robert L.; Stoeckl, Natalie

    2014-01-01

    The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements - conservation covenants and management agreements - based on payment level and proportion of properties required to be managed. We then spatially predicted landholders’ probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation. PMID:24892520

  4. Estimation of the failure probability during EGS stimulation based on borehole data

    NASA Astrophysics Data System (ADS)

    Meller, C.; Kohl, Th.; Gaucher, E.

    2012-04-01

    In recent times, the search for alternative sources of energy has been fostered by the scarcity of fossil fuels. With its ability to permanently provide electricity or heat with little emission of CO2, geothermal energy will have an important share in the energy mix of the future. Within Europe, scientists have identified many locations with conditions suitable for Enhanced Geothermal System (EGS) projects. In order to provide sufficiently high reservoir permeability, EGS require borehole stimulations prior to installation of power plants (Gérard et al., 2006). Induced seismicity during water injection into EGS reservoirs is a factor that currently can be neither predicted nor controlled. Often, people living near EGS projects are frightened by smaller earthquakes occurring during stimulation or injection. As this fear can lead to widespread disapproval of geothermal power plants, it is desirable to find a way to estimate the probability of fractures shearing when water is injected at a given pressure into a geothermal reservoir. This provides knowledge that enables prediction of the mechanical behavior of a reservoir in response to a change in pore pressure conditions. In the present study, an approach for estimating the shearing probability is proposed, based on statistical analyses of fracture distribution, orientation and clusters, together with their geological properties. Based on geophysical logs of five wells in Soultz-sous-Forêts, France, and with the help of statistical tools, the Mohr criterion, and geological and mineralogical properties of the host rock and the fracture fillings, correlations between the wells are analyzed. This is achieved with the self-written MATLAB code Fracdens, which enables us to statistically analyze the log files in different ways. With the application of a pore pressure change, the evolution of the critical pressure on the fractures can be determined. A special focus is on the clay fillings of the fractures and how they reduce

  5. Estimated probabilities and volumes of postwildfire debris flows, a prewildfire evaluation for the upper Blue River watershed, Summit County, Colorado

    USGS Publications Warehouse

    Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.

    2011-01-01

    The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.

  6. Estimating site occupancy rates for aquatic plants using spatial sub-sampling designs when detection probabilities are less than one

    USGS Publications Warehouse

    Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.

    2011-01-01

    Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
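
    The single-season occupancy idea referred to above (occupancy probability psi estimated jointly with detection probability p from repeat-visit detection histories) can be sketched as a small maximum-likelihood fit; the simulated histories below are illustrative and omit the covariate and mixture models considered in the article.

```python
# Minimal constant-psi, constant-p occupancy model fit by maximum likelihood
# from detection/non-detection histories (sites x visits).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
psi_true, p_true, n_sites, n_visits = 0.6, 0.4, 200, 4
occupied = rng.random(n_sites) < psi_true
histories = (rng.random((n_sites, n_visits)) < p_true) & occupied[:, None]

def neg_log_lik(params):
    psi, p = expit(params)                       # keep parameters in (0, 1)
    d = histories.sum(axis=1)                    # number of detections per site
    lik = np.where(d > 0,
                   psi * p ** d * (1 - p) ** (n_visits - d),
                   psi * (1 - p) ** n_visits + (1 - psi))
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
naive = np.mean(histories.sum(axis=1) > 0)       # naive estimate ignoring p < 1
print(f"psi_hat={psi_hat:.2f}, p_hat={p_hat:.2f}, naive occupancy={naive:.2f}")
```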

  7. A logistic regression equation for estimating the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Archfield, Stacey A.

    2002-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing perennially at a specific site in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing whether streams are perennial or intermittent at a specific site in Massachusetts. This information is needed to assist these environmental agencies, who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending along the length of each side of the stream from the mean annual high-water line along each side of perennial streams, with exceptions in some urban areas. The equation was developed by relating the verified perennial or intermittent status of a stream site to selected basin characteristics of naturally flowing streams (no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, waste-water discharge, and so forth) in Massachusetts. Stream sites used in the analysis were identified as perennial or intermittent on the basis of review of measured streamflow at sites throughout Massachusetts and on visual observation at sites in the South Coastal Basin, southeastern Massachusetts. Measured or observed zero flow(s) during months of extended drought as defined by the 310 Code of Massachusetts Regulations (CMR) 10.58(2)(a) were not considered when designating the perennial or intermittent status of a stream site. The database used to develop the equation included a total of 305 stream sites (84 intermittent- and 89 perennial-stream sites in the State, and 50 intermittent- and 82 perennial-stream sites in the South Coastal Basin). Stream sites included in the database had drainage areas that ranged from 0.14 to 8.94 square miles in the State and from 0.02 to 7.00 square miles in the South Coastal Basin. Results of the logistic regression analysis

  8. Estimating a neutral reference for electroencephalographic recordings: The importance of using a high-density montage and a realistic head model

    PubMed Central

    Liu, Quanying; Balsters, Joshua H.; Baechinger, Marc; van der Groen, Onno; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Objective In electroencephalography (EEG) measurements, the signal of each recording electrode is contrasted with a reference electrode or a combination of electrodes. The estimation of a neutral reference is a long-standing issue in EEG data analysis, which has motivated the proposal of different re-referencing methods, among which linked-mastoid re-referencing (LMR), average re-referencing (AR) and reference electrode standardization technique (REST). In this study we quantitatively assessed the extent to which the use of a high-density montage and a realistic head model can impact on the optimal estimation of a neutral reference for EEG recordings. Approach Using simulated recordings generated by projecting specific source activity over the sensors, we assessed to what extent AR, REST and LMR may distort the scalp topography. We examined the impact electrode coverage has on AR and REST, and how accurate the REST reconstruction is for realistic and less realistic (three-layer and single-layer spherical) head models, and with possible uncertainty in the electrode positions. We assessed LMR, AR and REST also in the presence of typical EEG artifacts that are mixed in the recordings. Finally, we applied them to real EEG data collected in a target detection experiment to corroborate our findings on simulated data. Main results Both AR and REST have relatively low reconstruction errors compared to LMR, and REST is less sensitive than AR and LMR to artifacts mixed in the EEG data. For both AR and REST, high electrode density yields low re-referencing reconstruction errors. A realistic head model is critical for REST, leading to a more accurate estimate of a neutral reference compared to spherical head models. With a low-density montage, REST shows a more reliable reconstruction than AR either with a realistic or a three-layer spherical head model. Conversely, with a high-density montage AR yields better results unless precise information on electrode positions is

  9. Estimating a neutral reference for electroencephalographic recordings: the importance of using a high-density montage and a realistic head model

    NASA Astrophysics Data System (ADS)

    Liu, Quanying; Balsters, Joshua H.; Baechinger, Marc; van der Groen, Onno; Wenderoth, Nicole; Mantini, Dante

    2015-10-01

    Objective. In electroencephalography (EEG) measurements, the signal of each recording electrode is contrasted with a reference electrode or a combination of electrodes. The estimation of a neutral reference is a long-standing issue in EEG data analysis, which has motivated the proposal of different re-referencing methods, among which linked-mastoid re-referencing (LMR), average re-referencing (AR) and reference electrode standardization technique (REST). In this study we quantitatively assessed the extent to which the use of a high-density montage and a realistic head model can impact on the optimal estimation of a neutral reference for EEG recordings. Approach. Using simulated recordings generated by projecting specific source activity over the sensors, we assessed to what extent AR, REST and LMR may distort the scalp topography. We examined the impact electrode coverage has on AR and REST, and how accurate the REST reconstruction is for realistic and less realistic (three-layer and single-layer spherical) head models, and with possible uncertainty in the electrode positions. We assessed LMR, AR and REST also in the presence of typical EEG artifacts that are mixed in the recordings. Finally, we applied them to real EEG data collected in a target detection experiment to corroborate our findings on simulated data. Main results. Both AR and REST have relatively low reconstruction errors compared to LMR, and REST is less sensitive than AR and LMR to artifacts mixed in the EEG data. For both AR and REST, high electrode density yields low re-referencing reconstruction errors. A realistic head model is critical for REST, leading to a more accurate estimate of a neutral reference compared to spherical head models. With a low-density montage, REST shows a more reliable reconstruction than AR either with a realistic or a three-layer spherical head model. Conversely, with a high-density montage AR yields better results unless precise information on electrode positions

  10. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David T.; Ishii, Audrey

    2016-06-28

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, generalized skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at

  11. A Probability Model for Evaluating the Bias and Precision of Influenza Vaccine Effectiveness Estimates from Case-Control Studies

    PubMed Central

    Haber, M.; An, Q.; Foppa, I. M.; Shay, D. K.; Ferdinands, J. M.; Orenstein, W. A.

    2014-01-01

    Summary As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARI) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly-used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs. PMID:25147970
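
    For reference, the basic test-negative VE calculation that the model evaluates reduces to one minus an odds ratio from a 2x2 table of vaccination status by influenza test result; the counts below are hypothetical.

```python
# Sketch of the test-negative VE calculation: VE = 1 - odds ratio of
# vaccination among influenza-positive vs. influenza-negative ARI patients.
import numpy as np

# rows: influenza-positive, influenza-negative; columns: vaccinated, unvaccinated
counts = np.array([[ 80, 220],
                   [300, 400]])

odds_ratio = (counts[0, 0] * counts[1, 1]) / (counts[0, 1] * counts[1, 0])
print(f"odds ratio = {odds_ratio:.2f}, estimated VE = {1 - odds_ratio:.1%}")
```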

  12. Estimation of the Probable Maximum Flood for a Small Lowland River in Poland

    NASA Astrophysics Data System (ADS)

    Banasik, K.; Hejduk, L.

    2009-04-01

    The planning, design, and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in the study to predict the PMF for the small agricultural lowland river basin of the Zagozdzonka (left tributary of the Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area of 82 km2 upstream of the gauge at Plachty, has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences (SGGW) since 1962. An over 40-year flow record was used in a previous investigation for predicting the T-year flood discharge (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the results with the 100-year flood. A new depth-duration curve of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). The exponential formula, with an exponent value of 0.47, i.e. close to the exponent in the formula for the world PMP and also in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN method (USDA-SCS, 1986). The CN value as well as the parameters of the IUH model (Nash, 1957) have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. The variability of the parameter values with the size of the events will be discussed in the paper. The results of the analysis show that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct hydrographs caused by rainfall events of critical duration is 4.0. References 1.Banasik K

  14. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data.

    PubMed

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan E

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n ≤ m, and the task is relatively straightforward for well-defined functional relationships. For example, if Y1 and Y2 are independent random variables, each uniform on [0, 1], one can determine the distribution of X = Y1 + Y2; here m = 2 and n = 1. However, biological and physical situations can arise where n > m and the functional relation Y→X is non-unique. In general, in the absence of additional information, there is no unique solution to Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
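
    The worked example mentioned in the abstract (the density of X = Y1 + Y2 for independent Uniform(0, 1) variables) can be checked directly; the sketch below compares a Monte Carlo histogram against the triangular density and is independent of the MaxEnt machinery itself.

```python
# The density of X = Y1 + Y2 with Y1, Y2 ~ Uniform(0, 1) is triangular:
# f(x) = x on [0, 1] and f(x) = 2 - x on (1, 2]. Checked here by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000) + rng.random(1_000_000)
hist, edges = np.histogram(x, bins=40, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = np.where(centers < 1.0, centers, 2.0 - centers)   # triangular density
print("max |MC - analytic| =", np.max(np.abs(hist - analytic)))
```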

  15. Dictionary-based probability density function estimation for high-resolution SAR data

    NASA Astrophysics Data System (ADS)

    Krylov, Vladimir; Moser, Gabriele; Serpico, Sebastiano B.; Zerubia, Josiane

    2009-02-01

    In the context of remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of pixel intensities. In this work, we develop a parametric finite mixture model for the statistics of pixel intensities in high resolution synthetic aperture radar (SAR) images. This method is an extension of a previously existing method for lower resolution images. The method integrates the stochastic expectation maximization (SEM) scheme and the method of log-cumulants (MoLC) with an automatic technique to select, for each mixture component, an optimal parametric model taken from a predefined dictionary of parametric probability density functions (pdf). The proposed dictionary consists of eight state-of-the-art SAR-specific pdfs: Nakagami, log-normal, generalized Gaussian Rayleigh, heavy-tailed Rayleigh, Weibull, K-root, Fisher and generalized Gamma. The designed scheme is endowed with a novel initialization procedure and an algorithm to automatically estimate the optimal number of mixture components. The experimental results with a set of several high resolution COSMO-SkyMed images demonstrate the high accuracy of the designed algorithm, both from the viewpoint of a visual comparison of the histograms, and from the viewpoint of quantitative accuracy measures such as the correlation coefficient (above 99.5%). The method proves to be effective on all the considered images, remaining accurate for multimodal and highly heterogeneous scenes.
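
    One building block of the scheme is the method of log-cumulants, which matches sample log-cumulants to their parametric expressions for each candidate pdf. The sketch below only computes the first three sample log-cumulants from synthetic amplitude data (a stand-in, not COSMO-SkyMed imagery); the MoLC parameter inversion for each dictionary family is omitted.

```python
import numpy as np

def sample_log_cumulants(x: np.ndarray) -> tuple[float, float, float]:
    """First three sample log-cumulants of positive-valued amplitudes x."""
    lx = np.log(x)
    k1 = lx.mean()                       # first log-cumulant
    k2 = lx.var()                        # second log-cumulant
    k3 = ((lx - k1) ** 3).mean()         # third log-cumulant
    return k1, k2, k3

rng = np.random.default_rng(1)
amplitudes = rng.gamma(shape=3.0, scale=1.0, size=50_000)  # synthetic stand-in data
print(sample_log_cumulants(amplitudes))
```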

  16. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model--exponential or Weibull--is fit.
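
    A minimal sketch of the shifted-distribution idea, assuming the exponential option: values above a user-supplied threshold are fitted by maximum likelihood and used to extrapolate tail exceedance probabilities. The data, threshold choice and variable names are hypothetical and do not reproduce the FITS routine itself.

```python
import numpy as np

def fit_shifted_exponential(data: np.ndarray, x_low: float) -> float:
    """MLE rate of a shifted exponential fitted to values above the threshold x_low."""
    exceedances = data[data > x_low] - x_low
    return 1.0 / exceedances.mean()

def tail_probability(x: float, x_low: float, rate: float, p_exceed: float) -> float:
    """P(X > x) for x >= x_low under the fitted shifted-exponential tail."""
    return p_exceed * np.exp(-rate * (x - x_low))

rng = np.random.default_rng(2)
responses = rng.gumbel(loc=10.0, scale=2.0, size=5_000)   # hypothetical response data
x_low = np.quantile(responses, 0.90)                      # user-chosen lower-bound threshold
rate = fit_shifted_exponential(responses, x_low)
p_exceed = (responses > x_low).mean()
print("P(X > 20) ≈", tail_probability(20.0, x_low, rate, p_exceed))
```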

  17. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    NASA Astrophysics Data System (ADS)

    Lussana, C.

    2013-04-01

    The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs) with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been realised using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the results could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, the information can be delivered to users concisely on a local scale, for example: TX extremely cold/hot or TN extremely cold/hot.
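
    One common way to give a Gaussian-based PDF an asymmetrical shape is a two-piece Gaussian with different spreads below and above the mode. The sketch below uses that construction as an assumption, since the paper's exact formulation may differ, and all numbers are hypothetical.

```python
import numpy as np

def two_piece_gaussian_pdf(t, mode, sigma_left, sigma_right):
    """Two-piece Gaussian: different spreads below (cooling) and above (warming) the mode."""
    norm = np.sqrt(2.0 / np.pi) / (sigma_left + sigma_right)
    sigma = np.where(t < mode, sigma_left, sigma_right)
    return norm * np.exp(-0.5 * ((t - mode) / sigma) ** 2)

t = np.linspace(-20.0, 40.0, 601)                 # daily TX values in degrees C (hypothetical)
pdf = two_piece_gaussian_pdf(t, mode=12.0, sigma_left=4.0, sigma_right=6.0)
print("integral ≈", pdf.sum() * (t[1] - t[0]))    # should be close to 1
```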

  18. Using of bayesian networks to estimate the probability of "NATECH" scenario occurrence

    NASA Astrophysics Data System (ADS)

    Dobes, Pavel; Dlabka, Jakub; Jelšovská, Katarína; Polorecká, Mária; Baudišová, Barbora; Danihelka, Pavel

    2015-04-01

    In the twentieth century, Bayesian statistics and probability were not much used (perhaps not a preferred approach) in the analysis and management of natural and industrial risk, nor in the analysis of so-called NATECH accidents (chemical accidents triggered by natural events such as earthquakes, floods, lightning etc.; ref. E. Krausmann, 2011, doi:10.5194/nhess-11-921-2011). From the beginning, the main role was played by so-called "classical" frequentist probability (ref. Neyman, 1937), which relies up to now especially on the outcomes of experiments and monitoring and does not allow expert beliefs, expectations and judgements to be taken into account (which is, on the other hand, one of the well-known pillars of the Bayesian approach to probability). In the last 20 or 30 years, a renaissance of Bayesian statistics can be observed, through publications and conferences, in many scientific disciplines (including various branches of the geosciences). Is the necessity of a certain level of trust in expert judgement within risk analysis back? After several decades of development in this field, the following hypothesis (to be checked) can be proposed: probabilities of complex crisis situations and their TOP events (many NATECH events could be classified as crisis situations or emergencies) cannot be estimated by the classical frequentist approach alone, but also require a Bayesian approach (i.e. with the help of a pre-staged Bayesian network including expert belief and expectation as well as classical frequentist inputs), because there is not always enough quantitative information from the monitoring of historical emergencies, several dependent or independent variables may need to be considered, and in general every emergency situation takes a slightly different course. On this topic, the team of authors presents its proposal of a pre-staged, typified Bayesian network model for a specified NATECH scenario
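
    A minimal numerical sketch of the underlying idea, not the authors' pre-staged network: a frequentist trigger probability is combined with expert-elicited conditional probabilities through the law of total probability and Bayes' rule. All numbers are hypothetical placeholders.

```python
# Minimal two-node illustration of a NATECH-style Bayesian calculation:
# a frequentist estimate of the trigger (flood) combined with an expert-elicited
# conditional probability of a hazmat release given the flood.
# All numbers are hypothetical placeholders.

p_flood = 0.02                 # annual flood probability from gauge records (frequentist)
p_release_given_flood = 0.10   # expert belief: release probability if the site is flooded
p_release_no_flood = 0.001     # baseline release probability in a normal year

# Marginal probability of a release in a given year (law of total probability).
p_release = (p_release_given_flood * p_flood
             + p_release_no_flood * (1.0 - p_flood))

# Diagnostic query: probability that a flood was the trigger, given a release occurred.
p_flood_given_release = p_release_given_flood * p_flood / p_release

print(f"P(release)         = {p_release:.4f}")
print(f"P(flood | release) = {p_flood_given_release:.3f}")
```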

  19. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    PubMed

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014).

  20. Is expert opinion reliable when estimating transition probabilities? The case of HCV-related cirrhosis in Egypt

    PubMed Central

    2014-01-01

    Background Data on HCV-related cirrhosis progression are scarce in developing countries in general, and in Egypt in particular. The objective of this study was to estimate the probability of death and transition between different health stages of HCV (compensated cirrhosis, decompensated cirrhosis and hepatocellular carcinoma) for an Egyptian population of patients with HCV-related cirrhosis. Methods We used the “elicitation of expert opinions” method to obtain collective knowledge from a panel of 23 Egyptian experts (among whom 17 were hepatologists or gastroenterologists and 2 were infectiologists). The questionnaire was based on virtual medical cases and asked the experts to assess probability of death or probability of various cirrhosis complications. The design was a Delphi study: we attempted to obtain a consensus between experts via a series of questionnaires interspersed with group response feedback. Results We found substantial disparity between experts’ answers, and no consensus was reached at the end of the process. Moreover, we obtained high death probability and high risk of hepatocellular carcinoma. The annual transition probability to death was estimated at between 10.1% and 61.5% and the annual probability of occurrence of hepatocellular carcinoma was estimated at between 16.8% and 58.9% (depending on age, gender, time spent in cirrhosis and cirrhosis severity). Conclusions Our results show that eliciting expert opinions is not suited for determining the natural history of diseases due to practitioners’ difficulties in evaluating quantities. Cognitive bias occurring during this type of study might explain our results. PMID:24635942

  1. Small-area estimation of the probability of toxocariasis in New York City based on sociodemographic neighborhood composition.

    PubMed

    Walsh, Michael G; Haseeb, M A

    2014-01-01

    Toxocariasis is increasingly recognized as an important neglected infection of poverty (NIP) in developed countries, and may constitute the most important NIP in the United States (US) given its association with chronic sequelae such as asthma and poor cognitive development. Its potential public health burden notwithstanding, toxocariasis surveillance is minimal throughout the US and so the true burden of disease remains uncertain in many areas. The Third National Health and Nutrition Examination Survey conducted a representative serologic survey of toxocariasis to estimate the prevalence of infection in diverse US subpopulations across different regions of the country. Using the NHANES III surveillance data, the current study applied the predicted probabilities of toxocariasis to the sociodemographic composition of New York census tracts to estimate the local probability of infection across the city. The predicted probability of toxocariasis ranged from 6% among US-born Latino women with a university education to 57% among immigrant men with less than a high school education. The predicted probability of toxocariasis exhibited marked spatial variation across the city, with particularly high infection probabilities in large sections of Queens, and smaller, more concentrated areas of Brooklyn and northern Manhattan. This investigation is the first attempt at small-area estimation of the probability surface of toxocariasis in a major US city. While this study does not define toxocariasis risk directly, it does provide a much needed tool to aid the development of toxocariasis surveillance in New York City.

  2. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-06-01

    Dissolved dense nonaqueous-phase liquid plumes are persistent, widespread problems in the DOE complex. At the Idaho National Engineering and Environmental Laboratory, dissolved trichloroethylene (TCE) is disappearing from the Snake River Plain aquifer (SRPA) by natural attenuation, a finding that saves significant site restoration costs. Acceptance of monitored natural attenuation as a preferred treatment technology requires direct evidence of the processes and rates of the degradation. Our proposal aims to provide that evidence for one such site by testing two hypotheses. First, we believe that realistic values for in situ rates of TCE cometabolism can be obtained by sustaining the putative microorganisms at the low catabolic activities consistent with aquifer conditions. Second, the patterns of functional gene expression evident in these communities under starvation conditions while carrying out TCE cometabolism can be used to diagnose the cometabolic activity in the aquifer itself. Using the cometabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained at this location and validate the long-term stewardship of this plume. Realistic terms for cometabolism of TCE will provide marked improvements in DOE's ability to predict and monitor natural attenuation of chlorinated organics at other sites, increase the acceptability of this solution, and provide significant economic and health benefits through this noninvasive remediation strategy. Finally, this project aims to derive valuable genomic information about the functional attributes of subsurface microbial communities upon which DOE must depend to resolve some of its most difficult contamination issues.

  3. Threatened species and the potential loss of phylogenetic diversity: conservation scenarios based on estimated extinction probabilities and phylogenetic risk analysis.

    PubMed

    Faith, Daniel P

    2008-12-01

    New species conservation strategies, including the EDGE of Existence (EDGE) program, have expanded threatened species assessments by integrating information about species' phylogenetic distinctiveness. Distinctiveness has been measured through simple scores that assign shared credit among species for evolutionary heritage represented by the deeper phylogenetic branches. A species with a high score combined with a high extinction probability receives high priority for conservation efforts. Simple hypothetical scenarios for phylogenetic trees and extinction probabilities demonstrate how such scoring approaches can provide inefficient priorities for conservation. An existing probabilistic framework derived from the phylogenetic diversity measure (PD) properly captures the idea of shared responsibility for the persistence of evolutionary history. It avoids static scores, takes into account the status of close relatives through their extinction probabilities, and allows for the necessary updating of priorities in light of changes in species threat status. A hypothetical phylogenetic tree illustrates how changes in extinction probabilities of one or more species translate into changes in expected PD. The probabilistic PD framework provided a range of strategies that moved beyond expected PD to better consider worst-case PD losses. In another example, risk aversion gave higher priority to a conservation program that provided a smaller, but less risky, gain in expected PD. The EDGE program could continue to promote a list of top species conservation priorities through application of probabilistic PD and simple estimates of current extinction probability. The list might be a dynamic one, with all the priority scores updated as extinction probabilities change. Results of recent studies suggest that estimation of extinction probabilities derived from the red list criteria linked to changes in species range sizes may provide estimated probabilities for many different species
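
    The probabilistic PD framework can be illustrated with a minimal sketch: each branch contributes its length weighted by the probability that at least one descendant species survives, assuming independent extinctions. The toy tree and extinction probabilities below are hypothetical.

```python
# Sketch of expected phylogenetic diversity (PD) under independent extinction
# probabilities: a branch's length is retained with the probability that at
# least one descendant species survives. Tree and probabilities are hypothetical.

# Each branch: (length, list of descendant species)
branches = [
    (1.0, ["A"]), (1.0, ["B"]), (2.0, ["C"]),
    (1.5, ["A", "B"]),            # internal branch above A and B
    (0.5, ["A", "B", "C"]),       # root branch
]
p_extinct = {"A": 0.8, "B": 0.3, "C": 0.1}   # per-species extinction probabilities

def expected_pd(branches, p_extinct):
    total = 0.0
    for length, species in branches:
        p_all_lost = 1.0
        for s in species:
            p_all_lost *= p_extinct[s]
        total += length * (1.0 - p_all_lost)
    return total

print("Expected PD:", expected_pd(branches, p_extinct))
# Lowering one species' extinction probability (e.g. through conservation)
# raises expected PD; the gain depends on the status of its close relatives.
```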

  4. A new technique for predicting geosynchronous satellite collision probability

    NASA Technical Reports Server (NTRS)

    Mccormick, B.

    1986-01-01

    A new technique has been developed to predict the probability of an expired geosynchronous satellite colliding with an active satellite. This new technique employs deterministic methods for modeling the motion of satellites and applies statistical techniques to estimate the collision probability. The collision probability is used to estimate the expected time between collisions based on realistic distributions of expired and active satellites. The primary advantage of this new technique is that realistic distributions can be used in the prediction process instead of uniform distributions as has been used in previous techniques. The expected time between collisions based on a current NORAD database is estimated to be in the hundreds of years.
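
    The advantage of realistic over uniform distributions can be illustrated with a purely synthetic sketch: clustering both satellite populations near a few popular longitude slots increases the number of close approaches relative to a uniform assumption. This is not the NORAD-based analysis; all numbers are hypothetical.

```python
# Illustration of why the longitude distribution matters: clustered ("realistic")
# populations of expired and active satellites produce more close approaches in
# longitude than uniform ones. Purely synthetic; not the NORAD-based analysis.
import numpy as np

rng = np.random.default_rng(3)
n_expired, n_active, tol_deg = 300, 400, 0.1

def close_pairs(expired_lon, active_lon, tol):
    d = np.abs(expired_lon[:, None] - active_lon[None, :])
    d = np.minimum(d, 360.0 - d)            # wrap-around longitude difference
    return int((d < tol).sum())

# Uniform assumption
uni_e = rng.uniform(0.0, 360.0, n_expired)
uni_a = rng.uniform(0.0, 360.0, n_active)

# Clustered assumption: both populations concentrated near a few popular slots
centers = rng.choice([255.0, 285.0, 75.0, 105.0], size=n_expired + n_active)
clustered = (centers + rng.normal(0.0, 5.0, n_expired + n_active)) % 360.0

print("uniform   :", close_pairs(uni_e, uni_a, tol_deg))
print("clustered :", close_pairs(clustered[:n_expired], clustered[n_expired:], tol_deg))
```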

  5. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers. Third, we have systematically considered the aquifer contaminants at different locations in plumes at other DOE sites in order to determine whether MNA is a broadly applicable remediation strategy for chlorinated hydrocarbons (North Wind Inc.). Realistic terms for co-metabolism of TCE will provide marked improvements in DOE’s ability to predict and

  6. Probability of fracture and life extension estimate of the high-flux isotope reactor vessel

    SciTech Connect

    Chang, S.J.

    1998-08-01

    The state of the vessel steel embrittlement as a result of neutron irradiation can be measured by its increase in ductile-brittle transition temperature (DBTT) for fracture, often denoted by RT_NDT for carbon steel. This transition temperature can be calibrated by the drop-weight test and, sometimes, by the Charpy impact test. The life extension for the high-flux isotope reactor (HFIR) vessel is calculated using a fracture mechanics method that incorporates the effect of the DBTT change. Over the life of the vessel, the failure probability of the HFIR vessel is limited by the reactor core melt probability of 10^-4. The operating safety of the reactor is ensured by periodic hydrostatic pressure tests (hydrotests). The hydrotest is performed in order to determine a safe vessel static pressure. The fracture probability as a result of the hydrostatic pressure test is calculated and is used to determine the life of the vessel. Failure to perform the hydrotest imposes a limit on the life of the vessel. Conventional methods of fracture probability calculation, such as those used by the NRC-sponsored PRAISE code and the FAVOR code developed in this laboratory, are based on Monte Carlo simulation. Heavy computations are required. An alternative method of fracture probability calculation by direct probability integration is developed in this paper. The present approach offers simple and expedient ways to obtain numerical results without losing any generality. In this paper, numerical results on (1) the probability of vessel fracture, (2) the hydrotest time interval, and (3) the hydrotest pressure as a result of the DBTT increase are obtained.
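
    A minimal sketch of the direct-integration idea, under assumed (hypothetical) lognormal distributions: the fracture probability is the integral of the applied stress-intensity density times the probability that the degraded fracture toughness is exceeded.

```python
# Fracture probability by direct numerical integration (instead of Monte Carlo):
# P_f = integral of f_applied(k) * F_toughness(k) dk, i.e. the probability that
# the applied stress-intensity factor exceeds the degraded fracture toughness.
# Both distributions below are hypothetical lognormals.
import numpy as np
from scipy import stats

applied = stats.lognorm(s=0.30, scale=40.0)    # applied K during hydrotest (hypothetical)
toughness = stats.lognorm(s=0.25, scale=90.0)  # degraded fracture toughness K_Ic (hypothetical)

k = np.linspace(0.1, 400.0, 8000)
integrand = applied.pdf(k) * toughness.cdf(k)  # P(K_Ic < k) weighted by the applied-K density
p_fracture = np.sum(integrand) * (k[1] - k[0])
print(f"P(fracture during hydrotest) ≈ {p_fracture:.2e}")
```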

  7. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F.S.; Crawford, R.L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers.

  8. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR

    SciTech Connect

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  9. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.
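
    A minimal sketch of how GSP-style outputs can be read as posterior probabilities: a softmax is taken over all subclass units and each class probability is the sum of its subclass outputs. The layout and weights below are random placeholders, not a trained PPMS model.

```python
# Minimal sketch of a generalized-softmax-perceptron-style posterior estimate:
# each class owns several "subclass" linear units, the softmax is taken over all
# units, and a class probability is the sum of its subclass outputs.
import numpy as np

rng = np.random.default_rng(4)
n_features, subclasses_per_class = 5, [2, 3]           # two classes, hypothetical layout

W = rng.normal(size=(sum(subclasses_per_class), n_features))
b = rng.normal(size=sum(subclasses_per_class))

def class_posteriors(x):
    z = W @ x + b
    soft = np.exp(z - z.max())
    soft /= soft.sum()
    splits = np.cumsum(subclasses_per_class)[:-1]      # sum subclass outputs per class
    return np.array([s.sum() for s in np.split(soft, splits)])

x = rng.normal(size=n_features)
p = class_posteriors(x)
print("posterior estimates:", p, "sum =", p.sum())
```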

  10. Empirical estimation of the conditional probability of natech events within the United States.

    PubMed

    Santella, Nicholas; Steinberg, Laura J; Aguirra, Gloria Andrea

    2011-06-01

    Natural disasters are the cause of a sizeable number of hazmat releases, referred to as "natechs." An enhanced understanding of natech probability, allowing for predictions of natech occurrence, is an important step in determining how industry and government should mitigate natech risk. This study quantifies the conditional probabilities of natechs at TRI/RMP and SICS 1311 facilities given the occurrence of hurricanes, earthquakes, tornadoes, and floods. During hurricanes, a higher probability of releases was observed due to storm surge (7.3 releases per 100 TRI/RMP facilities exposed vs. 6.2 for SIC 1311) compared to category 1-2 hurricane winds (5.6 TRI, 2.6 SIC 1311). Logistic regression confirms the statistical significance of the greater propensity for releases at RMP/TRI facilities, and during some hurricanes, when controlling for hazard zone. The probability of natechs at TRI/RMP facilities during earthquakes increased from 0.1 releases per 100 facilities at MMI V to 21.4 at MMI IX. The probability of a natech at TRI/RMP facilities within 25 miles of a tornado was small (∼0.025 per 100 facilities), reflecting the limited area directly affected by tornadoes. Areas inundated during flood events had a probability of 1.1 releases per 100 facilities but demonstrated widely varying natech occurrence during individual events, indicating that factors not quantified in this study such as flood depth and speed are important for predicting flood natechs. These results can inform natech risk analysis, aid government agencies responsible for planning response and remediation after natural disasters, and should be useful in raising awareness of natech risk within industry.

  11. Absolute probability estimates of lethal vessel strikes to North Atlantic right whales in Roseway Basin, Scotian Shelf.

    PubMed

    van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T

    2012-10-01

    Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.

  12. Estimation of the Probability of Labor Force Participation of the AFDC Population-At-Risk

    DTIC Science & Technology

    1977-01-01

    probability of labor force participation (LFP) of female family heads with dependent children present, the Aid to Families with Dependent Children (AFDC... female and if dependent children were present, which may be viewed as the AFDC population-at-risk. Only those family heads who were in the civilian

  13. Sample Size Determination for Estimation of Sensor Detection Probabilities Based on a Test Variable

    DTIC Science & Technology

    2007-06-01

    No abstract is available for this record; only report metadata and table-of-contents fragments were captured. Subject terms include sample size, binomial proportion, confidence interval, and coverage probability. The literature review covers confidence interval methods for the binomial proportion, including the Wald confidence interval and the Wilson score confidence interval.
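
    Since the record's table of contents points to the Wald and Wilson score intervals for a binomial proportion, a brief sketch of both intervals may be useful; the example counts are hypothetical.

```python
import math

def wald_interval(successes: int, n: int, z: float = 1.96):
    """Wald confidence interval for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Wilson score interval; better coverage for small n or extreme p."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return center - half, center + half

print("Wald  :", wald_interval(3, 20))
print("Wilson:", wilson_interval(3, 20))
```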

  14. Predicting Human Performance. I. Estimating the Probability of Visual Detection. Final Report.

    ERIC Educational Resources Information Center

    Teichner, Warren H.; Krebs, Marjorie J.

    This review is one in a series intended to develop methods which maximize the use of the existing scientific literature as a basis for predicting human performance. It is concerned with sensory performance in target detection, defined in terms of the "probability of detection" of a flash of light. Two conditions of detection are…

  15. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-time investigations. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the required resolution to provide accurate estimates by the histogram method. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
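
    For context, the simplest nearest-neighbour entropy estimator (Kozachenko-Leonenko, k = 1) for a plain d-dimensional translational variable is sketched below; the paper's joint rotation-translation treatment is more involved and is not reproduced here.

```python
# Kozachenko-Leonenko (k = 1) nearest-neighbour entropy estimate for a plain
# d-dimensional variable; a simplification relative to the full six-dimensional
# rotation-translation problem discussed in the abstract.
import numpy as np
from math import gamma, log, pi
from scipy.spatial import cKDTree

def kl_entropy(samples: np.ndarray) -> float:
    """Kozachenko-Leonenko differential entropy estimate (in nats)."""
    n, d = samples.shape
    dist, _ = cKDTree(samples).query(samples, k=2)   # column 1: nearest-neighbour distance
    rho = dist[:, 1]
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)           # volume of the d-dimensional unit ball
    euler_gamma = 0.5772156649
    return d * np.mean(np.log(rho)) + log(v_d) + log(n - 1) + euler_gamma

rng = np.random.default_rng(5)
x = rng.normal(size=(20_000, 3))                     # true entropy = 1.5*ln(2*pi*e) ≈ 4.257
print("estimated:", kl_entropy(x))
```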

  16. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  17. Use of portable antennas to estimate abundance of PIT-tagged fish in small streams: Factors affecting detection probability

    USGS Publications Warehouse

    O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.

    2010-01-01

    Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.

  18. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-07-26

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing temporal series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.

  19. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing temporal series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812
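
    A loose, hedged sketch of the idea (not the authors' algorithm): accumulate the excess of a magma-supply proxy, such as soil CO2 flux, over its long-term baseline, so that pulses of extra supply show up as jumps in the accumulated surplus. The time series below is synthetic.

```python
# Synthetic illustration: cumulative excess of a supply proxy over its baseline
# as a rough stand-in for "surplus of magma". Not the authors' exact procedure;
# in practice detrending/denoising would precede this step.
import numpy as np

rng = np.random.default_rng(6)
days = 3650
flux = 100.0 + rng.normal(0.0, 10.0, days)            # synthetic daily soil CO2 flux
flux[1200:1260] += 80.0                               # a pulse of extra supply
flux[2900:2950] += 120.0                              # a second, larger pulse

baseline = np.median(flux)                            # steady supply level
surplus = np.cumsum(np.clip(flux - baseline, 0.0, None))

for day in (1000, 1500, 2800, 3300):                  # jumps appear after each pulse
    print(f"day {day}: accumulated surplus = {surplus[day]:.0f}")
```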

  20. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  1. INCLUDING TRANSITION PROBABILITIES IN NEST SURVIVAL ESTIMATION: A MAYFIELD MARKOV CHAIN

    EPA Science Inventory

    This manuscript is primarily an exploration of the statistical properties of nest-survival estimates for terrestrial songbirds. The Mayfield formulation described herein should allow researchers to test for complicated effects of stressors on daily survival and overall success, i...
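
    For reference, the classic Mayfield estimator that the Markov-chain formulation builds on computes a daily survival rate from exposure-days and raises it to the length of the nesting period; the counts below are hypothetical.

```python
# Classic Mayfield estimator: daily survival rate (DSR) from exposure-days, and
# overall nest success as DSR raised to the length of the nesting period.
# Numbers are hypothetical.

def mayfield_success(failures: int, exposure_days: float, period_days: int) -> float:
    dsr = 1.0 - failures / exposure_days       # daily survival rate
    return dsr ** period_days                  # probability a nest survives the whole period

print(f"overall success ≈ {mayfield_success(failures=12, exposure_days=400.0, period_days=25):.3f}")
```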

  2. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  3. Performance of methods for estimating the effect of covariates on group membership probabilities in group-based trajectory models.

    PubMed

    Davies, Christopher E; Giles, Lynne C; Glonek, Gary Fv

    2017-01-01

    One purpose of a longitudinal study is to gain insight of how characteristics at earlier points in time can impact on subsequent outcomes. Typically, the outcome variable varies over time and the data for each individual can be used to form a discrete path of measurements, that is a trajectory. Group-based trajectory modelling methods seek to identify subgroups of individuals within a population with trajectories that are more similar to each other than to trajectories in distinct groups. An approach to modelling the influence of covariates measured at earlier time points in the group-based setting is to consider models wherein these covariates affect the group membership probabilities. Models in which prior covariates impact the trajectories directly are also possible but are not considered here. In the present study, we compared six different methods for estimating the effect of covariates on the group membership probabilities, which have different approaches to account for the uncertainty in the group membership assignment. We found that when investigating the effect of one or several covariates on a group-based trajectory model, the full likelihood approach minimized the bias in the estimate of the covariate effect. In this '1-step' approach, the estimation of the effect of covariates and the trajectory model are carried out simultaneously. Of the '3-step' approaches, where the effect of the covariates is assessed subsequent to the estimation of the group-based trajectory model, only Vermunt's improved 3 step resulted in bias estimates similar in size to the full likelihood approach. The remaining methods considered resulted in considerably higher bias in the covariate effect estimates and should not be used. In addition to the bias empirically demonstrated for the probability regression approach, we have shown analytically that it is biased in general.

  4. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear systems, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is given as an integral of the square-root PDF along the space direction, which leads to a function of time only that can be used to construct a residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  5. Inverse problems in cancellous bone: estimation of the ultrasonic properties of fast and slow waves using Bayesian probability theory.

    PubMed

    Anderson, Christian C; Bauer, Adam Q; Holland, Mark R; Pakula, Michal; Laugier, Pascal; Bretthorst, G Larry; Miller, James G

    2010-11-01

    Quantitative ultrasonic characterization of cancellous bone can be complicated by artifacts introduced by analyzing acquired data consisting of two propagating waves (a fast wave and a slow wave) as if only one wave were present. Recovering the ultrasonic properties of overlapping fast and slow waves could therefore lead to enhancement of bone quality assessment. The current study uses Bayesian probability theory to estimate phase velocity and normalized broadband ultrasonic attenuation (nBUA) parameters in a model of fast and slow wave propagation. Calculations are carried out using Markov chain Monte Carlo with simulated annealing to approximate the marginal posterior probability densities for parameters in the model. The technique is applied to simulated data, to data acquired on two phantoms capable of generating two waves in acquired signals, and to data acquired on a human femur condyle specimen. The models are in good agreement with both the simulated and experimental data, and the values of the estimated ultrasonic parameters fall within expected ranges.

  6. Automatic estimation of sleep level for nap based on conditional probability of sleep stages and an exponential smoothing method.

    PubMed

    Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi

    2013-01-01

    An automatic sleep level estimation method was developed for the monitoring and regulation of daytime nap sleep. The recorded nap data are separated into continuous 5-second segments. Features are extracted from the EEG, EOG and EMG. A sleep level parameter is defined and estimated based on the conditional probability of sleep stages. An exponential smoothing method is applied to the estimated sleep level. A total of 12 healthy subjects, with an average age of 22 years, participated in the experimental work. Compared with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real-time monitoring and regulation of naps is realizable based on the developed technique.
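
    The exponential smoothing step is the standard recursion s_t = α·x_t + (1 − α)·s_{t−1} applied to the per-segment sleep-level estimates; the smoothing factor and input values below are hypothetical.

```python
# Exponential smoothing of per-segment sleep-level estimates.
# The smoothing factor is a hypothetical choice, not the value used in the study.

def exponential_smoothing(levels, alpha=0.3):
    smoothed = [levels[0]]
    for x in levels[1:]:
        smoothed.append(alpha * x + (1.0 - alpha) * smoothed[-1])
    return smoothed

raw = [0.1, 0.2, 0.8, 0.7, 0.9, 0.4, 0.5]   # sleep level per 5-second segment (hypothetical)
print(exponential_smoothing(raw))
```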

  7. Speech enhancement via two-stage dual tree complex wavelet packet transform with a speech presence probability estimator

    NASA Astrophysics Data System (ADS)

    Sun, Pengfei; Qin, Jun

    2017-02-01

    In this paper, a two-stage dual tree complex wavelet packet transform (DTCWPT) based speech enhancement algorithm is proposed, in which a speech presence probability (SPP) estimator and a generalized minimum mean squared error (MMSE) estimator are developed. To overcome the drawback of signal distortions caused by the downsampling of the WPT, a two-stage analytic decomposition concatenating undecimated WPT (UWPT) and decimated WPT is employed. An SPP estimator in the DTCWPT domain is derived based on a generalized Gamma distribution of speech and a Gaussian noise assumption. The validation results show that the proposed algorithm improves the perceptual evaluation of speech quality (PESQ) and segmental signal-to-noise ratio (SegSNR) at low SNR in nonstationary noise, compared with four other state-of-the-art speech enhancement algorithms, including optimally modified LSA (OM-LSA), soft masking using a posteriori SNR uncertainty (SMPO), a posteriori SPP based MMSE estimation (MMSE-SPP), and adaptive Bayesian wavelet thresholding (BWT).

  8. Probability distributions of the logarithm of inter-spike intervals yield accurate entropy estimates from small datasets.

    PubMed

    Dorval, Alan D

    2008-08-15

    The maximal information that the spike train of any neuron can pass on to subsequent neurons can be quantified as the neuronal firing pattern entropy. Difficulties associated with estimating entropy from small datasets have proven an obstacle to the widespread reporting of firing pattern entropies and more generally, the use of information theory within the neuroscience community. In the most accessible class of entropy estimation techniques, spike trains are partitioned linearly in time and entropy is estimated from the probability distribution of firing patterns within a partition. Ample previous work has focused on various techniques to minimize the finite dataset bias and standard deviation of entropy estimates from under-sampled probability distributions on spike timing events partitioned linearly in time. In this manuscript we present evidence that all distribution-based techniques would benefit from inter-spike intervals being partitioned in logarithmic time. We show that with logarithmic partitioning, firing rate changes become independent of firing pattern entropy. We delineate the entire entropy estimation process with two example neuronal models, demonstrating the robust improvements in bias and standard deviation that the logarithmic time method yields over two widely used linearly partitioned time approaches.
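
    The core idea can be sketched by estimating entropy from a histogram of inter-spike intervals partitioned in logarithmic rather than linear time; the spike train and bin count below are synthetic, and the bias corrections discussed in the paper are omitted.

```python
# Entropy from a histogram of inter-spike intervals (ISIs) partitioned in
# logarithmic vs. linear time. Synthetic data; no finite-sample bias correction.
import numpy as np

rng = np.random.default_rng(7)
isis = rng.gamma(shape=2.0, scale=0.05, size=5_000)        # synthetic ISIs, seconds

def histogram_entropy(values: np.ndarray, n_bins: int = 30, log_time: bool = True) -> float:
    data = np.log(values) if log_time else values
    counts, _ = np.histogram(data, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())                  # entropy in bits per interval

print("log-partitioned   :", histogram_entropy(isis, log_time=True))
print("linear-partitioned:", histogram_entropy(isis, log_time=False))
```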

  9. Estimated probability of arsenic in groundwater from bedrock aquifers in New Hampshire, 2011

    USGS Publications Warehouse

    Ayotte, Joseph D.; Cahillane, Matthew; Hayes, Laura; Robinson, Keith W.

    2012-01-01

    The statewide maps generated by the probability models are not designed to predict arsenic concentration in any single well, but they are expected to provide useful information in areas of the State that currently contain little to no data on arsenic concentration. They also may aid in resource decision making, in determining potential risk for private wells, and in ecological-level analysis of disease outcomes. The approach for modeling arsenic in groundwater could also be applied to other environmental contaminants that have potential implications for human health, such as uranium, radon, fluoride, manganese, volatile organic compounds, nitrate, and bacteria.

  10. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2015-06-01

    Riverbank erosion affects river morphology and local habitat and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict vulnerable to erosion areas is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a combined deterministic and statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the vulnerable to erosion locations by quantifying the potential eroded area. The derived results are used to determine validation locations for the statistical tool performance evaluation. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form though provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed methodology is easy to use, accurate and can be applied to any region and river.

  11. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2016-01-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a statistical methodology is proposed to predict the probability of the presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the logistic regression methodology. It is developed in two forms, logistic regression and locally weighted logistic regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use and accurate and can be applied to any region and river.
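
    A minimal sketch of the logistic-regression form of the tool, using hypothetical explanatory variables (bank height, bank slope, near-bank velocity) and synthetic presence/absence labels; the locally weighted variant is not shown.

```python
# Logistic regression for erosion presence/absence at river sections.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 200
X = np.column_stack([
    rng.normal(15.0, 5.0, n),     # bank height (m)
    rng.normal(30.0, 10.0, n),    # bank slope (degrees)
    rng.normal(2.0, 0.5, n),      # near-bank flow velocity (m/s)
])
# Synthetic "truth": steeper, faster sections erode more often.
logit = -8.0 + 0.1 * X[:, 0] + 0.1 * X[:, 1] + 1.5 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
new_section = np.array([[18.0, 40.0, 2.5]])
print("P(erosion) ≈", model.predict_proba(new_section)[0, 1])
```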

  12. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  13. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  14. EVALUATING PROBABILITY SAMPLING STRATEGIES FOR ESTIMATING REDD COUNTS: AN EXAMPLE WITH CHINOOK SALMON (Oncorhynchus tshawytscha)

    EPA Science Inventory

    Precise, unbiased estimates of population size are an essential tool for fisheries management. For a wide variety of salmonid fishes, redd counts from a sample of reaches are commonly used to monitor annual trends in abundance. Using a 9-year time series of georeferenced censuses...

  15. Estimating present day extreme water level exceedance probabilities around the coastline of Australia: tropical cyclone-induced storm surges

    NASA Astrophysics Data System (ADS)

    Haigh, Ivan D.; MacPherson, Leigh R.; Mason, Matthew S.; Wijeratne, E. M. S.; Pattiaratchi, Charitha B.; Crompton, Ryan P.; George, Steve

    2014-01-01

    The incidence of major storm surges in the last decade have dramatically emphasized the immense destructive capabilities of extreme water level events, particularly when driven by severe tropical cyclones. Given this risk, it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood and erosion management, engineering and for future land-use planning and to ensure the risk of catastrophic structural failures due to under-design or expensive wastes due to over-design are minimised. Australia has a long history of coastal flooding from tropical cyclones. Using a novel integration of two modeling techniques, this paper provides the first estimates of present day extreme water level exceedance probabilities around the whole coastline of Australia, and the first estimates that combine the influence of astronomical tides, storm surges generated by both extra-tropical and tropical cyclones, and seasonal and inter-annual variations in mean sea level. Initially, an analysis of tide gauge records has been used to assess the characteristics of tropical cyclone-induced surges around Australia. However, given the dearth (temporal and spatial) of information around much of the coastline, and therefore the inability of these gauge records to adequately describe the regional climatology, an observationally based stochastic tropical cyclone model has been developed to synthetically extend the tropical cyclone record to 10,000 years. Wind and pressure fields derived for these synthetically generated events have then been used to drive a hydrodynamic model of the Australian continental shelf region with annual maximum water levels extracted to estimate exceedance probabilities around the coastline. To validate this methodology, selected historic storm surge events have been simulated and resultant storm surges compared with gauge records. Tropical cyclone induced exceedance probabilities have been combined with

  16. Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.

  17. Estimation of the probability of exposure to metalworking fluids in a population-based case-control study

    PubMed Central

    Park, Dong-Uk; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Armenti, Karla R.; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe here an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally, 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and the US production levels by decade found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. PMID:25256317

  18. Estimating the probability of IQ impairment from blood phenylalanine for phenylketonuria patients: a hierarchical meta-analysis.

    PubMed

    Fonnesbeck, Christopher J; McPheeters, Melissa L; Krishnaswami, Shanthi; Lindegren, Mary Louise; Reimschisel, Tyler

    2013-09-01

    Though the control of blood phenylalanine (Phe) levels is essential for minimizing impairment in individuals with phenylketonuria (PKU), the empirical basis for the selection of specific blood Phe levels as targets has not been evaluated. We evaluated the current evidence that particular Phe levels are optimal for minimizing or avoiding cognitive impairment in individuals with PKU. This work uses meta-estimates of blood Phe-IQ correlation to predict the probability of low IQ for a range of Phe levels. We believe this metric is easily interpretable by clinicians, and hence useful in making recommendations for Phe intake. The median baseline association of Phe with IQ was estimated to be negative, both in the context of historical (median = -0.026, 95 % BCI = [-0.040, -0.013]) and concurrent (-0.007, [-0.014, 0.000]) measurement of Phe relative to IQ. The estimated additive fixed effect of critical period Phe measurement was also nominally negative for historical measurement (-0.010, [-0.022, 0.003]) and positive for concurrent measurement (0.007, [-0.018, 0.035]). Probabilities corresponding to historical measures of blood Phe demonstrated an increasing chance of low IQ with increasing Phe, with a stronger association seen between blood Phe measured during the critical period than later. In contrast, concurrently-measured Phe was more weakly correlated with the probability of low IQ, though the correlation is still positive, irrespective of whether Phe was measured during the critical or non-critical period. This meta-analysis illustrates the utility of a Bayesian hierarchical approach for not only combining information from a set of candidate studies, but also for combining different types of data to estimate parameters of interest.

  19. Use of inverse probability weighting to adjust for non-participation in estimating brain volumes in schizophrenia patients.

    PubMed

    Haapea, Marianne; Veijola, Juha; Tanskanen, Päivikki; Jääskeläinen, Erika; Isohanni, Matti; Miettunen, Jouko

    2011-12-30

    Low participation is a potential source of bias in population-based studies. This article presents the use of inverse probability weighting (IPW) in adjusting for non-participation in estimation of brain volumes among subjects with schizophrenia. Altogether 101 schizophrenia subjects and 187 non-psychotic comparison subjects belonging to the Northern Finland 1966 Birth Cohort were invited to participate in a field study during 1999-2001. Volumes of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) were compared between the 54 participating schizophrenia subjects and 100 comparison subjects. IPW by illness-related auxiliary variables did not affect the estimated GM and WM mean volumes, but increased the estimated CSF mean volume in schizophrenia subjects. When adjusted for intracranial volume and family history of psychosis, IPW led to smaller estimated GM and WM mean volumes. In particular, IPW by disability pension status and a greater amount of hospitalisation due to psychosis affected the estimated mean brain volumes. The IPW method can be used to improve estimates affected by non-participation by reflecting the true differences in the target population.
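
    The following minimal sketch (synthetic data and hypothetical variable names, not the study's code) illustrates the inverse probability weighting idea described above: participation is modelled from auxiliary variables known for all invitees, and participants are weighted by the inverse of their estimated participation probability when computing an adjusted mean.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 300
      aux = rng.normal(size=(n, 2))                          # auxiliary variables (hypothetical)
      p_true = 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * aux[:, 0])))
      participated = rng.random(n) < p_true                  # who actually took part
      outcome = 5.0 + 1.5 * aux[:, 0] + rng.normal(size=n)   # e.g. a brain volume

      # Model participation from auxiliary data available for every invitee
      ps = LogisticRegression().fit(aux, participated).predict_proba(aux)[:, 1]

      naive_mean = outcome[participated].mean()
      ipw_mean = np.average(outcome[participated], weights=1.0 / ps[participated])
      print(f"naive mean: {naive_mean:.2f}, IPW-adjusted mean: {ipw_mean:.2f}")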

  20. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    SciTech Connect

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-02-15

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the

  1. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers-Part I

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry

    2008-01-01

    Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).

  2. Toward Realistic Acquisition Schedule Estimates

    DTIC Science & Technology

    2016-04-30

    inside and outside the government (e.g., Congressional Research Service [Gertler, 2015]; Center for Strategic and International Studies [Harrison, 2016] ... prioritizing seems both necessary and inevitable. A number of options are available, none of them pleasant (Gertler, 2015; Hale, 2016; Harrison, 2016). And ... Harrison, T. (2016). Defense modernization plans through the 2020s: Addressing the bow wave. Retrieved from http://csis.org/publication/defense

  3. Estimated probability of postwildfire debris flows in the 2012 Whitewater-Baldy Fire burn area, southwestern New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Matherne, Anne Marie; Verdin, Kristine L.

    2012-01-01

    In May and June 2012, the Whitewater-Baldy Fire burned approximately 1,200 square kilometers (300,000 acres) of the Gila National Forest, in southwestern New Mexico. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from 128 basins burned by the Whitewater-Baldy Fire. A pair of empirical hazard-assessment models developed by using data from recently burned basins throughout the intermountain Western United States was used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and for selected drainage basins within the burned area. The models incorporate measures of areal burned extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. In response to the 2-year-recurrence, 30-minute-duration rainfall, modeling indicated that four basins have high probabilities of debris-flow occurrence (greater than or equal to 80 percent). For the 10-year-recurrence, 30-minute-duration rainfall, an additional 14 basins are included, and for the 25-year-recurrence, 30-minute-duration rainfall, an additional eight basins, 20 percent of the total, have high probabilities of debris-flow occurrence. In addition, probability analysis along the stream segments can identify specific reaches of greatest concern for debris flows within a basin. Basins with a high probability of debris-flow occurrence were concentrated in the west and central parts of the burned area, including tributaries to Whitewater Creek, Mineral Creek, and Willow Creek. Estimated debris-flow volumes ranged from about 3,000-4,000 cubic meters (m3) to greater than 500,000 m3 for all design storms modeled. Drainage basins with estimated volumes greater than 500,000 m3 included tributaries to Whitewater Creek, Willow

  4. Estimate of the penetrance of BRCA mutation and the COS software for the assessment of BRCA mutation probability.

    PubMed

    Berrino, Jacopo; Berrino, Franco; Francisci, Silvia; Peissel, Bernard; Azzollini, Jacopo; Pensotti, Valeria; Radice, Paolo; Pasanisi, Patrizia; Manoukian, Siranoush

    2015-03-01

    We have designed the user-friendly COS software with the intent of improving estimation of the probability of a family carrying a deleterious BRCA gene mutation. The COS software is similar to the widely-used Bayesian-based BRCAPRO software, but it incorporates improved assumptions on cancer incidence in women with and without a deleterious mutation, takes into account relatives up to the fourth degree, and allows researchers to consider a hypothetical third gene or a polygenic model of inheritance. Since breast cancer incidence and penetrance increase over generations, we estimated birth-cohort-specific incidence and penetrance curves. We estimated breast and ovarian cancer penetrance in 384 BRCA1 and 229 BRCA2 mutated families. We tested the COS performance in 436 Italian breast/ovarian cancer families including 79 with BRCA1 and 27 with BRCA2 mutations. The area under the receiver operating characteristic curve (AUROC) was 84.4 %. The best probability threshold for offering the test was 22.9 %, with sensitivity 80.2 % and specificity 80.3 %. Notwithstanding very different assumptions, COS results were similar to BRCAPRO v6.0.

  5. A stochastic formulation of the gompertzian growth model for in vitro bactericidal kinetics: parameter estimation and extinction probability.

    PubMed

    Ferrante, L; Bompadre, S; Leone, L; Montanari, M P

    2005-06-01

    Time-kill curves have frequently been employed to study the antimicrobial effects of antibiotics. The relevance of pharmacodynamic modeling to these investigations has been emphasized in many studies of bactericidal kinetics. Stochastic models are needed that take into account the randomness of the mechanisms of both bacterial growth and bacteria-drug interactions. However, most of the models currently used to describe antibiotic activity against microorganisms are deterministic. In this paper we examine a stochastic differential equation representing a stochastic version of a pharmacodynamic model of bacterial growth undergoing random fluctuations, and derive its solution, mean value and covariance structure. An explicit likelihood function is obtained both when the process is observed continuously over a period of time and when data are sampled at discrete time points, as is customary in such experiments. Some asymptotic properties of the maximum likelihood estimators for the model parameters are discussed. The model is applied to analyze in vitro time-kill data and to estimate model parameters; the probability of the bacterial population size dropping below some critical threshold is also evaluated. Finally, the relationship between bacterial extinction probability and the pharmacodynamic parameters estimated is discussed.
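
    A minimal Monte Carlo sketch of the kind of model discussed above, assuming a generic stochastic Gompertz form dN = N(a - b ln N) dt + sigma N dW simulated by Euler-Maruyama; the parameter values and extinction threshold are illustrative, not the paper's estimates.

      import numpy as np

      rng = np.random.default_rng(1)
      a, b, sigma = 0.5, 0.3, 0.4        # hypothetical pharmacodynamic parameters
      N0, threshold = 1e6, 1e2           # initial count (CFU/ml) and critical threshold
      T, dt, n_paths = 24.0, 0.01, 2000  # hours, time step, Monte Carlo replicates

      N = np.full(n_paths, N0)
      below = np.zeros(n_paths, dtype=bool)
      for _ in range(int(T / dt)):
          dW = rng.normal(0.0, np.sqrt(dt), n_paths)
          N = N + N * (a - b * np.log(N)) * dt + sigma * N * dW
          N = np.maximum(N, 1e-12)       # keep the state positive for log()
          below |= N < threshold

      print("estimated P(population drops below threshold):", below.mean())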

  6. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.

  7. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    SciTech Connect

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; Barbier, Charlotte N.

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
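
    A toy illustration of the variance-reduction idea behind the smoothing step (not the paper's algorithm or its a posteriori calibration): replacing the indicator 1{Q <= q} with a smooth logistic surrogate sharply reduces the variance of coarse/fine level differences, which is what MLMC exploits.

      import numpy as np

      rng = np.random.default_rng(2)
      q, delta, n = 0.0, 0.2, 100_000

      Q_fine = rng.normal(size=n)                      # stand-in for a fine-level quantity of interest
      Q_coarse = Q_fine + 0.05 * rng.normal(size=n)    # correlated coarse-level approximation

      indicator = lambda x: (x <= q).astype(float)
      smoothed = lambda x: 1.0 / (1.0 + np.exp((x - q) / delta))

      for name, g in [("indicator", indicator), ("smoothed", smoothed)]:
          diff = g(Q_fine) - g(Q_coarse)
          print(f"{name:9s}  E[diff] = {diff.mean():+.4f}  Var[diff] = {diff.var():.5f}")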

  8. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
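
    The core quantity in such combined models is the overall detection probability, the product of availability (from the time-removal component) and perceptibility (from the distance-sampling component). The sketch below computes it for a half-normal distance function averaged over a circular plot; all parameter values are hypothetical.

      import numpy as np

      p_cue, n_intervals = 0.4, 3              # chance a bird gives a cue per count interval
      availability = 1.0 - (1.0 - p_cue) ** n_intervals

      sigma, B = 80.0, 150.0                   # half-normal scale and plot radius (m)
      r = np.linspace(0.0, B, 2000)
      g = np.exp(-r**2 / (2.0 * sigma**2))     # P(detect | available, distance r)
      dr = r[1] - r[0]
      perceptibility = np.sum(2.0 * r * g) * dr / B**2   # area-weighted average over the plot

      p_detect = availability * perceptibility
      print(f"availability={availability:.3f}, perceptibility={perceptibility:.3f}, overall={p_detect:.3f}")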

  9. Analysis of a probability-based SATCOM situational awareness model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Martin, Todd W.; Chang, Kuo-Chu; Tian, Xin; Chen, Genshe

    2016-05-01

    Emerging satellite communication (SATCOM) systems are envisioned to incorporate advanced capabilities for dynamically adapting link and network configurations to meet user performance needs. These advanced capabilities require an understanding of the operating environment as well as the potential outcomes of adaptation decisions. A SATCOM situational awareness and decision-making approach is needed that represents the cause and effect linkage of relevant phenomenology and operating conditions on link performance. Similarly, the model must enable a corresponding diagnostic capability that allows SATCOM payload managers to assess likely causes of observed effects. Prior work demonstrated the ability to use a probabilistic reasoning model for a SATCOM situational awareness model. It provided the theoretical basis and demonstrated the ability to realize such a model. This paper presents an analysis of the probabilistic reasoning approach in the context of its ability to be used for diagnostic purposes. A quantitative assessment is presented to demonstrate the impact of uncertainty on estimation accuracy for several key parameters. The paper also discusses how the results could be used by a higher-level reasoning process to evaluate likely causes of performance shortfalls such as atmospheric conditions, pointing errors, and jamming.

  10. Developing an Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance that leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
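
    A hedged sketch of how such a voltage-dependent short-circuit probability might be fitted; the bridge-test counts below are made up for illustration and are not the experiment's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      voltage = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)  # applied volts
      shorts = np.array([1, 3, 6, 11, 15, 18, 19, 20], dtype=float)     # shorts observed
      trials = np.full_like(voltage, 20.0)                              # bridged whiskers tested

      def logistic(v, v50, k):
          # P(short | voltage); v50 is the voltage at 50% shorting probability
          return 1.0 / (1.0 + np.exp(-k * (v - v50)))

      (v50, k), _ = curve_fit(logistic, voltage, shorts / trials, p0=[20.0, 0.2])
      print(f"estimated v50 = {v50:.1f} V, slope k = {k:.2f} per volt")
      print(f"P(short at 12 V) = {logistic(12.0, v50, k):.3f}")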

  11. Spinodal Decomposition for the Cahn-Hilliard Equation in Higher Dimensions.Part I: Probability and Wavelength Estimate

    NASA Astrophysics Data System (ADS)

    Maier-Paape, Stanislaus; Wanner, Thomas

    This paper is the first in a series of two papers addressing the phenomenon of spinodal decomposition for the Cahn-Hilliard equation on a bounded domain Ω with sufficiently smooth boundary, where f is cubic-like, for example f(u) = u - u^3. We will present the main ideas of our approach and explain in what way our method differs from known results in one space dimension due to Grant [26]. Furthermore, we derive certain probability and wavelength estimates. The probability estimate is needed to understand why in a neighborhood of a homogeneous equilibrium u_0 ≡ μ of the Cahn-Hilliard equation, with mass μ in the spinodal region, a strongly unstable manifold has dominating effects. This is demonstrated for the linearized equation, but will be essential for the nonlinear setting in the second paper [37] as well. Moreover, we introduce the notion of a characteristic wavelength for the strongly unstable directions.

  12. Estimation of reliability and dynamic property for polymeric material at high strain rate using SHPB technique and probability theory

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin

    2008-11-01

    A modified split Hopkinson pressure bar (SHPB) technique with aluminum pressure bars and a pulse shaper was used to achieve a closer impedance match between the pressure bars and specimen materials such as thermally degraded POM (polyoxymethylene) and PP (polypropylene). More distinguishable experimental signals were thereby obtained, allowing a more accurate evaluation of the dynamic deformation behavior of these materials under high-strain-rate loading. The pulse shaping technique reduces non-equilibrium effects in the dynamic material response by modulating the incident wave during the short test period, which increases the rise time of the incident pulse in the SHPB experiment. For the dynamic stress-strain curve obtained from the SHPB experiment, the Johnson-Cook model is applied as a constitutive equation, and its applicability is verified using a probabilistic reliability estimation method. Two reliability methodologies, the FORM and the SORM, are employed. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses, and allows more statistical flexibility in the yield stress than previously published formulations. The failure probability estimated by the SORM is found to be more reliable than that estimated by the FORM, and the failure probability increases with increasing applied stress. Moreover, a sensitivity analysis shows that the Johnson-Cook parameters A and n and the applied stress affect the failure probability more severely than the other random variables.
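
    For reference, the Johnson-Cook flow stress mentioned above is commonly written as follows (assuming the usual parameterization; the paper's exact notation may differ):

      \sigma = \left(A + B\,\varepsilon^{n}\right)
               \left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)
               \left(1 - \left(\frac{T - T_{r}}{T_{m} - T_{r}}\right)^{m}\right)

    Here A, B, n, C, and m are material parameters, the middle factor captures strain-rate hardening relative to a reference strain rate, and the last factor captures thermal softening between a reference temperature T_r and the melting temperature T_m; the limit state function then compares this flow stress with the applied stress.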

  13. Speech enhancement via two-stage dual tree complex wavelet packet transform with a speech presence probability estimator.

    PubMed

    Sun, Pengfei; Qin, Jun

    2017-02-01

    In this paper, a two-stage dual tree complex wavelet packet transform (DTCWPT) based speech enhancement algorithm is proposed, in which a speech presence probability (SPP) estimator and a generalized minimum mean squared error (MMSE) estimator are developed. To overcome the drawback of signal distortions caused by downsampling in the wavelet packet transform (WPT), a two-stage analytic decomposition concatenating undecimated wavelet packet transform (UWPT) and decimated WPT is employed. An SPP estimator in the DTCWPT domain is derived based on a generalized Gamma distribution of speech and a Gaussian noise assumption. The validation results show that the proposed algorithm achieves improved perceptual evaluation of speech quality (PESQ) and segmental signal-to-noise ratio (SegSNR) scores for nonstationary noise at low signal-to-noise ratios (SNRs), compared with four other state-of-the-art speech enhancement algorithms, including optimally modified log-spectral amplitude (OM-LSA), soft masking using a posteriori SNR uncertainty (SMPO), a posteriori SPP based MMSE estimation (MMSE-SPP), and adaptive Bayesian wavelet thresholding (BWT).

  14. Modeling the relationship between most probable number (MPN) and colony-forming unit (CFU) estimates of fecal coliform concentration.

    PubMed

    Gronewold, Andrew D; Wolpert, Robert L

    2008-07-01

    Most probable number (MPN) and colony-forming-unit (CFU) estimates of fecal coliform bacteria concentration are common measures of water quality in coastal shellfish harvesting and recreational waters. Estimating procedures for MPN and CFU have intrinsic variability and are subject to additional uncertainty arising from minor variations in experimental protocol. It has been observed empirically that the standard multiple-tube fermentation (MTF) decimal dilution analysis MPN procedure is more variable than the membrane filtration CFU procedure, and that MTF-derived MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the variability in, and discrepancy between, MPN and CFU measurements. We then compare our model to water quality samples analyzed using both MPN and CFU procedures, and find that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our results indicate that MPN and CFU intra-sample variability does not stem from human error or laboratory procedure variability, but is instead a simple consequence of the probabilistic basis for calculating the MPN. These results demonstrate how probabilistic models can be used to compare samples from different analytical procedures, and to determine whether transitions from one procedure to another are likely to cause a change in quality-based management decisions.
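
    A compact sketch of where an MPN actually comes from (the probabilistic basis the authors point to): a tube inoculated with volume v is positive with probability 1 - exp(-c v) for concentration c, and the MPN is the maximum-likelihood value of c given the pattern of positive tubes. The 5-3-1 tube pattern below is a standard textbook example, not data from the study.

      import numpy as np
      from scipy.optimize import minimize_scalar

      volumes = np.array([10.0, 1.0, 0.1])    # ml inoculated per tube at each dilution
      n_tubes = np.array([5, 5, 5])
      positives = np.array([5, 3, 1])

      def neg_log_lik(c):
          # binomial likelihood of the observed positive/negative tube pattern
          p = np.clip(1.0 - np.exp(-c * volumes), 1e-12, 1.0 - 1e-12)
          return -np.sum(positives * np.log(p) + (n_tubes - positives) * np.log(1.0 - p))

      res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), method="bounded")
      print(f"MPN estimate: about {100.0 * res.x:.0f} per 100 ml")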

  15. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two-, and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
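
    A Monte Carlo illustration of the effect described above (a simplified independent-positions picture, not the paper's peak-statistics method): thresholding the maximum of the detection statistic over N candidate positions inflates the false-detection probability far beyond the single-position value.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      N, threshold, trials = 1000, 4.0, 5000           # positions, sigma threshold, MC trials

      p_single = norm.sf(threshold)                    # per-position false-alarm probability
      p_pred = 1.0 - (1.0 - p_single) ** N             # independent-positions approximation

      noise = rng.normal(size=(trials, N))             # pure noise, no signal present
      p_mc = np.mean(noise.max(axis=1) > threshold)
      print(f"single-position p = {p_single:.2e}")
      print(f"max over {N} positions: predicted {p_pred:.3f}, simulated {p_mc:.3f}")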

  16. Estimation of the probability of exposure to machining fluids in a population-based case-control study.

    PubMed

    Park, Dong-Uk; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Armenti, Karla R; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (0.1 to >0.9), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and U.S. production levels found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. [Supplementary materials are available for this article. Go to the publisher's online edition of Journal of Occupational and Environmental Hygiene for the following free supplemental resources: a list of keywords in the occupational histories that were used to link study subjects to the metalworking fluids (MWFs) modules; recommendations from the literature on selection of MWFs based on type of machining operation, the metal being machined and decade; popular additives to MWFs; the number and proportion of controls who

  17. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
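
    A rough sketch of the at-site fitting step described above, using a simple method-of-moments log-Pearson Type III fit on synthetic annual peaks; it omits the expected moments algorithm, regional skew weighting, and low-outlier tests used in the study.

      import numpy as np
      from scipy.stats import pearson3, skew

      rng = np.random.default_rng(4)
      peaks = rng.lognormal(mean=6.0, sigma=0.6, size=60)    # synthetic annual peak discharges

      logq = np.log10(peaks)
      g, mu, sd = skew(logq, bias=False), logq.mean(), logq.std(ddof=1)

      for aep in (0.5, 0.1, 0.04, 0.01, 0.002):              # 2- to 500-year floods
          q = 10.0 ** pearson3.ppf(1.0 - aep, g, loc=mu, scale=sd)
          print(f"{aep * 100:5.1f}-percent AEP discharge: {q:10.0f}")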

  18. Probability Theory

    NASA Astrophysics Data System (ADS)

    Jaynes, E. T.; Bretthorst, G. Larry

    2003-04-01

    Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.

  19. Building vulnerability to hydro-geomorphic hazards: Estimating damage probability from qualitative vulnerability assessment using logistic regression

    NASA Astrophysics Data System (ADS)

    Ettinger, Susanne; Mounaud, Loïc; Magill, Christina; Yao-Lafourcade, Anne-Françoise; Thouret, Jean-Claude; Manville, Vern; Negulescu, Caterina; Zuccaro, Giulio; De Gregorio, Daniela; Nardone, Stefano; Uchuchoque, Juan Alexis Luque; Arguedas, Anita; Macedo, Luisa; Manrique Llerena, Nélida

    2016-10-01

    The focus of this study is an analysis of building vulnerability through investigating impacts from the 8 February 2013 flash flood event along the Avenida Venezuela channel in the city of Arequipa, Peru. On this day, 124.5 mm of rain fell within 3 h (monthly mean: 29.3 mm) triggering a flash flood that inundated at least 0.4 km2 of urban settlements along the channel, affecting more than 280 buildings, 23 of a total of 53 bridges (pedestrian, vehicle and railway), and leading to the partial collapse of sections of the main road, paralyzing central parts of the city for more than one week. This study assesses the aspects of building design and site specific environmental characteristics that render a building vulnerable by considering the example of a flash flood event in February 2013. A statistical methodology is developed that enables estimation of damage probability for buildings. The applied method uses observed inundation height as a hazard proxy in areas where more detailed hydrodynamic modeling data is not available. Building design and site-specific environmental conditions determine the physical vulnerability. The mathematical approach considers both physical vulnerability and hazard related parameters and helps to reduce uncertainty in the determination of descriptive parameters, parameter interdependency and respective contributions to damage. This study aims to (1) enable the estimation of damage probability for a certain hazard intensity, and (2) obtain data to visualize variations in damage susceptibility for buildings in flood prone areas. Data collection is based on a post-flood event field survey and the analysis of high (sub-metric) spatial resolution images (Pléiades 2012, 2013). An inventory of 30 city blocks was collated in a GIS database in order to estimate the physical vulnerability of buildings. As many as 1103 buildings were surveyed along the affected drainage and 898 buildings were included in the statistical analysis. Univariate and
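
    A minimal sketch of the kind of model described above: logistic regression of a binary damage indicator on observed inundation height plus a simple building attribute. The data are synthetic and the predictors hypothetical; the study's actual variables and coefficients differ.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      n = 500
      inundation = rng.uniform(0.0, 2.5, n)            # observed inundation height (m), hazard proxy
      storeys = rng.integers(1, 4, n)                  # building attribute
      latent = -2.0 + 2.2 * inundation - 0.5 * storeys
      damaged = rng.random(n) < 1.0 / (1.0 + np.exp(-latent))

      X = np.column_stack([inundation, storeys])
      model = LogisticRegression().fit(X, damaged)
      p = model.predict_proba([[1.5, 1]])[0, 1]
      print(f"estimated damage probability at 1.5 m inundation, 1 storey: {p:.2f}")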

  20. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) scheme is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction in decoding time while still maintaining high data-compression efficiency.
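
    The context-tree weighting technique referred to above is built from a very simple ingredient: an adaptive estimate of the next binary symbol's probability from the counts seen so far in a context (the Krichevsky-Trofimov estimator). The sketch below shows only that building block, not the full CTW or CABAC machinery.

      import math

      def kt_probability(zeros: int, ones: int, symbol: int) -> float:
          # P(next symbol | counts) under the Krichevsky-Trofimov (add-1/2) estimator
          count = ones if symbol == 1 else zeros
          return (count + 0.5) / (zeros + ones + 1.0)

      bits = [0, 0, 1, 0, 1, 1, 1, 1, 0, 1]      # bin stream for one context
      zeros = ones = 0
      code_length = 0.0                          # ideal code length an arithmetic coder approaches
      for b in bits:
          code_length += -math.log2(kt_probability(zeros, ones, b))
          zeros, ones = zeros + (b == 0), ones + (b == 1)

      print(f"ideal code length: {code_length:.2f} bits for {len(bits)} symbols")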

  1. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  2. Estimating the probability of identity in a random dog population using 15 highly polymorphic canine STR markers.

    PubMed

    Eichmann, Cordula; Berger, Burkhard; Steinlechner, Martin; Parson, Walther

    2005-06-30

    Dog DNA-profiling is becoming an important supplementary technology for the investigation of accidents and crimes, as dogs are closely integrated into human social life. We investigated 15 highly polymorphic canine STR markers and two sex-related markers of 131 randomly selected dogs from the area around Innsbruck, Tyrol, Austria, which were co-amplified in three PCR multiplex reactions (ZUBECA6, FH2132, FH2087Ua, ZUBECA4, WILMSTF, PEZ15, PEZ6, FH2611, FH2087Ub, FH2054, PEZ12, PEZ2, FH2010, FH2079 and VWF.X). Linkage testing for our set of markers suggested no evidence for linkage between the loci. Heterozygosity (HET), polymorphism information content (PIC) and the probability of identity (P((ID)theoretical), P((ID)unbiased), P((ID)sib)) were calculated for each marker. The HET((exp))-values of the 15 markers lie between 0.6 (VWF.X) and 0.9 (ZUBECA6), and P((ID)sib)-values were found to range between 0.49 (VWF.X) and 0.28 (ZUBECA6). Moreover, the P((ID)sib) was computed for sets of loci by sequentially adding single loci to estimate the information content and the usefulness of the selected marker sets for the identification of dogs. The estimated P((ID)sib) value of all 15 markers amounted to 8.5 x 10(-8). The presented estimations turned out to be a helpful approach for a reasonable choice of markers for the individualisation of dogs.
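
    The probability-of-identity quantities reported above follow standard population-genetic formulas that multiply per-locus values across independent loci; the sketch below uses those textbook formulas with made-up allele frequencies (the marker names are taken from the abstract, the frequencies are not the study's).

      import numpy as np

      loci = {                                     # hypothetical allele frequencies per marker
          "PEZ2": [0.30, 0.25, 0.20, 0.15, 0.10],
          "FH2054": [0.40, 0.35, 0.15, 0.10],
          "VWF.X": [0.55, 0.30, 0.15],
      }

      def pid(p):
          # P(ID) for unrelated individuals at one locus
          p = np.asarray(p)
          pairs = [(2.0 * p[i] * p[j]) ** 2 for i in range(len(p)) for j in range(i + 1, len(p))]
          return float(np.sum(p ** 4) + np.sum(pairs))

      def pid_sib(p):
          # P(ID) among full siblings at one locus
          p = np.asarray(p)
          s2, s4 = float(np.sum(p ** 2)), float(np.sum(p ** 4))
          return 0.25 + 0.5 * s2 + 0.5 * s2 ** 2 - 0.25 * s4

      print("P(ID) across loci:    %.2e" % np.prod([pid(f) for f in loci.values()]))
      print("P(ID)sib across loci: %.2e" % np.prod([pid_sib(f) for f in loci.values()]))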

  3. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-08-04

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization

  4. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal
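
    The conventional construction mentioned above (standard deviation taken as one sixth of the range, with a PERT-style mean) can be sketched as follows; this illustrates the conventional method, not the new NASA Glenn approach, and the three input values are arbitrary.

      a, m, b = 1000.0, 1150.0, 1400.0          # minimum, most likely, maximum

      mean = (a + 4.0 * m + b) / 6.0            # PERT mean
      sd = (b - a) / 6.0                        # conventional assumption: sd = range / 6

      x = (mean - a) / (b - a)                  # mean of the standardized beta on [0, 1]
      v = (sd / (b - a)) ** 2                   # variance of the standardized beta
      alpha = x * (x * (1.0 - x) / v - 1.0)     # beta shape parameters
      beta = (1.0 - x) * (x * (1.0 - x) / v - 1.0)
      print(f"mean={mean:.1f}, sd={sd:.1f}, alpha={alpha:.2f}, beta={beta:.2f}")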

  5. Estimating debris-flow probability using fan stratigraphy, historic records, and drainage-basin morphology, Interstate 70 highway corridor, central Colorado, U.S.A

    USGS Publications Warehouse

    Coe, J.A.; Godt, J.W.; Parise, M.; Moscariello, A.; ,

    2003-01-01

    We have used stratigraphic and historic records of debris-flows to estimate mean recurrence intervals of past debris-flow events on 19 fans along the Interstate 70 highway corridor in the Front Range of Colorado. Estimated mean recurrence intervals were used in the Poisson probability model to estimate the probability of future debris-flow events on the fans. Mean recurrence intervals range from 7 to about 2900 years. Annual probabilities range from less than 0.1% to about 13%. A regression analysis of mean recurrence interval data and drainage-basin morphometry yields a regression model that may be suitable to estimate mean recurrence intervals on fans with no stratigraphic or historic records. Additional work is needed to verify this model. © 2003 Millpress.
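
    A worked example of the Poisson model referred to above: with mean recurrence interval R (years), the probability of at least one debris-flow event in an exposure window of t years is P = 1 - exp(-t/R). The recurrence intervals below span the range reported in the abstract.

      import math

      for R in (7.0, 100.0, 2900.0):            # mean recurrence interval (years)
          for t in (1.0, 50.0):                 # exposure window (years)
              p = 1.0 - math.exp(-t / R)
              print(f"R = {R:6.0f} yr, t = {t:4.0f} yr -> P(at least one event) = {p:.4f}")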

  6. Developing a Methodology for Eliciting Subjective Probability Estimates During Expert Evaluations of Safety Interventions: Application for Bayesian Belief Networks

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.

    2005-01-01

    The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potential hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested. One such approach uses expert judgments to develop Bayesian Belief Networks (BBNs) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBNs. The work involved joint efforts between Professor James Luxhoj from Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBNs was funded by NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The work described here was funded separately but supported the existing Rutgers program.

  7. Improving the estimation of detection probability and magnitude of completeness in strongly heterogeneous media, an application to acoustic emission (AE)

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Samira; Cesca, Simone; Hainzl, Sebastian; Kaiser, Diethelm; Becker, Dirk; Dahm, Torsten

    2013-06-01

    Reliable estimations of magnitude of completeness (Mc) are essential for a correct interpretation of seismic catalogues. The spatial distribution of Mc may be strongly variable and difficult to assess in mining environments, owing to the presence of galleries, cavities, fractured regions, porous media and different mineralogical bodies, as well as in consequence of inhomogeneous spatial distribution of the seismicity. We apply a 3-D modification of the probabilistic magnitude of completeness (PMC) method, which relies on the analysis of network detection capabilities. In our approach, the probability of detecting an event depends on its magnitude, source-receiver Euclidean distance and source-receiver direction. The suggested method is proposed for study of the spatial distribution of the magnitude of completeness in a mining environment and here is applied to a 2-month acoustic emission (AE) data set recorded at the Morsleben salt mine, Germany. The dense seismic network and the large data set, which includes more than one million events, enable detailed testing of the method. This method is proposed specifically for strongly heterogeneous media. Besides, it can also be used for specific network installations with sensors whose sensitivity depends on the direction of the incoming wave (e.g. some piezoelectric sensors). In the absence of strong heterogeneities, the standard PMC approach should be used. We show that the PMC estimations in mines strongly depend on the source-receiver direction, and cannot be correctly accounted for using a standard PMC approach. However, results can be improved when adopting the proposed 3-D modification of the PMC method. Our analysis of one central horizontal and vertical section yields a magnitude of completeness of about Mc ≈ 1 (AE magnitude) at the centre of the network, which increases up to Mc ≈ 4 at further distances outside the network; the best detection performance is estimated for a NNE-SSE elongated region, which

  8. Model approach to estimate the probability of accepting a lot of heterogeneously contaminated powdered food using different sampling strategies.

    PubMed

    Valero, Antonio; Pasquali, Frédérique; De Cesare, Alessandra; Manfreda, Gerardo

    2014-08-01

    Current sampling plans assume a random distribution of microorganisms in food. However, food-borne pathogens are estimated to be heterogeneously distributed in powdered foods. This spatial distribution, together with very low levels of contamination, raises concerns about the efficiency of current sampling plans for the detection of food-borne pathogens like Cronobacter and Salmonella in powdered foods such as powdered infant formula or powdered eggs. An alternative approach based on a Poisson distribution of the contaminated part of the lot (Habraken approach) was used in order to evaluate the probability of falsely accepting a contaminated lot of powdered food when different sampling strategies were simulated considering variables such as lot size, sample size, microbial concentration in the contaminated part of the lot and proportion of contaminated lot. The simulated results suggest that a sample size of 100 g or more requires the lowest number of samples to be tested in comparison with sample sizes of 10 or 1 g. Moreover, the number of samples to be tested greatly decreases if the microbial concentration is 1 CFU/g instead of 0.1 CFU/g or if the proportion of contamination is 0.05 instead of 0.01. Mean contaminations higher than 1 CFU/g or proportions higher than 0.05 did not affect the number of samples. The Habraken approach represents a useful tool for risk management in order to design a fit-for-purpose sampling plan for the detection of low levels of food-borne pathogens in heterogeneously contaminated powdered food. However, it must be noted that, although effective in detecting pathogens, these sampling plans are difficult to apply because of the huge number of samples that need to be tested. Sampling alone does not seem to be an effective measure for controlling pathogens in powdered food.
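
    A simplified calculation in the spirit of the approach described above (the exact Habraken formulation may differ): each analytical sample falls in the contaminated fraction of the lot with probability f, a contaminated sample of m grams contains no cells with Poisson probability exp(-c m), and a contaminated lot is falsely accepted if every sample tests negative.

      import math

      def p_accept(n_samples, m_grams, conc_cfu_per_g, f_contam):
          # probability that all n samples test negative for a contaminated lot
          p_negative = (1.0 - f_contam) + f_contam * math.exp(-conc_cfu_per_g * m_grams)
          return p_negative ** n_samples

      for m in (1.0, 10.0, 100.0):
          for c in (0.1, 1.0):
              p = p_accept(n_samples=10, m_grams=m, conc_cfu_per_g=c, f_contam=0.05)
              print(f"sample size {m:5.0f} g, {c:3.1f} CFU/g -> P(falsely accept lot) = {p:.3f}")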

  9. Realistic Sensor Tasking Strategies

    NASA Astrophysics Data System (ADS)

    Frueh, C.; Fiedler, H.; Herzog, J.

    2016-09-01

    Efficient sensor tasking is a crucial step in building up and maintaining a catalog of space objects at the highest possible orbit quality. Sensor resources are limited; sensor location and setup (hardware and processing software) influence the quality of observations for initial orbit determination or orbit improvement that can be obtained. Furthermore, improved sensing capabilities are expected to lead to an increase in the number of objects that are sought to be maintained in a catalog, easily reaching over 100,000 objects. Sensor tasking methods hence need to be computationally efficient in order to be successfully applied to operational systems, and need to take into account realistic constraints such as limited visibility of objects, time-varying probability of detection, and the specific software and hardware capabilities of the individual sensors. This paper shows how to formulate sensor tasking as an optimization problem and introduces a new method that provides fast, computationally efficient, near-optimal sensor tasking solutions in real time. Simulations are performed using the USSTRATCOM TLE catalog of all geosynchronous objects. The results are compared to state-of-the-art observation strategies.
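
    The abstract does not spell out the optimization formulation, but a minimal greedy baseline conveys the flavour of computationally cheap tasking: at every time step the sensor observes the visible object with the largest expected information gain, here taken as a staleness or covariance score weighted by the detection probability. All array names and the scoring rule are assumptions made for illustration, not the paper's method.

      import numpy as np

      def greedy_tasking(score, visible, p_detect):
          # score, visible, p_detect: hypothetical (n_steps, n_objects) arrays
          # holding an information-gain score, a visibility mask and the
          # time-varying probability of detection for each object.
          gain = np.where(visible, score * p_detect, -np.inf)
          return gain.argmax(axis=1)   # object index to observe at each time step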

  10. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20 to 300, and 0.017 to 1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach < probability obtained with the gradient stochastic approach ≤ probability predicted by Davis and Stoll < probability predicted by Martin et al. The differences are explained by the positive bias of the Martin equation and the lower average resolution observed for the isocratic simulations compared to the gradient simulations with the same peak capacity. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., P(HPLC) = 31.2% versus P(SE-LC) = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported, with <1.5% error for saturation factors <0.20. Additional applications for the stochastic approach include isothermal and programmed-temperature gas chromatography.
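
    A compact Monte Carlo version of the constant-peak-width ("gradient") case gives a feel for how such probabilities are obtained: component positions are drawn uniformly on a normalized retention window, and a trial counts as a success when every adjacent pair is separated by at least one peak width (1/peak capacity). This follows the standard point-process picture of peak overlap and is only illustrative of, not identical to, the authors' simulations.

      import numpy as np

      rng = np.random.default_rng(0)

      def p_success(n_components, peak_capacity, n_trials=200_000):
          # Retention positions uniform on [0, 1]; full separation requires
          # every adjacent gap to be at least 1/peak_capacity.
          x = np.sort(rng.random((n_trials, n_components)), axis=1)
          gaps = np.diff(x, axis=1)
          return np.mean(np.all(gaps >= 1.0 / peak_capacity, axis=1))

      print(p_success(12, 100))   # e.g. 12 components, peak capacity 100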

  11. Converged three-dimensional quantum mechanical reaction probabilities for the F + H2 reaction on a potential energy surface with realistic entrance and exit channels and comparisons to results for three other surfaces

    NASA Technical Reports Server (NTRS)

    Lynch, Gillian C.; Halvick, Philippe; Zhao, Meishan; Truhlar, Donald G.; Yu, Chin-Hui; Kouri, Donald J.; Schwenke, David W.

    1991-01-01

    Accurate three-dimensional quantum mechanical reaction probabilities are presented for the reaction F + H2 yields HF + H on the new global potential energy surface 5SEC for total angular momentum J = 0 over a range of translational energies from 0.15 to 4.6 kcal/mol. It is found that the v′ = 3 HF vibrational product state has a threshold as low as that for v′ = 2.

  12. Estimating present day extreme water level exceedance probabilities around the coastline of Australia: tides, extra-tropical storm surges and mean sea level

    NASA Astrophysics Data System (ADS)

    Haigh, Ivan D.; Wijeratne, E. M. S.; MacPherson, Leigh R.; Pattiaratchi, Charitha B.; Mason, Matthew S.; Crompton, Ryan P.; George, Steve

    2014-01-01

    The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. Therefore it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risk of catastrophic structural failures due to under-design, or expensive wastes due to over-design, is minimised. This paper estimates for the first time present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and has been forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each coastal grid point, extreme value distributions have been fitted to the derived time series of annual maxima and of the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here and more accurately include tropical cyclone-induced surges in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical
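
    For a single grid point of such a hindcast, fitting an extreme value distribution to the annual maxima and reading off exceedance probabilities can be sketched as follows; the water-level values are placeholders, and the generalized extreme value family stands in for whichever distribution the study fits (the r-largest variant is not shown).

      import numpy as np
      from scipy import stats

      # Annual maximum water levels (m) at one grid point; placeholder values.
      annual_maxima = np.array([1.92, 2.10, 1.85, 2.31, 2.05, 1.97, 2.44, 2.15])

      shape, loc, scale = stats.genextreme.fit(annual_maxima)

      # Water level with a 1 % annual exceedance probability (100-year level):
      level_100yr = stats.genextreme.ppf(1.0 - 0.01, shape, loc=loc, scale=scale)
      # Annual exceedance probability of a given level, e.g. 2.5 m:
      p_exceed = stats.genextreme.sf(2.5, shape, loc=loc, scale=scale)
      print(level_100yr, p_exceed)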

  13. Simulation Study of Estimators for the Survival Probability of a First Passage Time for a Semi-Markov Process Using Censored Data

    DTIC Science & Technology

    1988-09-01

    Finite state space semi-Markov processes find application in many areas. Often interest centers on whether or not the process has hit a particular state before a time t. This thesis reports results of a simulation study of the small-sample behavior of three estimators of the survival probability of a first passage time for a semi-Markov process using censored data. Keywords: Semi-Markov; Kaplan-Meier estimator; Confidence interval; Jackknife; Theses.

  14. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than relying on the error ellipses centered on the stroke. This process takes the bivariate Gaussian probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
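
    The calculation amounts to integrating a bivariate Gaussian density over a disk centred on the point of interest; a Monte Carlo version is easy to sketch, although the operational technique integrates the density directly. The coordinates and covariance below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      def prob_within_radius(stroke_xy, cov, point_xy, radius, n=1_000_000):
          # Draw candidate true locations from the bivariate Gaussian implied by
          # the location error ellipse and count how many fall inside the disk.
          samples = rng.multivariate_normal(stroke_xy, cov, size=n)
          d2 = np.sum((samples - np.asarray(point_xy)) ** 2, axis=1)
          return np.mean(d2 <= radius ** 2)

      # Stroke reported 3 km east of the facility, 1-km standard errors,
      # 5-km radius of concern (all values hypothetical):
      print(prob_within_radius([3.0, 0.0], np.diag([1.0, 1.0]), [0.0, 0.0], 5.0))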

  15. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies.

    PubMed

    Austin, Peter C; Stuart, Elizabeth A

    2015-12-10

    The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data.
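
    A minimal sketch of the central quantitative diagnostic, the weighted standardized difference of a covariate between treatment groups in the IPTW-weighted sample, is given below; variable names are assumptions, and the authors' paper should be consulted for the exact variance pooling they recommend.

      import numpy as np

      def iptw_weights(treated, propensity):
          # Standard inverse probability of treatment weights.
          return np.where(treated == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))

      def weighted_standardized_difference(x, treated, weights):
          # Difference in weighted means divided by a pooled standard deviation;
          # values near zero indicate balance on this covariate.
          x, treated, w = map(np.asarray, (x, treated, weights))
          m1 = np.average(x[treated == 1], weights=w[treated == 1])
          m0 = np.average(x[treated == 0], weights=w[treated == 0])
          v1 = np.average((x[treated == 1] - m1) ** 2, weights=w[treated == 1])
          v0 = np.average((x[treated == 0] - m0) ** 2, weights=w[treated == 0])
          return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)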

  16. Hate Crimes and Stigma-Related Experiences among Sexual Minority Adults in the United States: Prevalence Estimates from a National Probability Sample

    ERIC Educational Resources Information Center

    Herek, Gregory M.

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or…

  17. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.

  18. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  19. Estimates of mean consequences and confidence bounds on the mean associated with low-probability seismic events in total system performance assessments

    SciTech Connect

    Pensado, Osvaldo; Mancillas, James

    2007-07-01

    An approach is described for estimating mean consequences, and confidence bounds on the mean, for seismic events with a low probability of breaching components of the engineered barrier system. The approach is aimed at complementing total system performance assessment models used to understand the consequences of scenarios leading to radionuclide releases in geologic nuclear waste repository systems. The objective is to develop an efficient approach to estimate mean consequences associated with seismic events of low probability, employing data from a performance assessment model with a modest number of Monte Carlo realizations. The derived equations and formulas were tested with results from a specific performance assessment model. The derived equations appear to be one method to estimate mean consequences without having to use a large number of realizations. (authors)

  20. A comparison of conventional capture versus PIT reader techniques for estimating survival and capture probabilities of big brown bats (Eptesicus fuscus)

    USGS Publications Warehouse

    Ellison, L.E.; O'Shea, T.J.; Neubaum, D.J.; Neubaum, M.A.; Pearce, R.D.; Bowen, R.A.

    2007-01-01

    We compared conventional capture (primarily mist nets and harp traps) and passive integrated transponder (PIT) tagging techniques for estimating capture and survival probabilities of big brown bats (Eptesicus fuscus) roosting in buildings in Fort Collins, Colorado. A total of 987 female adult and juvenile bats were captured and marked by subdermal injection of PIT tags during the summers of 2001-2005 at five maternity colonies in buildings. Openings to roosts were equipped with PIT hoop-style readers, and exit and entry of bats were passively monitored on a daily basis throughout the summers of 2002-2005. PIT readers 'recaptured' adult and juvenile females more often than conventional capture events at each roost. Estimates of annual capture probabilities for all five colonies were on average twice as high when estimated from PIT reader data (P = 0.93-1.00) than when derived from conventional techniques (P = 0.26-0.66), and as a consequence annual survival estimates were more precisely estimated when using PIT reader encounters. Short-term, daily capture estimates were also higher using PIT readers than conventional captures. We discuss the advantages and limitations of using PIT tags and passive encounters with hoop readers vs. conventional capture techniques for estimating these vital parameters in big brown bats. © Museum and Institute of Zoology PAS.

  1. A joint probability approach using a 1-D hydrodynamic model for estimating high water level frequencies in the Lower Rhine Delta

    NASA Astrophysics Data System (ADS)

    Zhong, H.; van Overloop, P.-J.; van Gelder, P. H. A. J. M.

    2013-07-01

    The Lower Rhine Delta, a transitional area between the Rivers Rhine and Meuse and the North Sea, is at risk of flooding induced by infrequent storm surge or upstream flooding events, or by rarer events in which both coincide. A joint probability analysis of the astronomical tide, the wind-induced storm surge, the Rhine flow and the Meuse flow at the boundaries is established in order to produce the joint probability distribution of potential flood events. Three individual joint probability distributions are established, corresponding to three potential flooding causes: storm surges and normal Rhine discharges, normal sea levels and high Rhine discharges, and storm surges and high Rhine discharges. For each category, its corresponding joint probability distribution is applied in order to stochastically simulate a large number of scenarios. These scenarios can be used as inputs to a deterministic 1-D hydrodynamic model in order to estimate the high water level frequency curves at the transitional locations. The results present the exceedance probability of the present design water level for the economically important cities of Rotterdam and Dordrecht. The calculated exceedance probability is evaluated and compared to the governmental norm. Moreover, the impact of climate change on the high water level frequency curves is quantified for the year 2050 in order to assist in decisions regarding the adaptation of the operational water management system and the flood defense system.

  2. Estimation of the age-specific per-contact probability of Ebola virus transmission in Liberia using agent-based simulations

    NASA Astrophysics Data System (ADS)

    Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios

    2016-06-01

    Based on multiscale agent-based computations we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network as well as the per-contact transmission probabilities by age group we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.

  3. What makes a message real? The effects of perceived realism of alcohol- and drug-related messages on personal probability estimation.

    PubMed

    Cho, Hyunyi; Shen, Lijiang; Wilson, Kari M

    2013-03-01

    Perceived lack of realism in alcohol advertising messages promising positive outcomes, and in antialcohol and antidrug messages portraying negative outcomes of alcohol consumption, has been a cause for public health concern. This study examined the effects of perceived realism dimensions on personal probability estimation through identification and message minimization. Data collected from college students in the U.S. Midwest in 2010 (N = 315) were analyzed with multilevel structural equation modeling. Plausibility and narrative consistency mitigated message minimization, but they did not influence identification. Factuality and perceptual quality influenced both message minimization and identification, but their effects were smaller than those of typicality. Typicality was the strongest predictor of probability estimation. Implications of the results and suggestions for future research are provided.

  4. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  5. Estimating species phylogeny from gene-tree probabilities despite incomplete lineage sorting: an example from Melanoplus grasshoppers.

    PubMed

    Carstens, Bryan C; Knowles, L Lacey

    2007-06-01

    Estimating phylogenetic relationships among closely related species can be extremely difficult when there is incongruence among gene trees and between the gene trees and the species tree. Here we show that incorporating a model of the stochastic loss of gene lineages by genetic drift into the phylogenetic estimation procedure can provide a robust estimate of species relationships, despite widespread incomplete sorting of ancestral polymorphism. This approach is applied to a group of montane Melanoplus grasshoppers for which genealogical discordance among loci and incomplete lineage sorting obscures any obvious phylogenetic relationships among species. Unlike traditional treatments where gene trees estimated using standard phylogenetic methods are implicitly equated with the species tree, with the coalescent-based approach the species tree is modeled probabilistically from the estimated gene trees. The estimated species phylogeny (the ESP) is calculated for the grasshoppers from multiple gene trees reconstructed for nuclear loci and a mitochondrial gene. This empirical application is coupled with a simulation study to explore the performance of the coalescent-based approach. Specifically, we test the accuracy of the ESP given the data based on analyses of simulated data matching the multilocus data collected in Melanoplus (i.e., data were simulated for each locus with the same number of base pairs and locus-specific mutational models). The results of the study show that ESPs can be computed using the coalescent-based approach long before reciprocal monophyly has been achieved, and that these statistical estimates are accurate. This contrasts with analyses of the empirical data collected in Melanoplus and simulated data based on concatenation of multiple loci, for which the incomplete lineage sorting of recently diverged species posed significant problems. The strengths and potential challenges associated with incorporating an explicit model of gene

  6. Estimating the Per-Contact Probability of Infection by Highly Pathogenic Avian Influenza (H7N7) Virus during the 2003 Epidemic in The Netherlands

    PubMed Central

    Ssematimba, Amos; Elbers, Armin R. W.; Hagenaars, Thomas J.; de Jong, Mart C. M.

    2012-01-01

    Estimates of the per-contact probability of transmission between farms of Highly Pathogenic Avian Influenza virus of H7N7 subtype during the 2003 epidemic in the Netherlands are important for the design of better control and biosecurity strategies. We used standardized data collected during the epidemic and a model to extract data for untraced contacts based on the daily number of infectious farms within a given distance of a susceptible farm. With these data, we used a maximum likelihood estimation approach to estimate the transmission probabilities for the individual contact types, both traced and untraced. The estimated probabilities of virus transmission, conditional on the contact originating from an infectious farm, were: 0.000057 per infectious farm within 1 km per day, 0.000413 per infectious farm between 1 and 3 km per day, 0.0000895 per infectious farm between 3 and 10 km per day, 0.0011 per crisis organisation contact, 0.0414 per feed delivery contact, 0.308 per egg transport contact, 0.133 per other-professional contact and 0.246 per rendering contact. We validate these outcomes against literature data on virus genetic sequences for outbreak farms. These estimates can be used to inform further studies on the role that improved biosecurity between contacts and/or contact frequency reduction can play in eliminating between-farm spread of the virus during future epidemics. The findings also highlight the need to (1) understand the routes underlying the infections without traced contacts and (2) review whether the contact-tracing protocol is exhaustive in relation to all the farm’s day-to-day activities and practices.
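
    A generic version of the maximum likelihood step can be sketched as follows: each contact of type k from an infectious farm transmits independently with probability p_k, so a farm escapes infection with probability prod_k (1 - p_k)^n_k, and the p_k are chosen to maximize the likelihood of the observed infected/escaped outcomes. The day-by-day bookkeeping and the untraced-contact model used in the study are omitted, and the data below are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_likelihood(logit_p, contacts, infected):
          # contacts: (n_farms, n_contact_types) counts of contacts received from
          # infectious farms; infected: 0/1 outcome per farm.
          p = 1.0 / (1.0 + np.exp(-logit_p))        # keep probabilities in (0, 1)
          log_escape = contacts @ np.log1p(-p)      # log P(farm escapes infection)
          ll = np.where(infected == 1,
                        np.log1p(-np.exp(np.minimum(log_escape, -1e-12))),  # avoid log(0)
                        log_escape)
          return -ll.sum()

      contacts = np.array([[2, 0, 1], [0, 1, 0], [5, 2, 0], [1, 0, 0]])  # hypothetical
      infected = np.array([1, 0, 1, 0])
      fit = minimize(neg_log_likelihood, x0=np.zeros(3), args=(contacts, infected))
      print(1.0 / (1.0 + np.exp(-fit.x)))           # estimated per-contact probabilities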

  7. Estimated probabilities, volumes, and inundation-area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for

  8. Realistic collective nuclear Hamiltonian

    NASA Astrophysics Data System (ADS)

    Dufour, Marianne; Zuker, Andrés P.

    1996-10-01

    The residual part of the realistic forces, obtained after extracting the monopole terms responsible for bulk properties, is strongly dominated by pairing and quadrupole interactions, with important στ·στ, octupole, and hexadecapole contributions. Their forms retain the simplicity of the traditional pairing plus multipole models, while eliminating their flaws through a normalization mechanism dictated by a universal A^(-1/3) scaling. Coupling strengths and effective charges are calculated and shown to agree with empirical values. Comparisons between different realistic interactions confirm the claim that they are very similar.

  9. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.

  10. Estimation of phase signal change in neuronal current MRI for the evoked response of tactile detection with a realistic somatosensory laminar network model.

    PubMed

    BagheriMofidi, Seyed Mehdi; Pouladian, Majid; Jameie, Seyed Behnamedin; Abbaspour Tehrani-Fard, Ali

    2016-09-01

    The magnetic field generated by neuronal activity could alter magnetic resonance imaging (MRI) signals, but the detection of such signals is under debate. Previous research proposed that the magnitude signal change is below currently detectable levels, but the phase signal change (PSC) may be measurable with current MRI systems. Optimal imaging parameters, such as echo time, voxel size and external field direction, could increase the probability of detecting this small signal change. We simulate a voxel of a cortical column to determine the effect of such parameters on the PSC signal. We extended a laminar network model for the somatosensory cortex to find the neuronal current in each segment of the pyramidal neurons (PN). 60,000 PNs of the simulated network were positioned randomly in a voxel. The Biot-Savart law was applied to calculate the neuronal magnetic field and the additional phase. The procedure was repeated for eleven neuronal arrangements in the voxel. The variation of the PSC signal with echo time and voxel size was assessed. The simulated results show that the PSC signal increases with echo time, especially 100/80 ms after the stimulus for the gradient echo/spin echo sequence. It can be up to 0.1 mrad for echo time = 175 ms and voxel size = 1.48 × 1.48 × 2.18 mm^3. With echo times of less than 25 ms after the stimulus, only the effects of physiological noise on the PSC signal were acquired. The absolute value of the signal increased with decreasing voxel size, but its components had complex variation. An external field orthogonal to the local surface of the cortex maximizes the signal. The expected PSC signal for tactile detection in the somatosensory cortex increases with echo time and shows no oscillation.
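
    The core of the forward calculation, a Biot-Savart field estimate for a short current segment and the phase it adds over the echo time, can be sketched as below; the segment geometry, dipole current and echo time are placeholders, and the full model sums contributions from tens of thousands of segments across the laminar network.

      import numpy as np

      MU0 = 4e-7 * np.pi        # vacuum permeability (T m / A)
      GAMMA = 2.675e8           # proton gyromagnetic ratio (rad / s / T)

      def segment_field(r_obs, seg_start, seg_end, current):
          # Biot-Savart field of a short straight current segment, treated as a
          # single current element located at its midpoint (adequate when the
          # segment is short compared with the observation distance).
          dl = np.asarray(seg_end, float) - np.asarray(seg_start, float)
          mid = (np.asarray(seg_end, float) + np.asarray(seg_start, float)) / 2.0
          r = np.asarray(r_obs, float) - mid
          return MU0 / (4.0 * np.pi) * current * np.cross(dl, r) / np.linalg.norm(r) ** 3

      # Phase accrued over the echo time from the z-component of the extra field,
      # assumed constant over TE (all numbers hypothetical):
      b = segment_field([0.0, 1e-4, 0.0], [0.0, 0.0, 0.0], [5e-4, 0.0, 0.0], 1e-9)
      te = 0.1                  # s
      print(GAMMA * b[2] * te)  # additional phase in radians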

  11. Realistic and Schematic Visuals.

    ERIC Educational Resources Information Center

    Heuvelman, Ard

    1996-01-01

    A study examined three different visual formats (studio presenter only, realistic visuals, or schematic visuals) of an educational television program. Recognition and recall of the abstract subject matter were measured in 101 adult viewers directly after the program and 3 months later. The schematic version yielded better recall of the program,…

  12. A Review of Mycotoxins in Food and Feed Products in Portugal and Estimation of Probable Daily Intakes.

    PubMed

    Abrunhosa, Luís; Morales, Héctor; Soares, Célia; Calado, Thalita; Vila-Chã, Ana Sofia; Pereira, Martinha; Venâncio, Armando

    2016-01-01

    Mycotoxins are toxic secondary metabolites produced by filamentous fungi that occur naturally in agricultural commodities worldwide. Aflatoxins, ochratoxin A, patulin, fumonisins, zearalenone, trichothecenes, and ergot alkaloids are presently the most important for food and feed safety. These compounds are produced by several species that belong to the Aspergillus, Penicillium, Fusarium, and Claviceps genera and can be carcinogenic, mutagenic, teratogenic, cytotoxic, neurotoxic, nephrotoxic, estrogenic, and immunosuppressant. Human and animal exposure to mycotoxins is generally assessed by taking into account data on the occurrence of mycotoxins in food and feed as well as data on the consumption patterns of the concerned population. This evaluation is crucial to support measures to reduce consumer exposure to mycotoxins. This work reviews the occurrence and levels of mycotoxins in Portuguese food and feed to provide a global overview of this issue in Portugal. With the information collected, the exposure of the Portuguese population to those mycotoxins is assessed, and the estimated dietary intakes are presented.
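
    The exposure step described above is essentially an occurrence-times-consumption calculation normalized by body weight; a toy version is sketched below with placeholder foods, contamination levels and consumption figures rather than the values compiled in the review.

      # Probable daily intake (PDI) in ng per kg body weight per day.
      foods = {
          # food item: (mean mycotoxin level in ng/g, daily consumption in g/day)
          "maize products": (2.0, 50.0),
          "wine": (0.2, 200.0),
      }
      body_weight_kg = 70.0   # assumed adult body weight

      pdi = sum(level * intake for level, intake in foods.values()) / body_weight_kg
      print(f"estimated PDI: {pdi:.2f} ng/kg bw/day")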

  13. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has "hung up." That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up." The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-up case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e. not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  14. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-Down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has hung up. That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up". The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-up case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e. not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  15. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Luna-Glob Russian mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC camera at a resolution of ~0.5 m. Owing to the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images, which is why the described method has been suggested. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and the values of these portions were converted to the occurrence of slopes (in this case, at the 3.5-m baseline) using a calibration based on the surface characteristics of the Lunokhod-1 study area. For this area, a digital terrain model of ~0.5-m resolution and 13 LROC NAC images obtained at different elevations of the Sun are available. From the results of the measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the baseline of 3.5 m is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity is available for the regions of interest, but high-resolution images taken at different Sun elevations are.

  16. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    PubMed

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.

  17. On the consideration of scaling properties of extreme rainfall in Madrid (Spain) for developing a generalized intensity-duration-frequency equation and assessing probable maximum precipitation estimates

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel

    2016-11-01

    The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
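
    The statistical PMP estimate referred to here is the familiar frequency-factor form (mean of the annual maximum series plus k_m times its standard deviation); a minimal sketch follows, with a placeholder rainfall series and an assumed k_m read from an envelope curve such as the one derived in the study.

      import numpy as np

      def pmp_statistical(annual_max_series, k_m):
          # Hershfield-type estimate: PMP = mean + k_m * standard deviation of
          # the annual maximum series for the duration of interest.
          x = np.asarray(annual_max_series, dtype=float)
          return x.mean() + k_m * x.std(ddof=1)

      # 24-h annual maxima in mm (placeholder values), k_m from an envelope curve:
      print(pmp_statistical([42.1, 55.3, 38.7, 61.0, 47.9], k_m=15.0))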

  18. Estimating the probability distribution of the incubation period for rabies using data from the 1948-1954 rabies epidemic in Tokyo.

    PubMed

    Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K

    2016-01-01

    Data from 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to the different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days) and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution, with a mean value of 27.30 (95% CI: 23.46-31.55) days and a standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs.
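
    Fitting candidate distributions by maximum likelihood and ranking them by AIC, as done for the incubation periods, can be sketched with scipy; the incubation periods below are placeholders, not the 98 case records, and the location parameter is simply fixed at zero.

      import numpy as np
      from scipy import stats

      days = np.array([12., 18., 25., 31., 22., 40., 15., 55., 27., 33.])  # placeholder data

      def aic(dist, data):
          # Maximum likelihood fit with the location fixed at zero, then
          # AIC = 2k - 2 log L with k free parameters.
          params = dist.fit(data, floc=0)
          k = len(params) - 1
          loglik = np.sum(dist.logpdf(data, *params))
          return 2 * k - 2 * loglik

      for dist in (stats.lognorm, stats.gamma, stats.weibull_min):
          print(dist.name, round(aic(dist, days), 2))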

  19. Estimating reach-specific fish movement probabilities in rivers with a Bayesian state-space model: application to sea lamprey passage and capture at dams

    USGS Publications Warehouse

    Holbrook, Christopher M.; Johnson, Nicholas S.; Steibel, Juan P.; Twohey, Michael B.; Binder, Thomas R.; Krueger, Charles C.; Jones, Michael L.

    2014-01-01

    Improved methods are needed to evaluate barriers and traps for control and assessment of invasive sea lamprey (Petromyzon marinus) in the Great Lakes. A Bayesian state-space model provided reach-specific probabilities of movement, including trap capture and dam passage, for 148 acoustic tagged invasive sea lamprey in the lower Cheboygan River, Michigan, a tributary to Lake Huron. Reach-specific movement probabilities were combined to obtain estimates of spatial distribution and abundance needed to evaluate a barrier and trap complex for sea lamprey control and assessment. Of an estimated 21 828 – 29 300 adult sea lampreys in the river, 0%–2%, or 0–514 untagged lampreys, could have passed upstream of the dam, and 46%–61% were caught in the trap. Although no tagged lampreys passed above the dam (0/148), our sample size was not sufficient to consider the lock and dam a complete barrier to sea lamprey. Results also showed that existing traps are in good locations because 83%–96% of the population was vulnerable to existing traps. However, only 52%–69% of lampreys vulnerable to traps were caught, suggesting that traps can be improved. The approach used in this study was a novel use of Bayesian state-space models that may have broader applications, including evaluation of barriers for other invasive species (e.g., Asian carp (Hypophthalmichthys spp.)) and fish passage structures for other diadromous fishes.

  20. Estimated probability of becoming a case of drug dependence in relation to duration of drug-taking experience: a functional analysis approach.

    PubMed

    Vsevolozhskaya, Olga A; Anthony, James C

    2016-06-29

    Measured as elapsed time from first use to dependence syndrome onset, the estimated "induction interval" for cocaine is thought to be short relative to the cannabis interval, but little is known about the risk of becoming dependent during the first months after onset of use. Virtually all published estimates for this facet of drug dependence epidemiology are from life histories elicited years after first use. To improve estimation, we turn to new month-wise data from nationally representative samples of newly incident drug users identified via probability sampling and confidential computer-assisted self-interviews for the United States National Surveys on Drug Use and Health, 2004-2013. Standardized modules assessed first and most recent use, and dependence syndromes, for each drug subtype. A four-parameter Hill function depicts the drug dependence transition for subgroups defined by units of elapsed time from first to most recent use, with an expectation of greater dependence transitions for cocaine than for cannabis. This study's novel estimates for cocaine users one month after first use show 2-4% with cocaine dependence; 12-17% are dependent when use has persisted. Corresponding cannabis estimates are 0-1% after one month, but 10-23% when use persists. Duration or persistence of cannabis smoking beyond an initial interval of a few months of use seems to be a signal of noteworthy risk for, or co-occurrence of, rapid-onset cannabis dependence, not too distant from the cocaine estimates, when we sort newly incident users into subgroups defined by elapsed time from first to most recent use. Copyright © 2016 John Wiley & Sons, Ltd.
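
    The four-parameter Hill function mentioned above has the usual sigmoid form; the sketch below shows it with illustrative parameter values (floor, ceiling, half-transition time and steepness), not the estimates fitted to the survey data.

      import numpy as np

      def hill(t, bottom, top, t50, n):
          # Four-parameter Hill function: estimated probability of having become
          # dependent as a function of elapsed time t (months) from first to
          # most recent use.
          t = np.asarray(t, dtype=float)
          return bottom + (top - bottom) * t ** n / (t50 ** n + t ** n)

      months = np.array([1, 3, 6, 12, 24])
      print(hill(months, bottom=0.02, top=0.15, t50=6.0, n=2.0))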

  1. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
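
    One common form of the drainage-area adjustment mentioned at the end of the abstract scales the streamgage discharge by the drainage-area ratio raised to an exponent (often taken from the regional regression); the sketch below shows that generic form with hypothetical numbers, not the report's specific weighting procedure.

      def drainage_area_adjustment(q_gage, area_gage, area_ungaged, exponent):
          # Transfer an AEP discharge from a streamgage to a nearby ungaged site
          # on the same stream using the drainage-area ratio.
          return q_gage * (area_ungaged / area_gage) ** exponent

      # e.g. a 1-percent AEP discharge of 850 ft^3/s at a gage draining 120 mi^2,
      # transferred to an ungaged site draining 95 mi^2 (values hypothetical):
      print(drainage_area_adjustment(850.0, 120.0, 95.0, exponent=0.8))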

  2. Probability estimates of heavy precipitation events in a flood-prone central-European region with enhanced influence of Mediterranean cyclones

    NASA Astrophysics Data System (ADS)

    Kysely, J.; Picek, J.

    2007-07-01

    For synoptic-climatological reasons as well as a specific configuration of mountain ranges, the northeast part of the Czech Republic is an area with an enhanced influence of low-pressure systems of Mediterranean origin. They are associated with an upper-level advection of warm and moist air and often lead to heavy precipitation events. Particularities of this area are evaluated using a regional frequency analysis. The northeast region is identified as a homogeneous one according to tests on statistical characteristics of precipitation extremes (annual maxima of 1- to 7-day amounts), and the observed distributions follow a different model compared to the surrounding area. Noteworthy is the heavy tail of the distributions of multi-day events, reflected also in the inapplicability of the L-moment estimators for the general four-parameter kappa distribution utilized in Monte Carlo simulations in the regional homogeneity and goodness-of-fit tests. We overcome this issue by using maximum likelihood estimation. The Generalized Logistic distribution is identified as the most suitable one for modelling annual maxima; the advantages of the regional over the local approach to frequency analysis consist mainly in the reduced uncertainty of the growth curves and design value estimates. The regional growth curves are used to derive probabilities of recurrence of recent heavy precipitation events associated with major floods in the Odra river basin.

  3. Cell survival fraction estimation based on the probability densities of domain and cell nucleus specific energies using improved microdosimetric kinetic models.

    PubMed

    Sato, Tatsuhiko; Furusawa, Yoshiya

    2012-10-01

    Estimation of the survival fractions of cells irradiated with various particles over a wide linear energy transfer (LET) range is of great importance in the treatment planning of charged-particle therapy. Two computational models were developed for estimating survival fractions based on the concept of the microdosimetric kinetic model. They were designated as the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models. The former model takes into account the stochastic natures of both domain and cell nucleus specific energies, whereas the latter model represents the stochastic nature of domain specific energy by its approximated mean value and variance to reduce the computational time. The probability densities of the domain and cell nucleus specific energies are the fundamental quantities for expressing survival fractions in these models. These densities are calculated using the microdosimetric and LET-estimator functions implemented in the Particle and Heavy Ion Transport code System (PHITS) in combination with the convolution or database method. Both the double-stochastic microdosimetric kinetic and stochastic microdosimetric kinetic models can reproduce the measured survival fractions for high-LET and high-dose irradiations, whereas a previously proposed microdosimetric kinetic model predicts lower values for these fractions, mainly due to intrinsic ignorance of the stochastic nature of cell nucleus specific energies in the calculation. The models we developed should contribute to a better understanding of the mechanism of cell inactivation, as well as improve the accuracy of treatment planning of charged-particle therapy.

  4. A realistic lattice example

    SciTech Connect

    Courant, E.D.; Garren, A.A.

    1985-10-01

    A realistic, distributed interaction region (IR) lattice has been designed that includes new components discussed in the June 1985 lattice workshop. Unlike the test lattices, the lattice presented here includes utility straights and the mechanism for crossing the beams in the experimental straights. Moreover, both the phase trombones and the dispersion suppressors contain the same bending as the normal cells. Vertically separated beams and 6 Tesla, 1-in-1 magnets are assumed. Since the cells are 200 meters long, and have 60 degree phase advance, this lattice has been named RLD1, in analogy with the corresponding test lattice, TLD1. The quadrupole gradient is 136 tesla/meter in the cells, and has similar values in other quadrupoles except those in the IRs, where the maximum gradient is 245 tesla/meter. RLD1 has distributed IRs; however, clustered realistic lattices can easily be assembled from the same components, as was recently done in a version that utilizes the same type of experimental and utility straights as those of RLD1.

  5. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    -SISCOR Working Group. On the basis of this consensual logic tree, median probabilities of occurrence of M≥6 events were computed for the region of study. Time-dependent models (Brownian passage time and Weibull probability distributions) were also explored. The probability of an M≥6.0 event is found to be greater in the western region compared to the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones) cover the median values estimated following the fault-based approach. In contrast, the fault-based approach in this region is still affected by a high degree of uncertainty, because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.

  6. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have too broad and rounded peaks to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
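
    The fit-and-check procedure described here is easy to reproduce with modern tools; the sketch below uses synthetic inter-flash times in place of the 1975-1976 storm data, together with scipy's maximum-likelihood fits and Kolmogorov-Smirnov test.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      # Synthetic stand-in for observed times between flashes (seconds).
      times = rng.lognormal(mean=2.0, sigma=0.8, size=400)

      candidates = {
          "gamma":     stats.gamma,
          "lognormal": stats.lognorm,
          "rayleigh":  stats.rayleigh,
      }

      for name, dist in candidates.items():
          # Maximum-likelihood fit; location fixed at zero, as is natural for waiting times.
          params = dist.fit(times, floc=0)
          ks_stat, p_value = stats.kstest(times, dist.cdf, args=params)
          print(f"{name:9s}  KS D = {ks_stat:.3f}  p = {p_value:.3f}")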

  7. Hate crimes and stigma-related experiences among sexual minority adults in the United States: prevalence estimates from a national probability sample.

    PubMed

    Herek, Gregory M

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.

  8. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to be highly biased (+90%-110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
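
    For reference, the default power-law relationship quoted above converts reflectivity to rain rate as sketched below; the reflectivity values are arbitrary examples, and the 36.5-dBZ cutoff is applied only to illustrate the thresholding step discussed in the abstract.

      import numpy as np

      def rain_rate_from_dbz(dbz, a=300.0, b=1.4, threshold_dbz=None):
          """Invert Z = a * R**b (Z in mm^6 m^-3, R in mm/h) for reflectivity given
          in dBZ; optionally zero out echoes below a reflectivity threshold."""
          z_linear = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
          rate = (z_linear / a) ** (1.0 / b)
          if threshold_dbz is not None:
              rate = np.where(np.asarray(dbz) >= threshold_dbz, rate, 0.0)
          return rate

      for dbz in (20, 30, 40, 50):
          print(dbz, "dBZ ->", round(float(rain_rate_from_dbz(dbz, threshold_dbz=36.5)), 2), "mm/h")

    The threshold simply zeroes echoes below 36.5 dBZ before conversion; in the study the threshold was derived from the PMM rather than chosen by hand.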

  9. Estimated Probability of Post-Wildfire Debris-Flow Occurrence and Estimated Volume of Debris Flows from a Pre-Fire Analysis in the Three Lakes Watershed, Grand County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Bossong, Clifford R.; Litke, David W.; Viger, Roland J.; Rupert, Michael G.; Char, Stephen J.

    2008-01-01

    Debris flows pose substantial threats to life, property, infrastructure, and water resources. Post-wildfire debris flows may be of catastrophic proportions compared to debris flows occurring in unburned areas. During 2006, the U.S. Geological Survey (USGS), in cooperation with the Northern Colorado Water Conservancy District, initiated a pre-wildfire study to determine the potential for post-wildfire debris flows in the Three Lakes watershed, Grand County, Colorado. The objective was to estimate the probability of post-wildfire debris flows and to estimate the approximate volumes of debris flows from 109 subbasins in the Three Lakes watershed in order to provide the Northern Colorado Water Conservancy District with a relative measure of which subbasins might constitute the most serious debris flow hazards. This report describes the results of the study and provides estimated probabilities of debris-flow occurrence and the estimated volumes of debris flow that could be produced in 109 subbasins of the watershed under an assumed moderate- to high-burn severity of all forested areas. The estimates are needed because the Three Lakes watershed includes communities and substantial water-resources and water-supply infrastructure that are important to residents both east and west of the Continental Divide. Using information provided in this report, land and water-supply managers can consider where to concentrate pre-wildfire planning, pre-wildfire preparedness, and pre-wildfire mitigation in advance of wildfires. Also, in the event of a large wildfire, this information will help managers identify the watersheds with the greatest post-wildfire debris-flow hazards.

  10. A Unique Equation to Estimate Flash Points of Selected Pure Liquids Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to predict accurately the flash points for a variety of compounds, whatever their chemical groups (monofunctional compounds and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicone compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning the polyhalogenated compounds for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed and the fact that the equation provides unbiased predictions lead to the conclusion that

  11. Lexicographic Probability, Conditional Probability, and Nonstandard Probability

    DTIC Science & Technology

    2009-11-11

    the following conditions: CP1. μ(U | U) = 1 if U ∈ F′. CP2. μ(V1 ∪ V2 | U) = μ(V1 | U) + μ(V2 | U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. μ(V | U) = μ(V | X) × μ(X | U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that μ(· | U) is a probability measure on (W, F) (and, in ... CP2 hold. This is easily seen to determine μ. Moreover, μ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U

  12. People's conditional probability judgments follow probability theory (plus noise).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-09-01

    A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
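
    A toy simulation in the spirit of the model described above (not the authors' code): each judgment is formed by reading a random sample of remembered instances, with each instance misread with probability d. Individual estimates are biased toward 0.5, but an identity such as P(A) + P(B) - P(A and B) - P(A or B) = 0 holds on average because the noise terms cancel. The event probabilities, sample sizes, and noise level are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(2)

      def noisy_estimate(event, d=0.15, n_read=200):
          """Frequency-based estimate of P(event): read n_read random stored
          instances, each misread (flipped) with probability d."""
          sample = rng.choice(event, size=n_read)
          flips = rng.random(n_read) < d
          return np.mean(np.where(flips, 1 - sample, sample))

      # Simulated store of experiences for two binary events A and B.
      n_items = 10_000
      a = (rng.random(n_items) < 0.6).astype(int)
      b = (rng.random(n_items) < 0.5).astype(int)

      est_a, identity = [], []
      for _ in range(2000):
          pa, pb = noisy_estimate(a), noisy_estimate(b)
          pab, paob = noisy_estimate(a & b), noisy_estimate(a | b)
          est_a.append(pa)
          identity.append(pa + pb - pab - paob)

      print("true P(A) =", a.mean(), " mean estimate =", round(np.mean(est_a), 3))
      print("mean of P(A)+P(B)-P(A&B)-P(AvB) =", round(np.mean(identity), 4))

    Each single estimate is pulled toward 0.5 by roughly d(1 - 2p), while the four-term combination averages to about zero, which is the noise-cancelling behavior the experiments test.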

  13. Elaboration of a clinical and paraclinical score to estimate the probability of herpes simplex virus encephalitis in patients with febrile, acute neurologic impairment.

    PubMed

    Gennai, S; Rallo, A; Keil, D; Seigneurin, A; Germi, R; Epaulard, O

    2016-06-01

    Herpes simplex virus (HSV) encephalitis is associated with a high risk of mortality and sequelae, and early diagnosis and treatment in the emergency department are necessary. However, most patients present with non-specific febrile, acute neurologic impairment; this may lead clinicians to overlook the diagnosis of HSV encephalitis. We aimed to identify which data collected in the first hours in a medical setting were associated with the diagnosis of HSV encephalitis. We conducted a multicenter retrospective case-control study in four French public hospitals from 2007 to 2013. The cases were the adult patients who received a confirmed diagnosis of HSV encephalitis. The controls were all the patients who attended the emergency department of Grenoble hospital with a febrile acute neurologic impairment, without HSV detection by polymerase chain reaction (PCR) in the cerebrospinal fluid (CSF), in 2012 and 2013. A multivariable logistic model was developed to identify factors significantly associated with HSV encephalitis. Finally, an HSV probability score was derived from the logistic model. We identified 36 cases and 103 controls. Factors independently associated with HSV encephalitis were the absence of past neurological history (odds ratio [OR] 6.25 [95 % confidence interval (CI): 2.22-16.7]), the occurrence of seizure (OR 8.09 [95 % CI: 2.73-23.94]), a systolic blood pressure ≥140 mmHg (OR 5.11 [95 % CI: 1.77-14.77]), and a C-reactive protein <10 mg/L (OR 9.27 [95 % CI: 2.98-28.88]). An HSV probability score was calculated by summing the values attributed to each independent factor. HSV encephalitis diagnosis may benefit from the use of this score based upon easily accessible data. However, clinical suspicion and presumptive treatment must remain the rule.
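
    To illustrate how such a score translates into a probability, the sketch below assembles a logistic predictor from the four reported odds ratios; the intercept is invented and the coefficients are simply the logs of the reported ORs, so the output is illustrative only and is not the published score or its calibration.

      import math

      # Log-odds-ratio weights taken from the reported ORs; the intercept is a
      # made-up value for illustration only (the paper's calibration differs).
      WEIGHTS = {
          "no_past_neuro_history": math.log(6.25),
          "seizure":               math.log(8.09),
          "sbp_ge_140":            math.log(5.11),
          "crp_lt_10":             math.log(9.27),
      }
      INTERCEPT = -4.0   # hypothetical

      def hsv_probability(**findings):
          """Return an illustrative predicted probability of HSV encephalitis
          from binary findings (1 = present, 0 = absent)."""
          logit = INTERCEPT + sum(WEIGHTS[k] * findings.get(k, 0) for k in WEIGHTS)
          return 1.0 / (1.0 + math.exp(-logit))

      print(hsv_probability(no_past_neuro_history=1, seizure=1, sbp_ge_140=0, crp_lt_10=1))

    The published score simply sums a value per factor; the logistic form above shows one way such a sum maps onto a probability.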

  14. Confidence Probability versus Detection Probability

    SciTech Connect

    Axelrod, M

    2005-08-18

    In a discovery sampling activity the auditor seeks to vet an inventory by measuring (or inspecting) a random sample of items from the inventory. When the auditor finds every sample item in compliance, he must then make a confidence statement about the whole inventory. For example, the auditor might say: ''We believe that this inventory of 100 items contains no more than 5 defectives with 95% confidence.'' Note this is a retrospective statement in that it asserts something about the inventory after the sample was selected and measured. Contrast this to the prospective statement: ''We will detect the existence of more than 5 defective items in this inventory with 95% probability.'' The former uses confidence probability while the latter uses detection probability. For a given sample size, the two probabilities need not be equal, indeed they could differ significantly. Both these probabilities critically depend on the auditor's prior belief about the number of defectives in the inventory and how he defines non-compliance. In other words, the answer strongly depends on how the question is framed.
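
    The prospective (detection) calculation described above is a hypergeometric computation; a sketch follows, with the inventory size, assumed number of defectives, and sample sizes chosen purely as examples.

      from scipy.stats import hypergeom

      def detection_probability(inventory_size, assumed_defectives, sample_size):
          """Probability that a random sample of `sample_size` items contains at
          least one of the `assumed_defectives` defectives in the inventory."""
          p_none = hypergeom.pmf(0, inventory_size, assumed_defectives, sample_size)
          return 1.0 - p_none

      # Example: 100-item inventory, 5 assumed defectives.
      for n in (20, 40, 45, 60):
          p = detection_probability(100, 5, n)
          print(f"sample size {n:3d}: detection probability = {p:.3f}")

    Rerunning with a different assumed_defectives shows how strongly the answer depends on the prior assumption about the inventory, which is the framing point made above.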

  15. Co-activation Probability Estimation (CoPE): An approach for modeling functional co-activation architecture based on neuroimaging coordinates

    PubMed Central

    Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R.; Liu, Yong; Yang, Yong; Eickhoff, Simon B.; Jiang, Tianzi

    2016-01-01

    Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a ‘core’ co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052

  16. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
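
    As a baseline for the enhanced method described above, the classical maximum entropy step (before any fractal or probabilistic-scaling refinement) can be sketched as follows: find the density on a bounded support whose first two moments match the data by solving for the Lagrange multipliers numerically. The support, target moments, and grid resolution are illustrative choices.

      import numpy as np
      from scipy.optimize import fsolve

      def maxent_pdf(moments, support=(-5.0, 5.0), n_grid=2001):
          """Maximum-entropy density p(x) ~ exp(-l1*x - l2*x**2) on a bounded
          support, constrained to match a target mean and second moment."""
          x = np.linspace(*support, n_grid)
          dx = x[1] - x[0]
          m1, m2 = moments

          def density(lams):
              w = np.exp(-lams[0] * x - lams[1] * x**2)
              return w / (w.sum() * dx)          # normalization constraint built in

          def residuals(lams):
              p = density(lams)
              return [(x * p).sum() * dx - m1, (x**2 * p).sum() * dx - m2]

          lams = fsolve(residuals, x0=[0.0, 0.5])
          return x, density(lams)

      # Example: match mean 0.3 and second moment 1.2 (made-up values).
      x, p = maxent_pdf((0.3, 1.2))
      print("check:", (p.sum() * (x[1] - x[0])), (x * p).sum() * (x[1] - x[0]))

    With only mean and variance constraints the solution is a (truncated) Gaussian; the contribution described in the abstract is the additional probabilistic-scaling information layered on top of this baseline, especially for the tails.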

  17. First estimates of the probability of survival in a small-bodied, high-elevation frog (Boreal Chorus Frog, Pseudacris maculata), or how historical data can be useful

    USGS Publications Warehouse

    Muths, Erin L.; Scherer, R. D.; Amburgey, S. M.; Matthews, T.; Spencer, A. W.; Corn, P.S.

    2016-01-01

    In an era of shrinking budgets yet increasing demands for conservation, the value of existing (i.e., historical) data are elevated. Lengthy time series on common, or previously common, species are particularly valuable and may be available only through the use of historical information. We provide first estimates of the probability of survival and longevity (0.67–0.79 and 5–7 years, respectively) for a subalpine population of a small-bodied, ostensibly common amphibian, the Boreal Chorus Frog (Pseudacris maculata (Agassiz, 1850)), using historical data and contemporary, hypothesis-driven information–theoretic analyses. We also test a priori hypotheses about the effects of color morph (as suggested by early reports) and of drought (as suggested by recent climate predictions) on survival. Using robust mark–recapture models, we find some support for early hypotheses regarding the effect of color on survival, but we find no effect of drought. The congruence between early findings and our analyses highlights the usefulness of historical information in providing raw data for contemporary analyses and context for conservation and management decisions.

  18. Probability Estimates of Solar Particle Event Doses During a Period of Low Sunspot Number for Thinly-Shielded Spacecraft and Short Duration Missions

    NASA Technical Reports Server (NTRS)

    Atwell, William; Tylka, Allan J.; Dietrich, William; Rojdev, Kristina; Matzkind, Courtney

    2016-01-01

    In an earlier paper (Atwell, et al., 2015), we investigated solar particle event (SPE) radiation exposures (absorbed dose) to small, thinly-shielded spacecraft during a period when the sunspot number (SSN) was less than 30. These SPEs contain Ground Level Events (GLE), sub-GLEs, and sub-sub-GLEs (Tylka and Dietrich, 2009, Tylka and Dietrich, 2008, and Atwell, et al., 2008). GLEs are extremely energetic solar particle events having proton energies extending into the several GeV range and producing secondary particles in the atmosphere, mostly neutrons, observed with ground station neutron monitors. Sub-GLE events are less energetic, extending into the several hundred MeV range, but do not produce secondary atmospheric particles. Sub-sub GLEs are even less energetic with an observable increase in protons at energies greater than 30 MeV, but no observable proton flux above 300 MeV. In this paper, we consider those SPEs that occurred during 1973-2010 when the SSN was greater than 30 but less than 50. In addition, we provide probability estimates of absorbed dose based on mission duration with a 95% confidence level (CL). We also discuss the implications of these data and provide some recommendations that may be useful to spacecraft designers of these smaller spacecraft.

  19. Incorporating a Process-Based Land Use Variable into Species- Distribution Modelling and an Estimated Probability of Species Occurrence Into a Land Change Model: A Case of Albania

    NASA Astrophysics Data System (ADS)

    Laze, Kuenda

    2016-08-01

    Modelling of land use may be improved by incorporating the results of species distribution modelling, and species distribution modelling may in turn be improved if a process-based variable such as forest cover change or the accessibility of forest from human settlements is included. This work presents the results of spatially explicit analyses of the changes in forest cover from 2000 to 2007 using the method of Geographically Weighted Regression (GWR) and of the species distribution for the protected species Lynx lynx martinoi and Ursus arctos using Generalized Linear Models (GLMs). The methodological approach is to search separately for a parsimonious model of forest cover change and of species distribution for the entire territory of Albania. The findings of this work show that modelling of land change and of species distribution is indeed improved, as indicated by the corrected Akaike Information Criterion model-selection values. These results provide evidence of the effects of process-based variables on species distribution modelling and on the performance of species distribution modelling, and show an example of the incorporation of an estimated probability of species occurrence in a land change model.

  20. GIS-based estimation of the winter storm damage probability in forests: a case study from Baden-Wuerttemberg (Southwest Germany).

    PubMed

    Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen; Kohnle, Ulrich

    2012-01-01

    Data on storm damage attributed to the two high-impact winter storms 'Wiebke' (28 February 1990) and 'Lothar' (26 December 1999) were used for GIS-based estimation and mapping (in a 50 × 50 m resolution grid) of the winter storm damage probability (P(DAM)) for the forests of the German federal state of Baden-Wuerttemberg (Southwest Germany). The P(DAM)-calculation was based on weights of evidence (WofE) methodology. A combination of information on forest type, geology, soil type, soil moisture regime, and topographic exposure, as well as maximum gust wind speed field was used to compute P(DAM) across the entire study area. Given the condition that maximum gust wind speed during the two storm events exceeded 35 m s(-1), the highest P(DAM) values computed were primarily where coniferous forest grows in severely exposed areas on temporarily moist soils on bunter sandstone formations. Such areas are found mainly in the mountainous ranges of the northern Black Forest, the eastern Forest of Odes, in the Virngrund area, and in the southwestern Alpine Foothills.
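
    The weights-of-evidence combination behind P(DAM) is a standard Bayesian log-odds bookkeeping; the sketch below shows that bookkeeping for generic binary evidence layers, with entirely made-up counts rather than the Baden-Wuerttemberg dataset, and with the usual conditional-independence caveat.

      import math

      def evidence_weights(n_damage_with, n_damage, n_nodamage_with, n_nodamage):
          """W+ and W- for one binary evidence layer (e.g. 'coniferous forest'),
          from counts of damaged / undamaged cells with and without the evidence."""
          p_b_d  = n_damage_with / n_damage
          p_b_nd = n_nodamage_with / n_nodamage
          w_plus  = math.log(p_b_d / p_b_nd)
          w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
          return w_plus, w_minus

      def damage_probability(prior, layers_present, weights):
          """Combine layer weights with the prior damage probability in log-odds."""
          logit = math.log(prior / (1 - prior))
          for layer, present in layers_present.items():
              w_plus, w_minus = weights[layer]
              logit += w_plus if present else w_minus
          return 1.0 / (1.0 + math.exp(-logit))

      # Hypothetical counts for two layers.
      weights = {
          "coniferous":   evidence_weights(800, 1000, 3000, 9000),
          "exposed_site": evidence_weights(600, 1000, 2000, 9000),
      }
      print(damage_probability(0.05, {"coniferous": True, "exposed_site": False}, weights))

    The additive combination assumes the evidence layers are conditionally independent given the damage state, which is the usual caveat attached to weights-of-evidence mapping.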

  1. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.

  2. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
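
    For readers unfamiliar with the LKB formalism, the core computation is short; the sketch below reduces a differential dose-volume histogram to a generalized equivalent uniform dose and applies a probit dose response. The DVH and the parameter values are placeholders, not the spinal-cord parameters fitted in the study.

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(doses_gy, volumes, n=0.05, td50=60.0, m=0.15):
          """Lyman-Kutcher-Burman NTCP from a differential DVH.
          doses_gy: bin doses; volumes: fractional volumes of the structure;
          n: volume parameter; td50: 50% complication dose; m: slope parameter."""
          volumes = np.asarray(volumes, dtype=float)
          volumes = volumes / volumes.sum()
          # Generalized equivalent uniform dose (gEUD).
          geud = (volumes * np.asarray(doses_gy, dtype=float) ** (1.0 / n)).sum() ** n
          t = (geud - td50) / (m * td50)
          return geud, norm.cdf(t)

      # Toy DVH: 10% of the structure at 25 Gy, 30% at 15 Gy, 60% at 5 Gy.
      geud, ntcp = lkb_ntcp([25.0, 15.0, 5.0], [0.1, 0.3, 0.6])
      print(f"gEUD = {geud:.1f} Gy, NTCP = {ntcp:.4f}")

    Small values of the volume parameter n push the gEUD toward the maximum dose (serial behavior), while larger n values average over the volume (parallel behavior), which is the knob the abstract reports adjusting.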

  3. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  4. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  6. Guide star probabilities

    NASA Technical Reports Server (NTRS)

    Soneira, R. M.; Bahcall, J. N.

    1981-01-01

    Probabilities are calculated for acquiring suitable guide stars (GS) with the fine guidance system (FGS) of the space telescope. A number of the considerations and techniques described are also relevant for other space astronomy missions. The constraints of the FGS are reviewed. The available data on bright star densities are summarized and a previous error in the literature is corrected. Separate analytic and Monte Carlo calculations of the probabilities are described. A simulation of space telescope pointing is carried out using the Weistrop north galactic pole catalog of bright stars. Sufficient information is presented so that the probabilities of acquisition can be estimated as a function of position in the sky. The probability of acquiring suitable guide stars is greatly increased if the FGS can allow an appreciable difference between the (bright) primary GS limiting magnitude and the (fainter) secondary GS limiting magnitude.

  7. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
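
    A stripped-down version of the Monte Carlo component of such a calculation is sketched below: sample the relative position from the combined position covariance at the time of closest approach and count the fraction of samples inside the combined hard-body radius. The state, covariances, and radius are invented for illustration and are not operational ESC values.

      import numpy as np

      rng = np.random.default_rng(3)

      def collision_probability_mc(rel_position, cov_a, cov_b, hard_body_radius, n_samples=1_000_000):
          """Monte Carlo collision probability at closest approach.
          rel_position: nominal relative position (km); cov_a, cov_b: 3x3 position
          covariances (km^2) of the two objects; hard_body_radius in km."""
          combined_cov = np.asarray(cov_a) + np.asarray(cov_b)
          samples = rng.multivariate_normal(rel_position, combined_cov, size=n_samples)
          miss_distance = np.linalg.norm(samples, axis=1)
          return np.mean(miss_distance < hard_body_radius)

      rel = np.array([0.2, -0.1, 0.05])        # km
      cov = np.diag([0.04, 0.01, 0.0025])      # km^2 (200 m, 100 m, 50 m sigmas)
      print(collision_probability_mc(rel, cov, cov, hard_body_radius=0.02))

    The Monte Carlo answer can be cross-checked against the numerical integration of the relative-state probability density mentioned in the abstract; both are only as trustworthy as the input covariances, which is the point of the paper.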

  8. Realistic Covariance Prediction For the Earth Science Constellations

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellations (ESC) include collision risk assessment between members of the constellations and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed via Monte Carlo techniques as well as numerically integrating relative probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by NASA Goddard's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the ESC satellites: Aqua, Aura, and Terra

  9. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    2003-01-01

    The T-year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T-year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank transformed data and therefore that knowledge of the at-site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100-year flood will occur on the average every 4.5 years.
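
    The Monte Carlo idea is easy to sketch for the idealized case of annual maxima linked by a Gaussian copula with a common pairwise correlation; only rank (copula) dependence matters, which is why rank-transformed data suffice. The site count, correlations, and return period below are arbitrary, not the Puget Sound estimates.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(4)

      def regional_flood_probability(n_sites, pairwise_corr, return_period, n_years=20_000):
          """P(at least one of n_sites experiences its T-year flood in a given year),
          with inter-site dependence modeled by a Gaussian copula having a common
          pairwise correlation (built from a shared factor)."""
          common = rng.standard_normal((n_years, 1))
          idio = rng.standard_normal((n_years, n_sites))
          z = np.sqrt(pairwise_corr) * common + np.sqrt(1.0 - pairwise_corr) * idio
          threshold = norm.ppf(1.0 - 1.0 / return_period)   # marginal T-year level
          return np.mean((z > threshold).any(axis=1))

      for rho in (0.0, 0.3, 0.6):
          p = regional_flood_probability(n_sites=193, pairwise_corr=rho, return_period=100)
          print(f"rho = {rho:.1f}: RFP = {p:.3f}  (on average once every {1/p:.1f} years)")

    With rho = 0 the RFP for 193 independent sites is 1 - 0.99^193, roughly 0.86; increasing the inter-site correlation lowers the RFP, which is the direction needed to approach the once-every-4.5-years figure reported for the Puget Sound sites.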

  10. Rationalizing Hybrid Earthquake Probabilities

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Reasenberg, P.; Beeler, N.; Cocco, M.; Belardinelli, M.

    2003-12-01

    An approach to including stress transfer and frictional effects in estimates of the probability of failure of a single fault affected by a nearby earthquake has been suggested in Stein et al. (1997). This `hybrid' approach combines conditional probabilities, which depend on the time elapsed since the last earthquake on the affected fault, with Poissonian probabilities that account for friction and depend only on the time since the perturbing earthquake. The latter are based on the seismicity rate change model developed by Dieterich (1994) to explain the temporal behavior of aftershock sequences in terms of rate-state frictional processes. The model assumes an infinite population of nucleation sites that are near failure at the time of the perturbing earthquake. In the hybrid approach, assuming the Dieterich model can lead to significant transient increases in failure probability. We explore some of the implications of applying the Dieterich model to a single fault and its impact on the hybrid probabilities. We present two interpretations that we believe can rationalize the use of the hybrid approach. In the first, a statistical distribution representing uncertainties in elapsed and/or mean recurrence time on the fault serves as a proxy for Dieterich's population of nucleation sites. In the second, we imagine a population of nucleation patches distributed over the fault with a distribution of maturities. In both cases we find that the probability depends on the time since the last earthquake. In particular, the size of the transient probability increase may only be significant for faults already close to failure. Neglecting the maturity of a fault may lead to overestimated rate and probability increases.

  11. Estimating allele dropout probabilities by logistic regression: Assessments using Applied Biosystems 3500xL and 3130xl Genetic Analyzers with various commercially available human identification kits.

    PubMed

    Inokuchi, Shota; Kitayama, Tetsushi; Fujii, Koji; Nakahara, Hiroaki; Nakanishi, Hiroaki; Saito, Kazuyuki; Mizuno, Natsuko; Sekiguchi, Kazumasa

    2016-03-01

    Phenomena called allele dropouts are often observed in crime stain profiles. Allele dropout occurs when one of a pair of heterozygous alleles is underrepresented owing to stochastic effects and falls below the peak detection threshold. Therefore, it is important that such risks are statistically evaluated. In recent years, attempts to interpret allele dropout probabilities by logistic regression using the information on peak heights have been reported. However, these previous studies were limited to a single human identification kit and fragment analyzer. In the present study, we calculated allele dropout probabilities by logistic regression using contemporary capillary electrophoresis instruments, the 3500xL Genetic Analyzer and the 3130xl Genetic Analyzer, with various commercially available human identification kits such as the AmpFℓSTR® Identifiler® Plus PCR Amplification Kit. Furthermore, the logistic curves obtained using the analytical threshold (AT) as the peak detection threshold were compared with those obtained using the values recommended by the manufacturer. The standard logistic curves for calculating allele dropout probabilities from the peak height of sister alleles were characterized. The present study confirmed that ATs were lower than the values recommended by the manufacturer in human identification kits; therefore, it is possible to reduce allele dropout probabilities and obtain more information using AT as the peak detection threshold.
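
    The core of such an analysis is a logistic regression of the dropout indicator on the surviving sister allele's peak height (often log-transformed); the sketch below fits that regression to simulated peaks, since the validation data and kit-specific thresholds are not reproduced here, and the assumed true relationship is invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)

      # Simulated heterozygous loci: surviving-allele peak height (log RFU) and
      # whether the sister allele dropped out (1) or was detected (0).
      n = 2000
      log_height = rng.normal(loc=6.0, scale=1.0, size=n)      # log RFU
      true_logit = 8.0 - 1.6 * log_height                      # assumed relationship
      dropout = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

      model = LogisticRegression().fit(log_height.reshape(-1, 1), dropout)

      # Estimated dropout probability at a few peak heights.
      for rfu in (100, 200, 500, 1000):
          p = model.predict_proba(np.log([[rfu]]))[0, 1]
          print(f"peak height {rfu:5d} RFU: P(dropout) ~ {p:.3f}")

    In a study like the one above, such a curve would be fitted separately per kit, instrument, and detection threshold, which is the comparison the abstract describes.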

  12. DETERMINING TYPE Ia SUPERNOVA HOST GALAXY EXTINCTION PROBABILITIES AND A STATISTICAL APPROACH TO ESTIMATING THE ABSORPTION-TO-REDDENING RATIO R{sub V}

    SciTech Connect

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine

    2016-03-10

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B - V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, RV, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B - V) with RV = 3.1 and investigate the color excess probabilities E(B - V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa-Sap, Sab-Sbp, Sbc-Scp, Scd-Sdm, S0, and irregular galaxy classes as a function of R/R25. We find that the largest expected reddening probabilities are in Sab-Sb and Sbc-Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio RV using color excess probability functions and find values of RV = 2.71 ± 1.58 for 21 SNe Ia observed in Sab-Sbp galaxies, and RV = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc-Scp galaxies.

  13. Determining Type Ia Supernova Host Galaxy Extinction Probabilities and a Statistical Approach to Estimating the Absorption-to-reddening Ratio RV

    NASA Astrophysics Data System (ADS)

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine

    2016-03-01

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B - V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, RV, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B - V) with RV = 3.1 and investigate the color excess probabilities E(B - V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa-Sap, Sab-Sbp, Sbc-Scp, Scd-Sdm, S0, and irregular galaxy classes as a function of R/R25. We find that the largest expected reddening probabilities are in Sab-Sb and Sbc-Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio RV using color excess probability functions and find values of RV = 2.71 ± 1.58 for 21 SNe Ia observed in Sab-Sbp galaxies, and RV = 1.70 ± 0.38, for 34 SNe Ia observed in Sbc-Scp galaxies.

  14. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  15. PROBABILITY SURVEYS , CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  16. Time management: a realistic approach.

    PubMed

    Jackson, Valerie P

    2009-06-01

    Realistic time management and organization plans can improve productivity and the quality of life. However, these skills can be difficult to develop and maintain. The key elements of time management are goals, organization, delegation, and relaxation. The author addresses each of these components and provides suggestions for successful time management.

  17. Anderson localization for chemically realistic systems

    NASA Astrophysics Data System (ADS)

    Terletska, Hanna

    2015-03-01

    Disorder, which is ubiquitous in most materials, can strongly affect their properties. It may change their electronic structures or even cause their localization, known as Anderson localization. Although substantial progress has been achieved in the description of the Anderson localization, a proper mean-field theory of this phenomenon for more realistic systems remains elusive. Commonly used theoretical methods such as the coherent potential approximation and its cluster extensions fail to describe the Anderson transition, as the average density of states (DOS) employed in such theories is not critical at the transition. However, near the transition, due to the spatial confinement of carriers, the local DOS becomes highly skewed with a log-normal distribution, for which the most probable and the typical values differ noticeably from the average value. Dobrosavljevic et al. incorporated such ideas in their typical medium theory (TMT), and showed that the typical (not average) DOS is critical at the transition. While the TMT is able to capture the localized states, as a local single-site theory it still has several drawbacks. For the disordered Anderson model in three dimensions it underestimates the critical disorder strength, and fails to capture the re-entrance behavior of the mobility edge. We have recently developed a cluster extension of the TMT, which addresses these drawbacks by systematically incorporating non-local corrections. This approach converges quickly with cluster size and allows us to incorporate the effect of interactions and realistic electronic structure. As the first steps towards realistic material modeling, we extended our TMDCA formalism to systems with off-diagonal disorder and multiple-band structures. We also applied our TMDCA scheme to systems with both disorder and interactions and found that correlation effects tend to stabilize the metallic behavior even in two dimensions. This work was supported by DOE SciDAC Grant No. DE-FC02
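
    The distinction between the average and the typical (geometric-mean) local density of states, central to the TMT argument above, can be seen in a few lines; the lognormal-like spread of the local DOS near localization is mimicked here with synthetic samples rather than an actual Anderson-model calculation.

      import numpy as np

      rng = np.random.default_rng(6)

      def average_and_typical_dos(local_dos):
          """Arithmetic average vs. typical (geometric mean) of local DOS samples."""
          local_dos = np.asarray(local_dos, dtype=float)
          average = local_dos.mean()
          typical = np.exp(np.mean(np.log(local_dos)))
          return average, typical

      for sigma in (0.5, 2.0, 4.0):    # broader spread ~ closer to localization
          rho = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)
          avg, typ = average_and_typical_dos(rho)
          print(f"spread {sigma}: average = {avg:10.2f}, typical = {typ:.3f}")

    As the distribution broadens, the typical value stays pinned near the most probable value while the arithmetic average is dominated by rare large samples, which is why the average DOS is not critical at the transition while the typical DOS is.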

  18. The Probabilities of Unique Events

    DTIC Science & Technology

    2012-08-30

    probabilities into quantum mechanics, and some psychologists have argued that they have a role to play in accounting for errors in judgment [30]. But, in ... Discussion The mechanisms underlying naive estimates of the probabilities of unique events are largely inaccessible to consciousness, but they ... Can quantum probability provide a new direction for cognitive modeling? Behavioral and Brain Sciences (in press). 31. Paolacci G, Chandler J

  19. True-to-Life? Realistic Fiction.

    ERIC Educational Resources Information Center

    Jordan, Anne Devereaux

    1995-01-01

    Suggests that modern realistic fiction for young readers is intensely moralistic and directive at the spoken and unspoken behest of the adults who write, select, and buy that literature. Discusses moral tales, early realistic fiction, modern realistic fiction, and choosing realistic fiction. (RS)

  20. Realistic models of paracrystalline silicon

    NASA Astrophysics Data System (ADS)

    Nakhmanson, S. M.; Voyles, P. M.; Mousseau, Normand; Barkema, G. T.; Drabold, D. A.

    2001-06-01

    We present a procedure for the preparation of physically realistic models of paracrystalline silicon based on a modification of the bond-switching method of Wooten, Winer, and Weaire. The models contain randomly oriented c-Si grains embedded in a disordered matrix. Our technique creates interfaces between the crystalline and disordered phases of Si with an extremely low concentration of coordination defects. The resulting models possess structural and vibrational properties comparable with those of good continuous random network models of amorphous silicon and display realistic optical properties, correctly reproducing the electronic band gap of amorphous silicon. The largest of our models also shows the best agreement of any atomistic model structure that we tested with fluctuation microscopy experiments, indicating that this model has a degree of medium-range order closest to that of the real material.

  1. Calculations of the Probability of Positron Trapping by a Vacancy in a Metal and the Estimation of the Vacancy Contribution to the Work Function of Electrons and Positrons

    NASA Astrophysics Data System (ADS)

    Babich, A. V.; Pogosov, V. V.; Reva, V. I.

    2016-03-01

    The probabilities of the localization of positrons in monovacancies of Al and Cu have been calculated as functions of the energy and temperature. The vacancy was simulated by a void with a radius equal to the radius of the Wigner-Seitz cell in the model of stable jellium. Using the Fermi-Dirac golden rule for transitions, the formula for the rate of positron trapping by a vacancy has been derived as a function of the positron energy. For the thermalized positrons, the rate of localization near the triple point proved to be, in order of magnitude, close to the rate of annihilation. Within the framework of our previously proposed models, the contribution of vacancies to the work function of electrons and positrons has been demonstrated based on the example of Al. The physical situations where the vacancy effect can manifest have been considered.

  2. Cluster membership probability: polarimetric approach

    NASA Astrophysics Data System (ADS)

    Medhi, Biman J.; Tamura, Motohide

    2013-04-01

    Interstellar polarimetric data of the six open clusters Hogg 15, NGC 6611, NGC 5606, NGC 6231, NGC 5749 and NGC 6250 have been used to estimate the membership probability for the stars within them. For proper-motion member stars, the membership probability estimated using the polarimetric data is in good agreement with the proper-motion cluster membership probability. However, for proper-motion non-member stars, the membership probability estimated by the polarimetric method is in total disagreement with the proper-motion cluster membership probability. The inconsistencies in the determined memberships may be because of the fundamental differences between the two methods of determination: one is based on stellar proper motion in space and the other is based on selective extinction of the stellar output by the asymmetric aligned dust grains present in the interstellar medium. The results and analysis suggest that the scatter of the Stokes vectors q (per cent) and u (per cent) for the proper-motion member stars depends on the interstellar and intracluster differential reddening in the open cluster. It is found that this method could be used to estimate the cluster membership probability if we have additional polarimetric and photometric information for a star to identify it as a probable member/non-member of a particular cluster, such as the maximum wavelength value (λmax), the unit weight error of the fit (σ1), the dispersion in the polarimetric position angles (ε̄), reddening (E(B - V)) or the differential intracluster reddening (ΔE(B - V)). This method could also be used to estimate the membership probability of known member stars having no membership probability as well as to resolve disagreements about membership among different proper-motion surveys.

  3. Realistic inflation models and primordial gravity waves

    NASA Astrophysics Data System (ADS)

    Rehman, Mansoor Ur

    We investigate both supersymmetric and non-supersymmetric realistic models of inflation. In non-supersymmetric models, inflation is successfully realized by employing both Coleman-Weinberg and Higgs potentials in GUTs such as SU(5) and SO(10). The quantum smearing of tree level predictions is discussed in the Higgs inflation. These quantum corrections can arise from the inflaton couplings to other particles such as GUT scalars. As a result of including these corrections, a reduction in the tensor-to-scalar ratio r, a canonical measure of gravity waves produced during inflation, is observed. In a simple φ^4 chaotic model, we reconsider a non-minimal (ξ > 0) gravitational coupling of the inflaton φ arising from the interaction ξRφ^2, where R is the Ricci scalar. In estimating bounds on various inflationary parameters we also include quantum corrections. We emphasize that while working with high precision observations such as the current Planck satellite experiment we cannot ignore these radiative and gravitational corrections in analyzing the predictions of various inflationary models. In supersymmetric hybrid inflation with minimal Kahler potential, the soft SUSY breaking terms are shown to play an important role in realizing inflation consistent with the latest WMAP data. The SUSY hybrid models which we consider here predict exceedingly small values of r. However, to obtain observable gravity waves the non-minimal Kahler potential turns out to be a necessary ingredient. A realistic flipped SU(5) model, which benefits from the absence of topological defects, is considered in the standard SUSY hybrid inflation framework. We also present a discussion of shifted hybrid inflation in a realistic model of SUSY SU(5) GUT.

  4. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013

    USGS Publications Warehouse

    Eash, David A.

    2015-01-01

    An examination was conducted to understand why the 1987 single-variable RREs seem to provide better accuracy and less bias than either of the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1-4 from the 1987 single-variable RREs and for flood regions 1-3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs; the better accuracy and lower bias were found particularly for drainage areas less than 2 mi2, and also for some drainage areas between 2 and 20 mi2. The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and AEPDs for areas of Iowa.

  5. Simulation of realistic retinoscopic measurement

    NASA Astrophysics Data System (ADS)

    Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.

    2007-03-01

    Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data from ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results reproduce clinical observations, including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to other, more extensively abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.

  6. Electromagnetic Scattering from Realistic Targets

    NASA Technical Reports Server (NTRS)

    Lee, Shung-Wu; Jin, Jian-Ming

    1997-01-01

    The general goal of the project is to develop computational tools for calculating radar signature of realistic targets. A hybrid technique that combines the shooting-and-bouncing-ray (SBR) method and the finite-element method (FEM) for the radiation characterization of microstrip patch antennas in a complex geometry was developed. In addition, a hybridization procedure to combine moment method (MoM) solution and the SBR method to treat the scattering of waveguide slot arrays on an aircraft was developed. A list of journal articles and conference papers is included.

  7. Estimating the probability of polyreactive antibodies 4E10 and 2F5 disabling a gp41 trimer after T cell-HIV adhesion.

    PubMed

    Hu, Bin; Liao, Hua-Xin; Alam, S Munir; Goldstein, Byron

    2014-01-01

    A few broadly neutralizing antibodies, isolated from HIV-1-infected individuals, recognize epitopes in the membrane proximal external region (MPER) of gp41 that are transiently exposed during viral entry. The best characterized, 4E10 and 2F5, are polyreactive, binding to the viral membrane and their epitopes in the MPER. We present a model to calculate, for any antibody concentration, the probability that during the pre-hairpin intermediate, the transient period when the epitopes are first exposed, a bound antibody will disable a trivalent gp41 before fusion is complete. When 4E10 or 2F5 bind to the MPER, a conformational change is induced that results in a stably bound complex. The model predicts that for these antibodies to be effective at neutralization, the time to disable an epitope must be shorter than the time the antibody remains bound in this conformation, about five minutes or less for 4E10 and 2F5. We investigate the role of avidity in neutralization and show that 2F5 IgG, but not 4E10, is much more effective at neutralization than its Fab fragment. We attribute this to 2F5 interacting more stably than 4E10 with the viral membrane. We use the model to elucidate the parameters that determine the ability of these antibodies to disable epitopes and propose an extension of the model to analyze neutralization data. The extended model predicts the dependencies of IC50 for neutralization on the rate constants that characterize antibody binding, the rate of fusion of gp41, and the number of gp41 bridging the virus and target cell at the start of the pre-hairpin intermediate. Analysis of neutralization experiments indicates that only a small number of gp41 bridges must be disabled to prevent fusion. However, the model cannot determine the exact number from neutralization experiments alone.
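
    A minimal sketch of the competing-rates intuition behind the statement above, not the authors' full model: a stably bound antibody disables its gp41 epitope at some rate, dissociates at a rate set by the roughly five-minute bound lifetime, and fusion completes at its own rate. All rate values and the function name below are hypothetical placeholders.

```python
"""Minimal competing-rates sketch (not the authors' full model): a bound
antibody disables its gp41 epitope at rate k_disable, dissociates at rate
k_unbind, and fusion completes at rate k_fuse.  Treating the three events as
independent exponential processes, the chance the antibody wins the race is
k_disable / (k_disable + k_unbind + k_fuse).  All rates below are hypothetical."""


def p_disable_first(k_disable: float, k_unbind: float, k_fuse: float) -> float:
    """Probability that epitope disabling occurs before unbinding or fusion."""
    return k_disable / (k_disable + k_unbind + k_fuse)


if __name__ == "__main__":
    mean_bound_minutes = 5.0            # abstract: antibody stays bound ~5 min
    k_unbind = 1.0 / mean_bound_minutes
    k_fuse = 1.0 / 15.0                 # hypothetical: fusion completes in ~15 min
    for t_disable in (0.5, 2.0, 10.0):  # hypothetical mean disabling times (min)
        p = p_disable_first(1.0 / t_disable, k_unbind, k_fuse)
        print(f"disable time {t_disable:5.1f} min -> P(disabled first) = {p:.2f}")
```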

  8. Presupernova neutrinos: realistic emissivities from stellar evolution

    NASA Astrophysics Data System (ADS)

    Patton, Kelly; Lunardini, Cecilia; Farmer, Rob; Timmes, Frank

    2017-01-01

    We present a calculation of neutrino emissivities and energy spectra from a presupernova, a massive star going through the advanced stages of nuclear burning before becoming a supernova. Neutrinos produced from beta decay and electron capture, as well as pair annihilation, plasmon decay, and the photoneutrino process, are included. We use the state-of-the-art stellar evolution code MESA to obtain realistic conditions for temperature, density, electron fraction, and nuclear isotopic composition. We have found that beta processes contribute significantly to the neutrino flux at potentially detectable energies of a few MeV. Estimates for the number of events at several current and future detectors are presented for the last few hours before collapse.

  9. Universality of fixation probabilities in randomly structured populations.

    PubMed

    Adlam, Ben; Nowak, Martin A

    2014-10-27

    The stage of evolution is the population of reproducing individuals. The structure of the population is known to affect the dynamics and outcome of evolutionary processes, but analytical results for generic random structures have been lacking. The most general result so far, the isothermal theorem, assumes the propensity for change in each position is exactly the same, but realistic biological structures are always subject to variation and noise. We consider a finite population under constant selection whose structure is given by a variety of weighted, directed, random graphs; vertices represent individuals and edges interactions between individuals. By establishing a robustness result for the isothermal theorem and using large deviation estimates to understand the typical structure of random graphs, we prove that for a generalization of the Erdős-Rényi model, the fixation probability of an invading mutant is approximately the same as that of a mutant of equal fitness in a well-mixed population with high probability. Simulations of perturbed lattices, small-world networks, and scale-free networks behave similarly. We conjecture that the fixation probability in a well-mixed population, (1 - r⁻¹)/(1 - r⁻ⁿ), is universal: for many random graph models, the fixation probability approaches the above function uniformly as the graphs become large.
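
    The closed form quoted above can be checked directly against a simulation of the Moran process in a well-mixed population; a small self-contained sketch follows, in which the population size, fitness and trial count are illustrative choices, not values from the paper.

```python
"""Sketch: fixation probability of a single invading mutant with relative
fitness r in a well-mixed population of size n.  The closed form quoted in
the abstract, (1 - r**-1) / (1 - r**-n), is compared against a direct
simulation of the Moran process."""
import random


def fixation_closed_form(r: float, n: int) -> float:
    return (1 - r**-1) / (1 - r**-n)


def fixation_simulated(r: float, n: int, trials: int = 20000) -> float:
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < n:
            # Choose the reproducer proportionally to fitness, the dier uniformly.
            p_mutant_reproduces = mutants * r / (mutants * r + (n - mutants))
            birth_is_mutant = random.random() < p_mutant_reproduces
            death_is_mutant = random.random() < mutants / n
            mutants += int(birth_is_mutant) - int(death_is_mutant)
        fixed += mutants == n
    return fixed / trials


if __name__ == "__main__":
    r, n = 1.1, 20
    print("closed form:", round(fixation_closed_form(r, n), 4))
    print("simulation :", round(fixation_simulated(r, n), 4))
```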

  10. Using hierarchical Bayesian multi-species mixture models to estimate tandem hoop-net based habitat associations and detection probabilities of fishes in reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths, whereas other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.

  11. Definition of the Neutrosophic Probability

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin

    2014-03-01

    Neutrosophic probability (or likelihood) [1995] is a particular case of the neutrosophic measure. It is an estimation that an event (different from indeterminacy) occurs, together with an estimation that some indeterminacy may occur, and an estimation that the event does not occur. Classical probability deals with fair dice, coins, roulettes, spinners, decks of cards and random walks, while neutrosophic probability deals with unfair, imperfect versions of such objects and processes. For example, if we toss a regular die on an irregular surface which has cracks, then it is possible for the die to get stuck on one of its edges or vertices in a crack (an indeterminate outcome). The sample space in this case is {1, 2, 3, 4, 5, 6, indeterminacy}. So the probability of getting, for example, a 1 is less than 1/6, since there are seven outcomes. Neutrosophic probability is a generalization of classical probability because, when the chance of indeterminacy of a stochastic process is zero, the two probabilities coincide. The neutrosophic probability that an event A occurs is NP(A) = (ch(A), ch(indetA), ch(Ā)) = (T, I, F), where T, I, F are subsets of [0, 1]; T is the chance that A occurs, denoted ch(A); I is the indeterminate chance related to A, ch(indetA); and F is the chance that A does not occur, ch(Ā). So NP is a generalization of imprecise probability as well. If T, I, and F are crisp numbers, then ⁻0 ≤ T + I + F ≤ 3⁺. We use the same notation (T, I, F) as in neutrosophic logic and sets.

  12. Holographic Probabilities in Eternal Inflation

    NASA Astrophysics Data System (ADS)

    Bousso, Raphael

    2006-11-01

    In the global description of eternal inflation, probabilities for vacua are notoriously ambiguous. The local point of view is preferred by holography and naturally picks out a simple probability measure. It is insensitive to large expansion factors or lifetimes and so resolves a recently noted paradox. Any cosmological measure must be complemented with the probability for observers to emerge in a given vacuum. In lieu of anthropic criteria, I propose to estimate this by the entropy that can be produced in a local patch. This allows for prior-free predictions.

  13. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
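
    As an illustrative classic example of such a problem (chosen here for concreteness, and not necessarily one of the article's three), the probability that a random permutation has no fixed point tends to 1/e; a short simulation:

```python
"""Illustrative classic example (not necessarily one of the article's three
problems): the probability that a random permutation of n items has no fixed
point (a derangement) tends to 1/e as n grows."""
import math
import random


def p_derangement(n: int, trials: int = 100000) -> float:
    hits = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        hits += all(perm[i] != i for i in range(n))
    return hits / trials


if __name__ == "__main__":
    print("simulated P(no fixed point), n=10:", round(p_derangement(10), 4))
    print("1/e                              :", round(1 / math.e, 4))
```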

  14. Near-Unity Reaction Probability in Olefin Hydrogenation Promoted by Heterogeneous Metal Catalysts.

    PubMed

    Ebrahimi, Maryam; Simonovis, Juan Pablo; Zaera, Francisco

    2014-06-19

    The kinetics of the hydrogenation of ethylene on platinum surfaces was studied by using high-flux effusive molecular beams and reflection-absorption infrared spectroscopy (RAIRS). It was determined that steady-state ethylene conversion with probabilities close to unity could be achieved by using beams with ethylene fluxes equivalent to pressures in the mTorr range and high (≥100) H₂:C₂H₄ ratios. The RAIRS data suggest that the high reaction probability is possible because such conditions lead to the removal of most of the ethylidyne layer known to form during catalysis. The observations from this study are contrasted with those under vacuum, where catalytic behavior is not sustainable, and with catalysis under more realistic atmospheric pressures, where reaction probabilities are estimated to be much lower (≤1 × 10⁻⁵).

  15. Most probable numbers of organisms: revised tables for the multiple tube method.

    PubMed

    Tillett, H E

    1987-10-01

    Estimation of numbers of organisms is often made using dilution series, for example when examining water samples for coliform organisms. In this paper the most probable numbers (MPNs) are calculated for a 15-tube series consisting of five replicates at three consecutive tenfold dilutions. Exact conditional probabilities are computed to replace previous approximations. When growth is observed in several of the tubes it is not realistic to select a single MPN. Instead a most probable range (MPR) should be reported. But using an MPR creates problems when comparison has to be made with a legislated, single-valued Standard. It is suggested that the wording of the Standards should be expressed differently when the multiple tube method is used.
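
    A minimal sketch of how an MPN is computed for such a series, assuming the standard Poisson model in which a tube receiving volume v from a sample of concentration c is positive with probability 1 - exp(-c v). The tube volumes and the observed positive-tube pattern below are illustrative, and the exact conditional probabilities developed in the paper are not reproduced here.

```python
"""Sketch of the most-probable-number (MPN) calculation for a 15-tube series
(five replicates at three consecutive tenfold dilutions).  Under the standard
Poisson assumption, a tube inoculated with volume v (mL of original sample) at
concentration c organisms/mL is positive with probability 1 - exp(-c*v).  The
MPN is the concentration maximising the likelihood of the observed counts."""
import math

VOLUMES_ML = (10.0, 1.0, 0.1)   # original-sample volume per tube at each dilution
REPLICATES = 5


def log_likelihood(conc, positives):
    """Binomial log-likelihood of the positive-tube counts at concentration conc."""
    ll = 0.0
    for v, k in zip(VOLUMES_ML, positives):
        p = 1.0 - math.exp(-conc * v)
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        ll += k * math.log(p) + (REPLICATES - k) * math.log(1.0 - p)
    return ll


def mpn(positives):
    # Simple grid search on a log scale; a root-finder would also work.
    grid = [10 ** (e / 200.0) for e in range(-600, 601)]   # 0.001 to 1000 per mL
    return max(grid, key=lambda c: log_likelihood(c, positives))


if __name__ == "__main__":
    print("MPN for pattern (5, 3, 1):", round(mpn((5, 3, 1)), 2), "organisms/mL")
```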

  16. Detonation probabilities of high explosives

    SciTech Connect

    Eisenhawer, S.W.; Bott, T.F.; Bement, T.R.

    1995-07-01

    The probability of a high explosive violent reaction (HEVR) following various events is an extremely important aspect of estimating accident-sequence frequency for nuclear weapons dismantlement. In this paper, we describe the development of response curves for insults to PBX 9404, a conventional high-performance explosive used in US weapons. The insults during dismantlement include drops of high explosive (HE), strikes of tools and components on HE, and abrasion of the explosive. In the case of drops, we combine available test data on HEVRs and the results of flooring certification tests to estimate the HEVR probability. For other insults, it was necessary to use expert opinion. We describe the expert solicitation process and the methods used to consolidate the responses. The HEVR probabilities obtained from both approaches are compared.

  17. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When the 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetric property of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the colors of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays constant between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to the visibility. This scheme ensures a gentle handling of illumination and yet yields realistic texture. Because the blending area covers only non-informative regions, the main facial features retain a unique appearance in the two face halves. Evaluation results reveal realistic rendering in novel poses robust to challenging illumination conditions and small registration errors.
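
    A schematic sketch of the two-pass idea described above, offered as a generic illustration rather than the authors' implementation: the first pass fills occluded vertices by repeatedly averaging already-colored neighbors, and the second pass blends in the mirrored texture according to a per-vertex visibility weight. The data layout and function names are assumptions.

```python
"""Schematic two-pass texture completion (generic illustration, not the
authors' implementation).  Pass 1 fills invisible vertices by repeatedly
averaging the colors of already-colored neighbors, which keeps illumination
consistent but over-smooths.  Pass 2 blends in the mirrored texture weighted
by a per-vertex visibility/confidence value."""
import numpy as np


def fill_by_neighbor_averaging(colors, visible, neighbors, iterations=50):
    """colors: (V, 3) array; visible: (V,) bool; neighbors: list of index lists."""
    filled = colors.copy()
    known = visible.copy()
    for _ in range(iterations):
        if known.all():
            break
        newly_known = known.copy()
        for v in np.where(~known)[0]:
            nb = [n for n in neighbors[v] if known[n]]
            if nb:                                  # average already-colored neighbors
                filled[v] = filled[nb].mean(axis=0)
                newly_known[v] = True
        known = newly_known
    return filled


def blend_with_mirror(filled, mirrored, visibility):
    """Per-vertex blend: weight 1 keeps the real/filled texture, 0 the mirrored one."""
    w = np.clip(visibility, 0.0, 1.0)[:, None]
    return w * filled + (1.0 - w) * mirrored


if __name__ == "__main__":
    colors = np.array([[0.8, 0.6, 0.5], [0.7, 0.5, 0.4], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    visible = np.array([True, True, False, False])
    neighbors = [[1, 2], [0, 2, 3], [0, 1, 3], [1, 2]]
    filled = fill_by_neighbor_averaging(colors, visible, neighbors)
    mirrored = colors[[1, 0, 3, 2]]                 # stand-in for the mirrored texture
    print(blend_with_mirror(filled, mirrored, np.array([1.0, 1.0, 0.3, 0.0])))
```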

  18. Realistic Detectability of Close Interstellar Comets

    NASA Astrophysics Data System (ADS)

    Cook, Nathaniel V.; Ragozzine, Darin; Granvik, Mikael; Stephens, Denise C.

    2016-07-01

    During the planet formation process, billions of comets are created and ejected into interstellar space. The detection and characterization of such interstellar comets (ICs) (also known as extra-solar planetesimals or extra-solar comets) would give us in situ information about the efficiency and properties of planet formation throughout the galaxy. However, no ICs have ever been detected, despite the fact that their hyperbolic orbits would make them readily identifiable as unrelated to the solar system. Moro-Martín et al. have made a detailed and reasonable estimate of the properties of the IC population. We extend their estimates of detectability with a numerical model that allows us to consider “close” ICs, e.g., those that come within the orbit of Jupiter. We include several constraints on a “detectable” object that allow for realistic estimates of the frequency of detections expected from the Large Synoptic Survey Telescope (LSST) and other surveys. The influence of several of the assumed model parameters on the frequency of detections is explored in detail. Based on the expectation from Moro-Martín et al., we expect that LSST will detect 0.001-10 ICs during its nominal 10 year lifetime, with most of the uncertainty from the unknown number density of small (nuclei of ˜0.1-1 km) ICs. Both asteroid and comet cases are considered, where the latter includes various empirical prescriptions of brightening. Using simulated LSST-like astrometric data, we study the problem of orbit determination for these bodies, finding that LSST could identify their orbits as hyperbolic and determine an ephemeris sufficiently accurate for follow-up in about 4-7 days. We give the hyperbolic orbital parameters of the most detectable ICs. Taking the results into consideration, we give recommendations to future searches for ICs.

  19. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  20. The performance of inverse probability of treatment weighting and full matching on the propensity score in the presence of model misspecification when estimating the effect of treatment on survival outcomes.

    PubMed

    Austin, Peter C; Stuart, Elizabeth A

    2015-04-30

    There is increasing interest in estimating the causal effects of treatments using observational data. Propensity-score matching methods are frequently used to adjust for differences in observed characteristics between treated and control individuals in observational studies. Survival or time-to-event outcomes occur frequently in the medical literature, but the use of propensity score methods in survival analysis has not been thoroughly investigated. This paper compares two approaches for estimating the Average Treatment Effect (ATE) on survival outcomes: Inverse Probability of Treatment Weighting (IPTW) and full matching. The performance of these methods was compared in an extensive set of simulations that varied the extent of confounding and the amount of misspecification of the propensity score model. We found that both IPTW and full matching resulted in estimation of marginal hazard ratios with negligible bias when the ATE was the target estimand and the treatment-selection process was weak to moderate. However, when the treatment-selection process was strong, both methods resulted in biased estimation of the true marginal hazard ratio, even when the propensity score model was correctly specified. When the propensity score model was correctly specified, bias tended to be lower for full matching than for IPTW. The reasons for these biases and for the differences between the two methods appeared to be due to some extreme weights generated for each method. Both methods tended to produce more extreme weights as the magnitude of the effects of covariates on treatment selection increased. Furthermore, more extreme weights were observed for IPTW than for full matching. However, the poorer performance of both methods in the presence of a strong treatment-selection process was mitigated by the use of IPTW with restriction and full matching with a caliper restriction when the propensity score model was correctly specified.
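
    A minimal sketch of the IPTW weight construction for the ATE that the paper evaluates, together with the extreme-weight diagnostic its findings suggest. The covariates, coefficients and simulated treatment-selection process below are hypothetical, and the weighted survival model itself is only indicated in a comment.

```python
"""Sketch of IPTW weights for the ATE (not the authors' full simulation): fit a
propensity-score model, form weights 1/e(x) for treated and 1/(1 - e(x)) for
controls, and inspect the weight distribution, since extreme weights drove the
biases reported above.  Column names and coefficients are hypothetical."""
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})

# Hypothetical treatment-selection process; larger coefficients mean stronger selection.
logit = 0.8 * df["x1"] - 0.5 * df["x2"]
df["treated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

ps_model = LogisticRegression().fit(df[["x1", "x2"]], df["treated"])
e = ps_model.predict_proba(df[["x1", "x2"]])[:, 1]

# ATE weights: inverse probability of the treatment actually received.
df["iptw"] = np.where(df["treated"], 1.0 / e, 1.0 / (1.0 - e))

print(df["iptw"].describe(percentiles=[0.95, 0.99]))   # look for extreme weights
# These weights would then be passed to a weighted survival model
# (e.g., a weighted Cox regression) to estimate the marginal hazard ratio.
```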

  1. Evaluating species richness: biased ecological inference results from spatial heterogeneity in species detection probabilities

    USGS Publications Warehouse

    McNew, Lance B.; Handel, Colleen M.

    2015-01-01

    Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our results

  2. Evaluating species richness: Biased ecological inference results from spatial heterogeneity in detection probabilities.

    PubMed

    McNew, Lance B; Handel, Colleen M

    2015-09-01

    Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multispecies occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real data set of bird observations in northwestern Alaska, USA, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our

  3. Probability and Relative Frequency

    NASA Astrophysics Data System (ADS)

    Drieschner, Michael

    2016-01-01

    The concept of probability seems to have been inexplicable since its invention in the seventeenth century. In its use in science, probability is closely related with relative frequency. So the task seems to be interpreting that relation. In this paper, we start with predicted relative frequency and show that its structure is the same as that of probability. I propose to call that the `prediction interpretation' of probability. The consequences of that definition are discussed. The "ladder"-structure of the probability calculus is analyzed. The expectation of the relative frequency is shown to be equal to the predicted relative frequency. Probability is shown to be the most general empirically testable prediction.

  4. Postpositivist Realist Theory: Identity and Representation Revisited

    ERIC Educational Resources Information Center

    Gilpin, Lorraine S.

    2006-01-01

    In postpositivist realist theory, people like Paula Moya (2000) and Satya Mohanty (2000) make a space that at once reflects and informs my location as a Third-World woman of color and a Black-immigrant educator in the United States. In postpositivist realist theory, understanding emerges from one's past and present experiences and interactions as…

  5. What Are Probability Surveys?

    EPA Pesticide Factsheets

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  6. Time-dependent earthquake probabilities

    USGS Publications Warehouse

    Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.

    2005-01-01

    We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have developed a framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults, as in background and aftershock seismicity, where the notion of failure rate corresponds to successive failures of different members of a population of faults, and (2) changes in estimates of the conditional probability of failure of a single fault. The latter application requires specification of some probability distribution (probability density function, or PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
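
    The single-fault conditional probability referred to above has a generic textbook form for any assumed recurrence-time distribution; the expression below is this standard form and is not specific to either of the approaches compared in the paper.

```latex
% Probability of failure in the next \Delta t years, given that t_e years have
% elapsed since the last event, for a recurrence-time CDF F(t):
P\left(t_e < T \le t_e + \Delta t \,\middle|\, T > t_e\right)
   = \frac{F(t_e + \Delta t) - F(t_e)}{1 - F(t_e)} .
```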

  7. Landslide Probability Assessment by the Derived Distributions Technique

    NASA Astrophysics Data System (ADS)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions, and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slips", which are characterized by a shallow failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters the soil, as a consequence of the infiltration initiated by rain and governed by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for estimating the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s, the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the factor of safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
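
    A Monte Carlo sketch of the derived-distributions idea follows; it is not the authors' model, which uses the Philip infiltration equation and a physically based slope-stability analysis. Here storm mean intensity and duration are drawn from independent exponentials, a crude placeholder failure model maps each storm to a factor of safety, and the exceedance probability follows from the sample. All parameter values are illustrative assumptions.

```python
"""Monte Carlo sketch of the derived-distributions idea (not the authors'
model): storms are drawn from independent exponential distributions for mean
intensity and duration, a placeholder deterministic failure model maps each
storm to a factor of safety (FOS), and P(FOS < 1) follows from the sample."""
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

mean_intensity_mm_h = 10.0      # exponential mean of storm intensity (assumed)
mean_duration_h = 6.0           # exponential mean of storm duration (assumed)
intensity = rng.exponential(mean_intensity_mm_h, N)
duration = rng.exponential(mean_duration_h, N)

# Placeholder failure model: FOS drops as the total rainfall depth (mm) rises.
fos_dry = 1.5                   # factor of safety with no rainfall (assumed)
depth_for_failure_mm = 120.0    # rainfall depth that would drive FOS to zero (assumed)
rain_depth = intensity * duration
fos = fos_dry * (1.0 - np.clip(rain_depth / depth_for_failure_mm, 0.0, 1.0))

p_failure_per_storm = np.mean(fos < 1.0)
storms_per_year = 25            # expected storms per year, treated as fixed here
annual_p = 1.0 - (1.0 - p_failure_per_storm) ** storms_per_year
print(f"P(FOS < 1) per storm = {p_failure_per_storm:.3f}; annual = {annual_p:.3f}")
```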

  8. Realistic costs of carbon capture

    SciTech Connect

    Al Juaied, Mohammed (Belfer Center for Science and International Affairs); Whitmore, Adam

    2009-07-01

    There is a growing interest in carbon capture and storage (CCS) as a means of reducing carbon dioxide (CO2) emissions. However there are substantial uncertainties about the costs of CCS. Costs for pre-combustion capture with compression (i.e. excluding costs of transport and storage and any revenue from EOR associated with storage) are examined in this discussion paper for First-of-a-Kind (FOAK) plant and for more mature technologies, or Nth-of-a-Kind plant (NOAK). For FOAK plant using solid fuels the levelised cost of electricity on a 2008 basis is approximately 10 cents/kWh higher with capture than for conventional plants (with a range of 8-12 cents/kWh). Costs of abatement are found typically to be approximately US$150/tCO2 avoided (with a range of US$120-180/tCO2 avoided). For NOAK plants the additional cost of electricity with capture is approximately 2-5 cents/kWh, with costs of the range of US$35-70/tCO2 avoided. Costs of abatement with carbon capture for other fuels and technologies are also estimated for NOAK plants. The costs of abatement are calculated with reference to conventional SCPC plant for both emissions and costs of electricity. Estimates for both FOAK and NOAK are mainly based on cost data from 2008, which was at the end of a period of sustained escalation in the costs of power generation plant and other large capital projects. There are now indications of costs falling from these levels. This may reduce the costs of abatement and costs presented here may be 'peak of the market' estimates. If general cost levels return, for example, to those prevailing in 2005 to 2006 (by which time significant cost escalation had already occurred from previous levels), then costs of capture and compression for FOAK plants are expected to be US$110/tCO2 avoided (with a range of US$90-135/tCO2 avoided). For NOAK plants costs are expected to be US$25-50/tCO2. Based on these considerations a likely representative range of costs of abatement from CCS excluding
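
    The relation between the two cost metrics quoted above can be made explicit. Assuming, for illustration only, an avoided-emissions factor of roughly 0.7 tCO2 per MWh relative to the reference SCPC plant (an assumed value, not a figure from the paper), the FOAK increment of about 10 cents/kWh maps onto the reported cost of abatement:

```latex
% Illustrative consistency check (0.7 t/MWh avoided is an assumed value):
\text{cost of abatement} \;=\;
   \frac{\Delta\text{LCOE}}{\text{CO}_2\ \text{avoided per MWh}}
   \;\approx\; \frac{100\ \$/\text{MWh}}{0.7\ \text{t}/\text{MWh}}
   \;\approx\; 140\ \$/\text{tCO}_2 ,
```

    which is broadly consistent with the approximately US$150/tCO2 avoided quoted above for FOAK plants.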

  9. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  10. Dynamical Simulation of Probabilities

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular, probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices(such as random number generators). Self-orgainizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed. Special attention was focused upon coupled stochastic processes, defined in terms of conditional probabilities, for which joint probability does not exist. Simulations of quantum probabilities are also discussed.

  11. Towards a Realistic Axion Star

    SciTech Connect

    Barranco, J.; Bernal, A.

    2008-07-02

    In this work we estimate the radius and the mass of a self-gravitating system made of axions. The quantum axion field satisfies the Klein-Gordon equation in a curved space-time and the metric components of this space-time are solutions to the Einstein equations with a source term given by the vacuum expectation value of the energy-momentum operator constructed from the axion field. As a first step towards an axion star we consider the axion potential expansion up to the φ⁶ term. We find that axion stars would have masses of the order of asteroid masses (∼10⁻¹⁰ M☉) and radii of the order of a few centimeters.

  12. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in such a way that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented.

  13. Behaviorly realistic simulations of stock market traders with a soul

    NASA Astrophysics Data System (ADS)

    Solomon, Sorin

    1999-09-01

    The price fluctuations of stocks in the financial markets are the result of the individual operations of many individual investors. However, for many decades financial theory did not directly use this “microscopic representation” of the markets. The main difficulties preventing this approach were solved recently with the advent of modern computer technology: massive detailed data on individual market operations became available, and “microscopic simulations” of the stock markets in terms of their individual participating agents allow a very realistic treatment of the problem. By taking advantage of modern computer processing and simulation techniques, we are now able to confront real market data with the results of simulating “microscopic” realistic models of the markets. These models have the potential to include and study the effects on the market of any desired feature of investor behavior: departures from rationality, herding effects, and heterogeneous investor-specific trading strategies. We propose to use the comparison of computer simulations of microscopic models with actual market data in order to validate and enhance the knowledge of the financial behavior of individuals. Moreover, we hope to explain, understand (and perhaps predict and control) macroscopic market dynamical features (e.g., cycles of booms and crashes, the investors' wealth distribution, the market returns probability distribution, etc.) based on realistic models using this knowledge.

  14. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
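
    A generic sketch of how pseudo-predator signatures are typically constructed in QFASA simulations follows; it illustrates the bootstrap step discussed above, not Bromaghin's algorithm for choosing the bootstrap sample sizes. Prey names, sample sizes and diet proportions are made up.

```python
"""Generic sketch of building a pseudo-predator fatty acid signature for QFASA
simulations: bootstrap-sample each prey type's signatures, average them, and
mix the prey means with the assumed diet proportions.  All values illustrative."""
import numpy as np

rng = np.random.default_rng(1)


def pseudo_predator(prey_sigs, diet, n_boot):
    """prey_sigs: dict prey -> (n_i, n_fatty_acids) array (rows sum to 1).
    diet: dict prey -> proportion (sums to 1).  n_boot: dict prey -> bootstrap size."""
    parts = []
    for prey, sigs in prey_sigs.items():
        idx = rng.integers(0, len(sigs), size=n_boot[prey])   # bootstrap resample
        parts.append(diet[prey] * sigs[idx].mean(axis=0))
    sig = np.sum(parts, axis=0)
    return sig / sig.sum()                                    # renormalise


if __name__ == "__main__":
    n_fa = 6   # number of fatty acids (real studies use dozens)
    prey_sigs = {p: rng.dirichlet(np.ones(n_fa), size=40) for p in ("seal", "cod")}
    diet = {"seal": 0.7, "cod": 0.3}
    n_boot = {"seal": 30, "cod": 30}
    print(pseudo_predator(prey_sigs, diet, n_boot).round(3))
```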

  15. Toward a Direct Realist Account of Observation.

    ERIC Educational Resources Information Center

    Sievers, K. H.

    1999-01-01

    Criticizes the account of observation given by Alan Chalmers in "What Is This Thing Called Science?" and provides an alternative based on direct realist approaches to perception. Contains 15 references. (Author/WRM)

  16. Temporal Coding in Realistic Neural Networks

    NASA Astrophysics Data System (ADS)

    Gerasyuta, S. M.; Ivanov, D. V.

    1995-10-01

    A modification of a realistic neural network model has been proposed. The model differs from the Hopfield model because of two characteristic contributions to the synaptic efficacies: a short-time contribution, which is determined by the chemical reactions in the synapses, and a long-time contribution corresponding to structural changes of the synaptic contacts. An approximate solution of the realistic neural network model equations is obtained. This solution allows us to calculate the postsynaptic potential as a function of the input. Using the approximate solution, the behaviour of the postsynaptic potential of the realistic neural network as a function of time is described for different temporal sequences of stimuli. Different outputs are obtained for different temporal sequences of the given stimuli. These properties of temporal coding can be exploited as a recognition element capable of being selectively tuned to different inputs.

  17. Probability and radical behaviorism

    PubMed Central

    Espinosa, James M.

    1992-01-01

    The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforcement and extinction, respectively. PMID:22478114

  18. Statistics and Probability

    NASA Astrophysics Data System (ADS)

    Laktineh, Imad

    2010-04-01

    This course constitutes a brief introduction to probability applications in high energy physics. First, the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then presented and commented on. The central limit theorem and its consequences are analysed. Finally, some numerical methods used to produce different kinds of probability distributions are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  19. Probability of satellite collision

    NASA Technical Reports Server (NTRS)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  20. PROBABILITY AND STATISTICS.

    DTIC Science & Technology

    STATISTICAL ANALYSIS, REPORTS), (*PROBABILITY, REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION

  1. Factors influencing reporting and harvest probabilities in North American geese

    USGS Publications Warehouse

    Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.

    2009-01-01

    We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI = 0.69-0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.

  2. Probability and Statistics.

    ERIC Educational Resources Information Center

    Barnes, Bernis, Ed.; And Others

    This teacher's guide to probability and statistics contains three major sections. The first section on elementary combinatorial principles includes activities, student problems, and suggested teaching procedures for the multiplication principle, permutations, and combinations. Section two develops an intuitive approach to probability through…

  3. Teachers' Understandings of Probability

    ERIC Educational Resources Information Center

    Liu, Yan; Thompson, Patrick

    2007-01-01

    Probability is an important idea with a remarkably wide range of applications. However, psychological and instructional studies conducted in the last two decades have consistently documented poor understanding of probability among different populations across different settings. The purpose of this study is to develop a theoretical framework for…

  4. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  5. Role of dissipation in realistic Majorana nanowires

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Xiao; Sau, Jay D.; Das Sarma, S.

    2017-02-01

    We carry out a realistic simulation of Majorana nanowires in order to understand the latest high-quality experimental data [H. Zhang et al., arXiv:1603.04069 (2016)] and, in the process, develop a comprehensive picture for what physical mechanisms may be operational in realistic nanowires leading to discrepancies between minimal theory and experimental observations (e.g., weakness and broadening of the zero-bias peak and breaking of particle-hole symmetry). Our focus is on understanding specific intriguing features in the data, and our goal is to establish matters of principle controlling the physics of the best possible nanowires available in current experiments. We identify dissipation, finite temperature, multi-sub-band effects, and the finite tunnel barrier as the four most important physical mechanisms controlling the zero-bias conductance peak. Our theoretical results including these realistic effects agree well with the best available experimental data in ballistic nanowires.

  6. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  7. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  8. Asteroidal collision probabilities

    NASA Astrophysics Data System (ADS)

    Bottke, W. F.; Greenberg, R.

    1993-05-01

    Several past calculations of collision probabilities between pairs of bodies on independent orbits have yielded inconsistent results. We review the methodologies and identify their various problems. Greenberg's (1982) collision probability formalism (now with a corrected symmetry assumption) is equivalent to Wetherill's (1967) approach, except that it includes a way to avoid singularities near apsides. That method shows that the procedure by Namiki and Binzel (1991) was accurate for those cases where singularities did not arise.

  9. Realistic Hot Water Draw Specification for Rating Solar Water Heaters: Preprint

    SciTech Connect

    Burch, J.

    2012-06-01

    In the United States, annual performance ratings for solar water heaters are simulated, using TMY weather and specified water draw. A more-realistic ratings draw is proposed that eliminates most bias by improving mains inlet temperature and by specifying realistic hot water use. This paper outlines the current and the proposed draws and estimates typical ratings changes from draw specification changes for typical systems in four cities.

  10. Probabilities in implicit learning.

    PubMed

    Tseng, Philip; Hsu, Tzu-Yu; Tzeng, Ovid J L; Hung, Daisy L; Juan, Chi-Hung

    2011-01-01

    The visual system possesses a remarkable ability in learning regularities from the environment. In the case of contextual cuing, predictive visual contexts such as spatial configurations are implicitly learned, retained, and used to facilitate visual search-all without one's subjective awareness and conscious effort. Here we investigated whether implicit learning and its facilitatory effects are sensitive to the statistical property of such implicit knowledge. In other words, are highly probable events learned better than less probable ones even when such learning is implicit? We systematically varied the frequencies of context repetition to alter the degrees of learning. Our results showed that search efficiency increased consistently as contextual probabilities increased. Thus, the visual contexts, along with their probability of occurrences, were both picked up by the visual system. Furthermore, even when the total number of exposures was held constant between each probability, the highest probability still enjoyed a greater cuing effect, suggesting that the temporal aspect of implicit learning is also an important factor to consider in addition to the effect of mere frequency. Together, these findings suggest that implicit learning, although bypassing observers' conscious encoding and retrieval effort, behaves much like explicit learning in the sense that its facilitatory effect also varies as a function of its associative strengths.

  11. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
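
    The report develops a closed-form solution; the Monte Carlo sketch below estimates the same quantity and could serve as a cross-check. The nominal miss vector, combined covariance and combined radius are illustrative values, not numbers from the report.

```python
"""Monte Carlo sketch of the collision probability at closest approach: sample
the relative position from a Gaussian with the given nominal miss vector and
combined position covariance, and count samples inside the combined hard-body
radius.  All values below are illustrative."""
import numpy as np

rng = np.random.default_rng(7)

nominal_miss_m = np.array([200.0, 50.0, 0.0])       # nominal separation at closest approach
combined_cov = np.diag([150.0, 80.0, 40.0]) ** 2    # combined 1-sigma uncertainties (m)
combined_radius_m = 25.0                            # sum of the two object radii (m)

samples = rng.multivariate_normal(nominal_miss_m, combined_cov, size=1_000_000)
miss_distance = np.linalg.norm(samples, axis=1)
p_collision = np.mean(miss_distance < combined_radius_m)
print(f"estimated collision probability = {p_collision:.2e}")
```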

  12. Keeping It Real: How Realistic Does Realistic Fiction for Children Need to Be?

    ERIC Educational Resources Information Center

    O'Connor, Barbara

    2010-01-01

    O'Connor, an author of realistic fiction for children, shares her attempts to strike a balance between carefree, uncensored, authentic, realistic writing and age-appropriate writing. Of course, complicating that balancing act is the fact that what seems age-appropriate to her might not seem so to everyone. O'Connor suggests that while it may be…

  13. Comparison between realistic and spherical approaches in EEG forward modelling.

    PubMed

    Meneghini, Fabio; Vatta, Federica; Esposito, Fabrizio; Mininel, Stefano; Di Salle, Francesco

    2010-06-01

    In electroencephalography (EEG) a valid conductor model of the head (forward model) is necessary for predicting measurable scalp voltages from intra-cranial current distributions. All inverse models, capable of inferring the spatial distribution of the neural sources generating measurable electrical and magnetic signals outside the brain are normally formulated in terms of a pre-estimated forward model, which implies considering one (or more) current dipole(s) inside the head and computing the electrical potentials generated at the electrode sites on the scalp surface. Therefore, the accuracy of the forward model strongly affects the reliability of the source reconstruction process independently of the specific inverse model. So far, it is as yet unclear which brain regions are more sensitive to the choice of different model geometry, from both quantitative and qualitative points of view. In this paper, we compare the finite difference method-based realistic model with the four-layers sensor-fitted spherical model using simulated cortical sources in the MNI152 standard space. We focused on the investigation of the spatial variation of the lead fields produced by simulated cortical sources which were placed on the reconstructed mesh of the neocortex along the surface electrodes of a 62-channel configuration. This comparison is carried out by evaluating a point spread function all over the brain cortex, with the aim of finding the lead fields mismatch between realistic and spherical geometry. Realistic geometry turns out to be a relevant factor of improvement which is particularly important when considering sources placed in the temporal or in the occipital cortex. In these situations, using a realistic head model will allow a better spatial discrimination of neural sources when compared to the spherical model.

  14. UT Biomedical Informatics Lab (BMIL) probability wheel

    NASA Astrophysics Data System (ADS)

    Huang, Sheng-Cheng; Lee, Sara; Wang, Allen; Cantor, Scott B.; Sun, Clement; Fan, Kaili; Reece, Gregory P.; Kim, Min Soon; Markey, Mia K.

    A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant", about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences.

  15. UT Biomedical Informatics Lab (BMIL) Probability Wheel.

    PubMed

    Huang, Sheng-Cheng; Lee, Sara; Wang, Allen; Cantor, Scott B; Sun, Clement; Fan, Kaili; Reece, Gregory P; Kim, Min Soon; Markey, Mia K

    2016-01-01

    A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant," about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences.

  16. UT Biomedical Informatics Lab (BMIL) Probability Wheel

    PubMed Central

    Lee, Sara; Wang, Allen; Cantor, Scott B.; Sun, Clement; Fan, Kaili; Reece, Gregory P.; Kim, Min Soon; Markey, Mia K.

    2016-01-01

    A probability wheel app is intended to facilitate communication between two people, an “investigator” and a “participant,” about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences. PMID:28105462

  17. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  18. Faculty Development for Educators: A Realist Evaluation

    ERIC Educational Resources Information Center

    Sorinola, Olanrewaju O.; Thistlethwaite, Jill; Davies, David; Peile, Ed

    2015-01-01

    The effectiveness of faculty development (FD) activities for educators in UK medical schools remains underexplored. This study used a realist approach to evaluate FD and to test the hypothesis that motivation, engagement and perception are key mechanisms of effective FD activities. The authors observed and interviewed 33 course participants at one…

  19. A Look at Young Children's Realistic Fiction.

    ERIC Educational Resources Information Center

    Madsen, Jane M.; Wickersham, Elaine B.

    1980-01-01

    Analyzes recent realistic fiction for children produced in the United States in terms of ethnicity, stereotyped behavior, and themes. Concludes that the sample did not reflect equivalent treatment of males and females nor the culturally pluralistic makeup of U.S. society. Provides an annotated bibliography of the books analyzed. (Author/FL)

  20. Making a Literature Methods Course "Realistic."

    ERIC Educational Resources Information Center

    Lewis, William J.

    Recognizing that it can be a challenge to make an undergraduate literature methods course realistic, a methods instructor at a Michigan university has developed three major and several minor activities that have proven effective in preparing pre-student teachers for the "real world" of teaching and, at the same time, have been challenging and…

  1. Satellite Maps Deliver More Realistic Gaming

    NASA Technical Reports Server (NTRS)

    2013-01-01

    When Redwood City, California-based Electronic Arts (EA) decided to make SSX, its latest snowboarding video game, it faced challenges in creating realistic-looking mountains. The solution was NASA's ASTER Global Digital Elevation Map, made available by the Jet Propulsion Laboratory, which EA used to create 28 real-life mountains from 9 different ranges for its award-winning game.

  2. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  3. Improving Intuition Skills with Realistic Mathematics Education

    ERIC Educational Resources Information Center

    Hirza, Bonita; Kusumah, Yaya S.; Darhim; Zulkardi

    2014-01-01

    The intention of the present study was to see the improvement of students' intuitive skills. This improvement was seen by comparing the Realistic Mathematics Education (RME)-based instruction with the conventional mathematics instruction. The subject of this study was 164 fifth graders of elementary school in Palembang. The design of this study…

  4. Tool for Generating Realistic Residential Hot Water Event Schedules: Preprint

    SciTech Connect

    Hendron, B.; Burch, J.; Barker, G.

    2010-08-01

    The installed energy savings for advanced residential hot water systems can depend greatly on detailed occupant use patterns. Quantifying these patterns is essential for analyzing measures such as tankless water heaters, solar hot water systems with demand-side heat exchangers, distribution system improvements, and recirculation loops. This paper describes the development of an advanced spreadsheet tool that can generate a series of year-long hot water event schedules consistent with realistic probability distributions of start time, duration and flow rate variability, clustering, fixture assignment, vacation periods, and seasonality. This paper also presents the application of the hot water event schedules in the context of an integral-collector-storage solar water heating system in a moderate climate.
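
    As a hedged sketch of how such a generator might draw one day's events (the fixtures, rates, and distributions below are placeholder assumptions, not the calibrated values used in the spreadsheet tool):

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder fixture definitions: (mean events/day, mean duration [min], mean flow [gal/min])
        FIXTURES = {"shower": (1.8, 7.0, 2.0), "sink": (10.0, 1.0, 1.5), "dishwasher": (0.7, 45.0, 1.3)}

        def daily_events(weekend=False):
            """Draw one day's hot water events: Poisson counts per fixture, start times
            clustered around morning/evening peaks, lognormal durations, normal flow rates."""
            events = []
            for name, (rate, dur, flow) in FIXTURES.items():
                n = rng.poisson(rate * (1.2 if weekend else 1.0))  # crude weekend adjustment
                for _ in range(n):
                    peak = rng.choice([7.0, 19.0])                 # morning or evening cluster (hour of day)
                    start = rng.normal(peak, 1.5) % 24.0
                    duration = rng.lognormal(np.log(dur), 0.4)
                    gpm = max(0.1, rng.normal(flow, 0.2))
                    events.append((name, start, duration, gpm))
            return sorted(events, key=lambda e: e[1])

        print(daily_events()[:3])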

  5. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…

  6. Varga: On Probability.

    ERIC Educational Resources Information Center

    Varga, Tamas

    This booklet resulted from a 1980 visit by the author, a Hungarian mathematics educator, to the Teachers' Center Project at Southern Illinois University at Edwardsville. Included are activities and problems that make probability concepts accessible to young children. The topics considered are: two probability games; choosing two beads; matching…

  7. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  8. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
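
    The underlying idea: for f on [a, b] and U ~ Uniform(a, b), E[f(U)] = (1/(b-a)) times the integral of f, so (b-a) times the sample mean of f at uniform draws estimates the integral. A minimal Python sketch of that idea (the article itself works in Visual Basic):

        import numpy as np

        def mc_integral(f, a, b, n=100_000, seed=0):
            """Estimate the definite integral of f on [a, b] as (b - a) * mean(f(U)),
            with U ~ Uniform(a, b); also return a rough standard error."""
            rng = np.random.default_rng(seed)
            u = rng.uniform(a, b, n)
            vals = f(u)
            estimate = (b - a) * vals.mean()
            std_err = (b - a) * vals.std(ddof=1) / np.sqrt(n)
            return estimate, std_err

        print(mc_integral(np.sin, 0.0, np.pi))   # exact value is 2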

  9. Considerations for realistic ECCS evaluation methodology for LWRs

    SciTech Connect

    Rohatgi, U.S.; Saha, P.; Chexal, V.K.

    1985-01-01

    This paper identifies the various phenomena which govern the course of large and small break LOCAs in LWRs, and affect the key parameters such as Peak Clad Temperature (PCT) and timing of the end of blowdown, beginning of reflood, PCT, and complete quench. A review of the best-estimate models and correlations for these phenomena in the current literature has been presented. Finally, a set of models has been recommended which may be incorporated in a present best-estimate code such as TRAC or RELAP5 in order to develop a realistic ECCS evaluation methodology for future LWRs and has also been compared with the requirements of current ECCS evaluation methodology as outlined in Appendix K of 10CFR50. 58 refs.

  10. Understanding Y haplotype matching probability.

    PubMed

    Brenner, Charles H

    2014-01-01

    The Y haplotype population-genetic terrain is better explored from a fresh perspective rather than by analogy with the more familiar autosomal ideas. For haplotype matching probabilities, versus for autosomal matching probabilities, explicit attention to modelling - such as how evolution got us where we are - is much more important while consideration of population frequency is much less so. This paper explores, extends, and explains some of the concepts of "Fundamental problem of forensic mathematics - the evidential strength of a rare haplotype match". That earlier paper presented and validated a "kappa method" formula for the evidential strength when a suspect matches a previously unseen haplotype (such as a Y-haplotype) at the crime scene. Mathematical implications of the kappa method are intuitive and reasonable. Suspicions to the contrary that have been raised rest on elementary errors. Critical to deriving the kappa method or any sensible evidential calculation is understanding that thinking about haplotype population frequency is a red herring; the pivotal question is one of matching probability. But confusion between the two is unfortunately institutionalized in much of the forensic world. Examples make clear why (matching) probability is not (population) frequency and why uncertainty intervals on matching probabilities are merely confused thinking. Forensic matching calculations should be based on a model, on stipulated premises. The model inevitably only approximates reality, and any error in the results comes only from error in the model, the inexactness of the approximation. Sampling variation does not measure that inexactness and hence is not helpful in explaining evidence and is in fact an impediment. Alternative haplotype matching probability approaches that various authors have considered are reviewed. Some are based on no model and cannot be taken seriously. For the others, some evaluation of the models is discussed. Recent evidence supports the adequacy of

  11. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  12. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  13. Spectral tunability of realistic plasmonic nanoantennas

    SciTech Connect

    Portela, Alejandro; Matsui, Hiroaki; Tabata, Hitoshi; Yano, Takaaki; Hayashi, Tomohiro; Hara, Masahiko; Santschi, Christian; Martin, Olivier J. F.

    2014-09-01

    Single nanoantenna spectroscopy was carried out on realistic dipole nanoantennas with various arm lengths and gap sizes fabricated by electron-beam lithography. A significant difference in resonance wavelength between realistic and ideal nanoantennas was found by comparing their spectral response. Consequently, the spectral tunability (96 nm) of the structures was significantly lower than that of simulated ideal nanoantennas. These observations, attributed to the nanofabrication process, are related to imperfections in the geometry, added metal adhesion layer, and shape modifications, which are analyzed in this work. Our results provide important information for the design of dipole nanoantennas clarifying the role of the structural modifications on the resonance spectra, as supported by calculations.

  14. Regularization techniques in realistic Laplacian computation.

    PubMed

    Bortel, Radoslav; Sovka, Pavel

    2007-11-01

    This paper explores regularization options for the ill-posed spline coefficient equations in the realistic Laplacian computation. We investigate the use of the Tikhonov regularization, truncated singular value decomposition, and the so-called lambda-correction with the regularization parameter chosen by the L-curve, generalized cross-validation, quasi-optimality, and the discrepancy principle criteria. The provided range of regularization techniques is much wider than in the previous works. The improvement of the realistic Laplacian is investigated by simulations on the three-shell spherical head model. The conclusion is that the best performance is provided by the combination of the Tikhonov regularization and the generalized cross-validation criterion, a combination that has never been suggested for this task before.
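
    A generic sketch of that best-performing combination, Tikhonov regularization with the regularization parameter selected by generalized cross-validation (GCV), written for an arbitrary linear least-squares problem rather than the spline coefficient equations themselves; the demo matrix and data are invented.

        import numpy as np

        def tikhonov_gcv(A, b, lambdas=np.logspace(-8, 2, 60)):
            """Solve min ||Ax - b||^2 + lam * ||x||^2 over a grid of lam values and
            return the (lam, x) pair minimizing the GCV function
            ||A x_lam - b||^2 / (m - sum(filter factors))^2."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            m = A.shape[0]
            best = None
            for lam in lambdas:
                f = s**2 / (s**2 + lam)                          # Tikhonov filter factors
                resid2 = np.sum(((1 - f) * beta)**2) + (b @ b - beta @ beta)
                gcv = resid2 / (m - f.sum())**2
                if best is None or gcv < best[0]:
                    best = (gcv, lam, Vt.T @ (f * beta / s))
            return best[1], best[2]

        # Small ill-conditioned demo problem (hypothetical data)
        A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)
        x_true = np.ones(12)
        b = A @ x_true + 1e-3 * np.random.default_rng(1).normal(size=40)
        lam, x = tikhonov_gcv(A, b)
        print(lam, np.linalg.norm(x - x_true))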

  15. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    1999-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St Aldates, Oxford OX1 3LB, England.

  16. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    2000-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St. Aldates, Oxford OX1 1DW, United Kingdom.

  17. Probability for Weather and Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of

  18. Superpositions of probability distributions

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
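
    A hedged, purely illustrative example of such a superposition (not taken from the paper): smearing the variance v of a zero-mean Gaussian with an inverse-gamma density yields a heavy-tailed Student-t marginal, which a quick Monte Carlo comparison of tail probabilities confirms.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        nu = 4.0                                     # degrees of freedom of the target Student-t

        # Smear the variance: v ~ InvGamma(nu/2, scale=nu/2), then x | v ~ N(0, v)
        v = stats.invgamma(a=nu / 2, scale=nu / 2).rvs(size=200_000, random_state=rng)
        x = rng.normal(0.0, np.sqrt(v))

        # Tail probabilities of the superposition vs. the Student-t distribution
        for q in (2.0, 3.0, 4.0):
            print(q, (x > q).mean(), stats.t(df=nu).sf(q))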

  19. Efficient Probability Sequences

    DTIC Science & Technology

    2014-08-18

    …Ungar (2014), to produce a distinct forecasting system. The system consists of the method for eliciting individual subjective forecasts together with…

  20. Searching with Probabilities

    DTIC Science & Technology

    1983-07-26

    …distribution [DeGroot 75] has just begun. The beta distribution has several features that might make it a more reasonable choice. As with the normal-based…

  1. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
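
    A hedged sketch of the likelihood form described here, using the simplest finite (two-point) mixture on the detection probability p: each site contributes a zero-inflated binomial mixture term, and occupancy psi together with the mixture on p is estimated by maximizing the integrated likelihood. The simulated survey data and parameterization are invented for illustration.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit
        from scipy.stats import binom

        def neg_log_lik(theta, y, J):
            """y: detections per site out of J visits.  theta (on the logit scale) maps to
            occupancy psi and a two-point detection mixture: classes p1, p2 with weight w."""
            psi, p1, p2, w = expit(theta)                      # keep all parameters in (0, 1)
            mix = w * binom.pmf(y, J, p1) + (1 - w) * binom.pmf(y, J, p2)
            lik = psi * mix + (1 - psi) * (y == 0)             # zero-inflated binomial mixture
            return -np.sum(np.log(lik))

        # Simulated survey: 200 sites, 5 visits each, heterogeneous detection probability
        rng = np.random.default_rng(2)
        occupied = rng.random(200) < 0.6
        p_site = np.where(rng.random(200) < 0.5, 0.2, 0.7)
        y = rng.binomial(5, p_site * occupied)

        fit = minimize(neg_log_lik, x0=np.zeros(4), args=(y, 5), method="Nelder-Mead")
        print(expit(fit.x))                                    # estimated psi, p1, p2, w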

  2. Probability of relativistic electron trapping by parallel and oblique whistler-mode waves in Earth's radiation belts

    SciTech Connect

    Artemyev, A. V. Vasiliev, A. A.; Neishtadt, A. I.; Mourenas, D.; Krasnoselskikh, V.

    2015-11-15

    We investigate electron trapping by high-amplitude whistler-mode waves propagating at small as well as large angles relative to geomagnetic field lines. The inhomogeneity of the background magnetic field can result in an effective acceleration of trapped particles. Here, we derive useful analytical expressions for the probability of electron trapping by both parallel and oblique waves, paving the way for a full analytical description of trapping effects on the particle distribution. Numerical integrations of particle trajectories allow us to demonstrate the accuracy of the derived analytical estimates. For realistic wave amplitudes, the levels of probabilities of trapping are generally comparable for oblique and parallel waves, but they turn out to be most efficient over complementary energy ranges. Trapping acceleration of <100 keV electrons is mainly provided by oblique waves, while parallel waves are responsible for the trapping acceleration of >100 keV electrons.

  3. Retrieve Tether Survival Probability

    DTIC Science & Technology

    2007-11-02

    The probability of the tether surviving cuts by meteorites and orbital debris is calculated to be 99.934% for the planned experiment duration of six months or less. This figure is limited mainly by the unlikely event of a strike by a large piece of orbital debris greater than 1 meter in size cutting all the lines of the tether at once. The probability of the tether surviving multiple cuts by meteoroid and orbital debris impactors smaller than 5 cm in diameter is 99.9993% at six months.

  4. Oil spill contamination probability in the southeastern Levantine basin.

    PubMed

    Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam

    2015-02-15

    Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed.

  5. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination leading to realistic fractal-like rubble/fragments formation. The aforementioned parameters are used as variables of probabilistic models of cracks/shards formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potentials in military and civil M&S applications such as training and mission planning.

  6. Modality, probability, and mental models.

    PubMed

    Hinterecker, Thomas; Knauff, Markus; Johnson-Laird, P N

    2016-10-01

    We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). Where the contents were sensible assertions, for example, Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But, the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants’ estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning.

  7. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of 2×(27+27⁻)+351´+351´⁻ representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group into the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  8. Adiabatic Hyperspherical Analysis of Realistic Nuclear Potentials

    NASA Astrophysics Data System (ADS)

    Daily, K. M.; Kievsky, Alejandro; Greene, Chris H.

    2015-12-01

    Using the hyperspherical adiabatic method with the realistic nuclear potentials Argonne V14, Argonne V18, and Argonne V18 with the Urbana IX three-body potential, we calculate the adiabatic potentials and the triton bound state energies. We find that a discrete variable representation with the slow variable discretization method along the hyperradial degree of freedom results in energies consistent with the literature. However, using a Laguerre basis results in missing energy, even when extrapolated to an infinite number of basis functions and channels. We do not include the isospin T = 3/2 contribution in our analysis.

  9. MHD Modeling of the Transition Region Using Realistic Transport Coefficients

    NASA Astrophysics Data System (ADS)

    Goodman, Michael L.

    1997-05-01

    Most of the transition region (TR) consists of a collision dominated plasma. The dissipation and transport of energy in such a plasma is accurately described by the well known classical transport coefficients which include the electrical and thermal conductivity, viscosity, and thermo-electric tensors. These tensors are anisotropic and are functions of local values of temperature, density, and magnetic field. They may be used in an MHD model to obtain a self consistent, physically realistic description of the TR. The physics of kinetic processes is included in the MHD model through the transport coefficients. As a first step in studying heating and cooling processes in the TR in a realistic, quantitative manner, a 1.5 dimensional, steady state MHD model with a specified temperature profile is considered. The momentum equation includes the inertial, pressure gradient, Lorentz, and gravitational forces. The Ohm's law includes the exact expressions for the electrical conductivity and thermo-electric tensors. The electrical conductivity relates the generalized electric field to the conduction current density while the thermo-electric tensor relates the temperature gradient to the thermo-electric current density. The total current density is the sum of the two. It is found that the thermo-electric current density can be as large as the conduction current density, indicating that thermo-electric effects are probably important in modeling the dynamics of energy dissipation, such as wave dissipation, in the TR. Although the temperature gradient is in the vertical direction, the thermo-electric current density is in the horizontal direction, indicating the importance of the effects of anisotropic transport. The transport coefficients are valid for all magnetic field strengths, and so may be used to study the physics of weakly as well as strongly magnetized regions of the TR. Numerical examples are presented.

  10. Realistic Radio Communications in Pilot Simulator Training

    NASA Technical Reports Server (NTRS)

    Burki-Cohen, Judith; Kendra, Andrew J.; Kanki, Barbara G.; Lee, Alfred T.

    2000-01-01

    Simulators used for total training and evaluation of airline pilots must satisfy stringent criteria in order to assure their adequacy for training and checking maneuvers. Air traffic control and company radio communications simulation, however, may still be left to role-play by the already taxed instructor/evaluators in spite of their central importance in every aspect of the flight environment. The underlying premise of this research is that providing a realistic radio communications environment would increase safety by enhancing pilot training and evaluation. This report summarizes the first-year efforts of assessing the requirement and feasibility of simulating radio communications automatically. A review of the training and crew resource/task management literature showed both practical and theoretical support for the need for realistic radio communications simulation. A survey of 29 instructor/evaluators from 14 airlines revealed that radio communications are mainly role-played by the instructor/evaluators. This increases instructor/evaluators' own workload while unrealistically lowering pilot communications load compared to actual operations, with a concomitant loss in training/evaluation effectiveness. A technology review searching for an automated means of providing radio communications to and from aircraft with minimal human effort showed that while promising, the technology is still immature. Further research and the need for establishing a proof-of-concept are also discussed.

  11. Realistic Ground Motion Scenarios: Methodological Approach

    SciTech Connect

    Nunziata, C.; Peresan, A.; Romanelli, F.; Vaccari, F.; Zuccolo, E.; Panza, G. F.

    2008-07-08

    The definition of realistic seismic input can be obtained from the computation of a wide set of time histories, corresponding to possible seismotectonic scenarios. The propagation of the waves in the bedrock from the source to the local laterally varying structure is computed with the modal summation technique, while in the laterally heterogeneous structure the finite difference method is used. The definition of shear wave velocities within the soil cover is obtained from the non-linear inversion of the dispersion curve of group velocities of Rayleigh waves, artificially or naturally generated. Information about the possible focal mechanisms of the sources can be obtained from historical seismicity, based on earthquake catalogues and inversion of isoseismal maps. In addition, morphostructural zonation and pattern recognition of seismogenic nodes is useful to identify areas prone to strong earthquakes, based on the combined analysis of topographic, tectonic, geological maps and satellite photos. We show that the quantitative knowledge of regional geological structures and the computation of realistic ground motion can be a powerful tool for a preventive definition of the seismic hazard in Italy. Then, the formulation of reliable building codes, based on the evaluation of the main potential earthquakes, will have a great impact on the effective reduction of the seismic vulnerability of Italian urban areas, validating or improving the national building code.

  12. Epidemiology and causation: a realist view.

    PubMed Central

    Renton, A

    1994-01-01

    In this paper the controversy over how to decide whether associations between factors and diseases are causal is placed within a description of the public health and scientific relevance of epidemiology. It is argued that the rise in popularity of the Popperian view of science, together with a perception of the aims of epidemiology as being to identify appropriate public health interventions, have focussed this debate on unresolved questions of inferential logic, leaving largely unanalysed the notions of causation and of disease at the ontological level. A realist ontology of causation of disease and pathogenesis is constructed within the framework of "scientific materialism", and is shown to provide a coherent basis from which to decide causes and to deal with problems of confounding and interaction in epidemiological research. It is argued that a realist analysis identifies a richer role for epidemiology as an integral part of an ontologically unified medical science. It is this unified medical science as a whole rather than epidemiological observation or experiment which decides causes and, in turn, provides a key element to the foundations of rational public health decision making. PMID:8138775

  13. EEG/MEG error bounds for a dynamic dipole source with a realistic head model.

    PubMed

    Muravchik, C; Bria, O; Nehorai, A

    2000-06-01

    This work presents the background and derivation of Cramér-Rao bounds on the errors of estimating the parameters (moment and location) of a dynamic current dipole source using data from electro- and magneto-encephalography. A realistic head model, based on knowledge of surfaces separating tissues of different conductivities, is used.

  14. Characteristic length of the knotting probability revisited

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2015-09-01

    We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.
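
    A hedged illustration of how the characteristic length N_K can be extracted once knotting probabilities have been estimated: for large N, ln P_K(N) is dominated by a -N/N_K term, so a straight-line fit of ln P_K against N gives -1/N_K as the slope. The numbers below are synthetic, not the paper's simulation results.

        import numpy as np

        # Synthetic knotting probabilities P_K(N) ~ C * exp(-N / N_K) with N_K = 2.5e5
        N = np.array([1e5, 2e5, 4e5, 6e5, 8e5, 1e6])
        NK_true = 2.5e5
        noise = 1.0 + 0.02 * np.random.default_rng(3).normal(size=N.size)
        P = 0.3 * np.exp(-N / NK_true) * noise

        slope, intercept = np.polyfit(N, np.log(P), 1)   # ln P_K(N) is approx. intercept - N / N_K
        print("estimated N_K:", -1.0 / slope)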

  15. Inclusion probability with dropout: an operational formula.

    PubMed

    Milot, E; Courteau, J; Crispino, F; Mailly, F

    2015-05-01

    In forensic genetics, a mixture of two or more contributors to a DNA profile is often interpreted using the inclusion probabilities theory. In this paper, we present a general formula for estimating the probability of inclusion (PI, also known as the RMNE probability) from a subset of visible alleles when dropouts are possible. This one-locus formula can easily be extended to multiple loci using the cumulative probability of inclusion. We show that an exact formulation requires fixing the number of contributors, hence to slightly modify the classic interpretation of the PI. We discuss the implications of our results for the enduring debate over the use of PI vs likelihood ratio approaches within the context of low template amplifications.
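
    For context, a hedged sketch of the classic no-dropout probability of inclusion (RMNE) that the paper generalizes: at each locus, the probability that a random person carries only alleles from the visible set is the squared sum of those allele frequencies, and independent loci multiply. The paper's dropout-aware, fixed-contributor-number formula is not reproduced here, and the allele frequencies below are invented.

        def classic_pi(visible_allele_freqs_by_locus):
            """Cumulative probability of inclusion (RMNE) without dropout:
            product over loci of (sum of visible-allele frequencies)^2."""
            pi = 1.0
            for freqs in visible_allele_freqs_by_locus:
                pi *= sum(freqs) ** 2
            return pi

        # Hypothetical three-locus mixture profile
        loci = [[0.12, 0.08, 0.21], [0.30, 0.05], [0.10, 0.15, 0.22, 0.07]]
        print(classic_pi(loci))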

  16. The low synaptic release probability in vivo.

    PubMed

    Borst, J Gerard G

    2010-06-01

    The release probability, the average probability that an active zone of a presynaptic terminal releases one or more vesicles following an action potential, is tightly regulated. Measurements in cultured neurons or in slices indicate that this probability can vary greatly between synapses, but on average it is estimated to be as high as 0.5. In vivo, however, the size of synaptic potentials is relatively independent of recent history, suggesting that release probability is much lower. Possible causes for this discrepancy include maturational differences, a higher spontaneous activity, a lower extracellular calcium concentration and more prominent tonic inhibition by ambient neurotransmitters during in vivo recordings. Existing evidence thus suggests that under physiological conditions in vivo, presynaptic action potentials trigger the release of neurotransmitter much less frequently than what is observed in in vitro preparations.

  17. Extreme High-Temperature Events: Changes in their probabilities with Changes in Mean Temperature.

    NASA Astrophysics Data System (ADS)

    Mearns, Linda O.; Katz, Richard W.; Schneider, Stephen H.

    1984-12-01

    Most climate impact studies rely on changes in means of meteorological variables, such as temperature, to estimate potential climate impacts, including effects on agricultural production. However, extreme meteorological events, say, a short period of abnormally high temperatures, can have a significant harmful effect on crop growth and final yield. The characteristics of daily temperature time series, specifically mean, variance and autocorrelation, are analyzed to determine possible ranges of probabilities of certain extreme temperature events [e.g., runs of consecutive daily maximum temperatures of at least 95°F (35°C)] with changes in mean temperature of the time series. The extreme temperature events considered are motivated primarily by agricultural concerns, particularly, the effects of high temperatures on corn yields in the U.S. Corn Belt. However, runs of high temperatures can also affect, for example, energy demand or morbidity and mortality of animals and humans. The relationships between changes in mean temperature and the corresponding changes in the probabilities of these extreme temperature events are quite nonlinear, with relatively small changes in mean temperature sometimes resulting in relatively large changes in event probabilities. In particular, the likelihood of occurrence of a run of five consecutive daily maximum temperatures of at least 95°F under a 3°F (1.7°C) increase in the mean (holding the variance and autocorrelation constant) is about three times greater than that under the current climate at Des Moines. Moreover, by allowing either the variance or the autocorrelation as well as the mean to change, this likelihood of a run event varies over a relatively wide range of values. These changes in the probabilities of extreme events need to be taken into consideration in order to obtain realistic estimates of the impact of climate changes such as increases in mean temperature that may arise from increases in atmospheric carbon dioxide
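
    A hedged Monte Carlo sketch of this kind of calculation: model daily maximum temperature as a stationary Gaussian AR(1) series and compare the probability of at least one run of five consecutive days at or above 95°F before and after a 3°F shift in the mean, holding the variance and autocorrelation fixed. The mean, standard deviation, autocorrelation, and season length below are illustrative assumptions, not the station parameters used in the paper.

        import numpy as np

        def run_probability(mean, sd=7.5, rho=0.6, threshold=95.0, run_len=5,
                            days=62, n_sims=20_000, seed=4):
            """P(at least one run of `run_len` consecutive days >= threshold) over a
            `days`-day season, for an AR(1) daily-maximum-temperature model."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_sims):
                eps = rng.normal(0.0, sd * np.sqrt(1 - rho**2), days)
                x = np.empty(days)
                x[0] = rng.normal(mean, sd)
                for t in range(1, days):
                    x[t] = mean + rho * (x[t - 1] - mean) + eps[t]
                hot = x >= threshold
                # any window of `run_len` consecutive hot days?
                if np.any(np.convolve(hot, np.ones(run_len), mode="valid") >= run_len):
                    hits += 1
            return hits / n_sims

        p_now = run_probability(mean=86.0)
        p_warm = run_probability(mean=89.0)      # +3 F shift in the mean only
        print(p_now, p_warm, p_warm / p_now)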

  18. On the probability of exceeding allowable leak rates through degraded steam generator tubes

    SciTech Connect

    Cizelj, L.; Sorsek, I.; Riesch-Oppermann, H.

    1997-02-01

    This paper discusses some possible ways of predicting the behavior of the total leak rate through the damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: Estimate the probability that the sum of all individual leak rates through degraded tubes exceeds the predefined acceptable value. The probabilistic approach is of course aiming at a reliable and computationally bearable estimate of the failure probability. A closed form solution is given for a special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of approximate methods with closed form results. SORM in particular shows acceptable agreement. The second numerical example considers a realistic case of the NPP in Krsko, Slovenia.
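
    A hedged numerical illustration of the special case mentioned above: if the individual leak rates are independent and identically exponentially distributed, their sum is Erlang (Gamma) distributed, so the probability that the total exceeds an allowable value has a closed form through the Gamma survival function. The tube count, mean leak rate, and allowable total below are invented numbers.

        from scipy.stats import gamma

        def prob_total_leak_exceeds(n_tubes, mean_leak_per_tube, allowable_total):
            """Sum of n iid Exponential(mean) leak rates ~ Gamma(shape=n, scale=mean);
            return P(total leak rate > allowable_total)."""
            return gamma.sf(allowable_total, a=n_tubes, scale=mean_leak_per_tube)

        # Illustrative: 200 degraded tubes, mean individual leak 0.02 gpm, allowable total 6 gpm
        print(prob_total_leak_exceeds(200, 0.02, 6.0))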

  19. Notes on the Implementation of Non-Parametric Statistics within the Westinghouse Realistic Large Break LOCA Evaluation Model (ASTRUM)

    SciTech Connect

    Frepoli, Cesare; Oriani, Luca

    2006-07-01

    In recent years, non-parametric or order statistics methods have been widely used to assess the impact of the uncertainties within Best-Estimate LOCA evaluation models. The bounding of the uncertainties is achieved with a direct Monte Carlo sampling of the uncertainty attributes, with the minimum trial number selected to 'stabilize' the estimation of the critical output values (peak cladding temperature (PCT), local maximum oxidation (LMO), and core-wide oxidation (CWO)). A non-parametric order statistics uncertainty analysis was recently implemented within the Westinghouse Realistic Large Break LOCA evaluation model, also referred to as 'Automated Statistical Treatment of Uncertainty Method' (ASTRUM). The implementation or interpretation of order statistics in safety analysis is not fully consistent within the industry. This has led to an extensive public debate among regulators and researchers which can be found in the open literature. The USNRC-approved Westinghouse method follows a rigorous implementation of the order statistics theory, which leads to the execution of 124 simulations within a Large Break LOCA analysis. This is a solid approach which guarantees that a bounding value (at 95% probability) of the 95{sup th} percentile for each of the three 10 CFR 50.46 ECCS design acceptance criteria (PCT, LMO and CWO) is obtained. The objective of this paper is to provide additional insights on the ASTRUM statistical approach, with a more in-depth analysis of pros and cons of the order statistics and of the Westinghouse approach in the implementation of this statistical methodology. (authors)
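
    A hedged sketch of the order-statistics sizing argument that underlies run counts such as 59 and 124: with coverage 0.95 and confidence 0.95, the smallest number of runs n for which the top order statistics of p outputs simultaneously bound their 95th percentiles satisfies a binomial condition (the Wilks / Guba-Makai-Pal formulation). The few lines below reproduce 59 for one output and 124 for three, but they are a generic textbook calculation, not a transcription of the ASTRUM methodology.

        from scipy.stats import binom

        def min_runs(p_outputs, gamma=0.95, beta=0.95):
            """Smallest n such that, using the largest value of each of p outputs from
            the same n runs, the joint confidence that all p values bound their
            gamma-quantiles is at least beta:  P(Bin(n, gamma) <= n - p) >= beta."""
            n = p_outputs
            while binom.cdf(n - p_outputs, n, gamma) < beta:
                n += 1
            return n

        print(min_runs(1))   # 59
        print(min_runs(3))   # 124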

  20. Helioseismology of a Realistic Magnetoconvective Sunspot Simulation

    NASA Technical Reports Server (NTRS)

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  1. HELIOSEISMOLOGY OF A REALISTIC MAGNETOCONVECTIVE SUNSPOT SIMULATION

    SciTech Connect

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L. Jr. E-mail: aaronb@cora.nwra.com E-mail: Thomas.L.Duvall@nasa.gov

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  2. Visualizing realistic 3D urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Aaron; Chen, Tuolin; Brunig, Michael; Schmidt, Hauke

    2003-05-01

    Visualizing complex urban environments has been an active research topic due to its wide variety of applications in city planning: road construction, emergency facilities planning, and optimal placement of wireless carrier base stations. Traditional 2D visualizations have been around for a long time but they only provide a schematic line-drawing bird's eye view and are sometimes confusing to understand due to the lack of depth information. Early versions of 3D systems have been developed for very expensive graphics workstations which seriously limited the availability. In this paper we describe a 3D visualization system for a desktop PC which integrates multiple resolutions of data and provides a realistic view of the urban environment.

  3. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    The booming electronic books (e-books), as an extension to the paper book, are popular with readers. Recently, many efforts have been put into realistic page-turning simulation of e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page into the cylinders with different radii and positions. Compared to the previous approaches, our method is able to imitate various effects efficiently and obtains a more natural animation of the turning page.

  4. Probability state modeling theory.

    PubMed

    Bagwell, C Bruce; Hunsberger, Benjamin C; Herbert, Donald J; Munson, Mark E; Hill, Beth L; Bray, Chris M; Preffer, Frederic I

    2015-07-01

    As the technology of cytometry matures, there is mounting pressure to address two major issues with data analyses. The first issue is to develop new analysis methods for high-dimensional data that can directly reveal and quantify important characteristics associated with complex cellular biology. The other issue is to replace subjective and inaccurate gating with automated methods that objectively define subpopulations and account for population overlap due to measurement uncertainty. Probability state modeling (PSM) is a technique that addresses both of these issues. The theory and important algorithms associated with PSM are presented along with simple examples and general strategies for autonomous analyses. PSM is leveraged to better understand B-cell ontogeny in bone marrow in a companion Cytometry Part B manuscript. Three short relevant videos are available in the online supporting information for both of these papers. PSM avoids the dimensionality barrier normally associated with high-dimensionality modeling by using broadened quantile functions instead of frequency functions to represent the modulation of cellular epitopes as cells differentiate. Since modeling programs ultimately minimize or maximize one or more objective functions, they are particularly amenable to automation and, therefore, represent a viable alternative to subjective and inaccurate gating approaches.

  5. Demonstrating a Realistic IP Mission Prototype

    NASA Technical Reports Server (NTRS)

    Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith

    2003-01-01

    Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.

  6. [Synthesis of realistic human faces from CT image and color photos].

    PubMed

    Chen, Jun; Yang, Jie

    2008-06-01

    In order to predict the post-plastic-surgical effect of patients, we propose a method for synthesizing a high-definition realistic human face from three 2D digital color photos and 3D face model constructed from CT data. The simple genetic algorithm is used to estimate the projective parameters from 3D face model to 2D photos via the correspondence between manually selected 2D and 3D points. Finally, a high-definition realistic human face with detailed texture and smooth transition from front to side is synthesized utilizing the multi-texture mapping mechanism.

  7. Probability learning and Piagetian probability conceptions in children 5 to 12 years old.

    PubMed

    Kreitler, S; Zigler, E; Kreitler, H

    1989-11-01

    This study focused on the relations between performance on a three-choice probability-learning task and conceptions of probability as outlined by Piaget concerning mixture, normal distribution, random selection, odds estimation, and permutations. The probability-learning task and four Piagetian tasks were administered randomly to 100 male and 100 female, middle SES, average IQ children in three age groups (5 to 6, 8 to 9, and 11 to 12 years old) from different schools. Half the children were from Middle Eastern backgrounds, and half were from European or American backgrounds. As predicted, developmental level of probability thinking was related to performance on the probability-learning task. The more advanced the child's probability thinking, the higher his or her level of maximization and hypothesis formulation and testing and the lower his or her level of systematically patterned responses. The results suggest that the probability-learning and Piagetian tasks assess similar cognitive skills and that performance on the probability-learning task reflects a variety of probability concepts.

  8. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
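    As a concrete illustration of the idea (not the authors' implementation), the sketch below assumes a toy two-dimensional standard-normal parameter space whose failure set is known to lie inside an axis-aligned box B; the probability of B is computed analytically and only the conditional probability of failure within B is estimated by sampling. All names and numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy problem (invented): 2-D standard-normal parameters; the system "fails"
# inside a small disk far out in the tail of the distribution.
center, radius = np.array([3.5, 3.5]), 0.5

def failed(x):
    return np.sum((x - center) ** 2, axis=-1) < radius ** 2

# Bounding box B = [3, 4] x [3, 4] contains the failure disk; for independent
# normal components its probability is available analytically.
lo, hi = center - radius, center + radius
p_box = np.prod(stats.norm.cdf(hi) - stats.norm.cdf(lo))

n = 20_000
p_crude = failed(rng.standard_normal((n, 2))).mean()   # usually 0 at this sample size

# Conditional sampling inside B (truncated normals, one per coordinate), then
# P(failure) = P(B) * P(failure | B).
x_cond = np.column_stack([stats.truncnorm.rvs(lo[i], hi[i], size=n, random_state=rng)
                          for i in range(2)])
p_cond = p_box * failed(x_cond).mean()

print(f"P(B)={p_box:.3e}  crude MC={p_crude:.3e}  conditional={p_cond:.3e}")
```

    Because the crude estimator rarely sees a failure at this sample size, the benefit of sampling only within the bounding set is visible immediately.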

  9. MSPI False Indication Probability Simulations

    SciTech Connect

    Dana Kelly; Kurt Vedros; Robert Youngblood

    2011-03-01

    This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be a false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and “false” would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in Kelly et al. (2010). The mid-white performance state was set at ΔCDF = 10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history’s ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values
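    The counting logic in the last few sentences can be sketched schematically. The toy model below is not the MSPI calculation (no CNI or log-normal priors, no FOTP/EDG modelling), and every number in it is invented; it only illustrates how false positive and false negative rates fall out of comparing a posterior-mean delta-CDF proxy against the 10⁻⁶/yr threshold over many simulated time histories.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic stand-in only: one monitored component with binomial failures on demand.
# All numbers are invented; they are not MSPI or NUREG-1753 values.
p_base    = 2.0e-3     # baseline failure probability on demand
birnbaum  = 1.0e-4     # assumed importance factor mapping delta-unreliability to delta-CDF (1/yr)
threshold = 1.0e-6     # green/white boundary on delta-CDF (1/yr)
n_demands = 300        # monitored demands per simulated time history
n_hist    = 100_000

def indicated_delta_cdf(p_true):
    """Posterior-mean unreliability (Jeffreys Beta(0.5, 0.5) prior) mapped to a delta-CDF proxy."""
    failures = rng.binomial(n_demands, p_true, size=n_hist)
    p_post = (failures + 0.5) / (n_demands + 1.0)
    return birnbaum * (p_post - p_base)

# Case 1: parameters at baseline, so the true indication is green;
# an indicated delta-CDF above the threshold is a false positive.
false_pos = np.mean(indicated_delta_cdf(p_base) > threshold)

# Case 2: the component is degraded so the true delta-CDF sits above the threshold
# ("white"); an indication below the threshold is a false negative.
p_degraded = p_base + 1.5 * threshold / birnbaum
false_neg = np.mean(indicated_delta_cdf(p_degraded) < threshold)

print(f"false positive probability ~ {false_pos:.4f}")
print(f"false negative probability ~ {false_neg:.4f}")
```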

  10. Explosion probability of unexploded ordnance: expert beliefs.

    PubMed

    MacDonald, Jacqueline Anne; Small, Mitchell J; Morgan, M G

    2008-08-01

    This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15-140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution, suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. The lack of consensus among experts suggests that empirical studies

  11. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
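    For readers unfamiliar with the problem, the disagreement usually reduces to which conditioning event is assumed; a minimal enumeration (my own illustration, not taken from the article) contrasts the two standard readings.

```python
from itertools import product
from fractions import Fraction

# Enumerate the four equally likely two-child families (B = boy, G = girl).
families = list(product("BG", repeat=2))

# Scenario 1: we are told "at least one of the children is a boy".
cond1 = [f for f in families if "B" in f]
p1 = Fraction(sum(f == ("B", "B") for f in cond1), len(cond1))

# Scenario 2: we are told "the first (say, older) child is a boy".
cond2 = [f for f in families if f[0] == "B"]
p2 = Fraction(sum(f == ("B", "B") for f in cond2), len(cond2))

print(f"P(two boys | at least one boy)   = {p1}")   # 1/3
print(f"P(two boys | older child is boy) = {p2}")   # 1/2
```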

  12. Coherent Assessment of Subjective Probability

    DTIC Science & Technology

    1981-03-01

    known results of de Finetti (1937, 1972, 1974), Smith (1961), and Savage (1971) and some recent results of Lindley (1980) concerning the use of...provides the motivation for de Finetti's definition of subjective probabilities as coherent bet prices. From the definition of the probability measure...subjective probability, the probability laws which are traditionally stated as axioms or definitions are obtained instead as theorems.

  13. Probabilities of transversions and transitions.

    PubMed

    Vol'kenshtein, M V

    1976-01-01

    The values of the mean relative probabilities of transversions and transitions have been refined on the basis of the data collected by Jukes and found to be equal to 0.34 and 0.66, respectively. Evolutionary factors increase the probability of transversions to 0.44. The relative probabilities of individual substitutions have been determined, and a detailed classification of the nonsense mutations has been given. Such mutations are especially probable in the UGG (Trp) codon. The highest probability of AG, GA transitions correlates with the lowest mean change in the hydrophobic nature of the amino acids coded.

  14. Determination of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Dietrich, Daniel L.; Ruff, Gary A.; Urban, David

    2013-01-01

    This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and sink for oxygen prescribed (input to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to more realistically treat the pressure relief system(s) of the spacecraft, include more combustion products (e.g. HF) in the analysis and attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.

  15. Challenges and solutions for realistic room simulation

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    2002-05-01

    Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.

  16. Mechanical stabilization of the Levitron's realistic model

    NASA Astrophysics Data System (ADS)

    Olvera, Arturo; De la Rosa, Abraham; Giordano, Claudia M.

    2016-11-01

    The stability of the magnetic levitation shown by the Levitron was studied by M.V. Berry as a six degrees of freedom Hamiltonian system using an adiabatic approximation. Further, H.R. Dullin found critical spin rate bounds where the levitation persists and R.F. Gans et al. offered numerical results regarding the initial conditions' manifold where this occurs. In the line of this series of works, first, we extend the equations of motion to include dissipation for a more realistic model, and then introduce a mechanical forcing to inject energy into the system in order to prevent the Levitron from falling. A systematic study of the flying time as a function of the forcing parameters is carried out, which yields detailed bifurcation diagrams showing an Arnold's tongues structure. The stability of these solutions was studied with the help of a novel method to compute the maximum Lyapunov exponent called MEGNO. The bifurcation diagrams for MEGNO reproduce the same Arnold's tongue structure.

  17. Differentiability of correlations in realistic quantum mechanics

    SciTech Connect

    Cabrera, Alejandro; Faria, Edson de; Pujals, Enrique; Tresser, Charles

    2015-09-15

    We prove a version of Bell’s theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell’s theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.

  18. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns, silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that Mahalanobis distance, Victor-Purpura metric, and Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems undiscriminative for these functional changes.

  19. Sialendoscopy Training: Presentation of a Realistic Model.

    PubMed

    Pascoto, Gabriela Robaskewicz; Stamm, Aldo Cassol; Lyra, Marcos

    2017-01-01

    Introduction Several surgical training simulators have been created for residents and young surgeons to gain experience with surgical procedures. Laboratory training is fundamental for acquiring familiarity with the techniques of surgery and skill in handling instruments. Objective The aim of this study is to present a novel simulator for training sialendoscopy. Method This realistic simulator was built with a synthetic thermo-retractile, thermo-sensible rubber which, when combined with different polymers, produces more than 30 different formulas. These formulas present textures, consistencies, and mechanical resistance similar to those of many human tissues. Results The authors present a training model to practice sialendoscopy. All aspects of the procedure are simulated: mouth opening, dilatation of the papillae, insertion of the scope, visualization of stones, extraction of the stones with graspers or baskets, and finally, stone fragmentation with the holmium laser. Conclusion This anatomical model for sialendoscopy training should be considerably useful for shortening the learning curve during the qualification of young surgeons while minimizing the consequences of technical errors.

  20. Sialendoscopy Training: Presentation of a Realistic Model

    PubMed Central

    Pascoto, Gabriela Robaskewicz; Stamm, Aldo Cassol; Lyra, Marcos

    2016-01-01

    Introduction Several surgical training simulators have been created for residents and young surgeons to gain experience with surgical procedures. Laboratory training is fundamental for acquiring familiarity with the techniques of surgery and skill in handling instruments. Objective The aim of this study is to present a novel simulator for training sialendoscopy. Method This realistic simulator was built with a synthetic thermo-retractile, thermo-sensible rubber which, when combined with different polymers, produces more than 30 different formulas. These formulas present textures, consistencies, and mechanical resistance similar to those of many human tissues. Results The authors present a training model to practice sialendoscopy. All aspects of the procedure are simulated: mouth opening, dilatation of the papillae, insertion of the scope, visualization of stones, extraction of the stones with graspers or baskets, and finally, stone fragmentation with the holmium laser. Conclusion This anatomical model for sialendoscopy training should be considerably useful for shortening the learning curve during the qualification of young surgeons while minimizing the consequences of technical errors. PMID:28050202

  1. Physics and Probability

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Milonni, P. W.

    2004-12-01

    Preface; 1. Recollection of an independent thinker Joel A. Snow; 2. A look back: early applications of maximum entropy estimation to quantum statistical mechanics D. J. Scalapino; 3. The Jaynes-Cummings revival B. W. Shore and P. L. Knight; 4. The Jaynes-Cummings model and the one-atom maser H. Walther; 5. The Jaynes-Cummings model is alive and well P. Meystre; 6. Self-consistent radiation reaction in quantum optics - Jaynes' influence and a new example in cavity QED J. H. Eberly; 7. Enhancing the index of refraction in a nonabsorbing medium: phaseonium versus a mixture of two-level atoms M. O. Scully, T. W. Hänsch, M. Fleischhauer, C. H. Keitel and Shi-Yao Zhu; 8. Ed Jaynes' steak dinner problem II Michael D. Crisp; 9. Source theory of vacuum field effects Peter W. Milonni; 10. The natural line shape Edwin A. Power; 11. An operational approach to Schrödinger's cat L. Mandel; 12. The classical limit of an atom C. R. Stroud, Jr.; 13. Mutual radiation reaction in spontaneous emission Richard J. Cook; 14. A model of neutron star dynamics F. W. Cummings; 15. The kinematic origin of complex wave function David Hestenes; 16. On radar target identification C. Ray Smith; 17. On the difference in means G. Larry Bretthorst; 18. Bayesian analysis, model selection and prediction Arnold Zellner and Chung-ki Min; 19. Bayesian numerical analysis John Skilling; 20. Quantum statistical inference R. N. Silver; 21. Application of the maximum entropy principle to nonlinear systems far from equilibrium H. Haken; 22. Nonequilibrium statistical mechanics Baldwin Robertson; 23. A backward look to the future E. T. Jaynes; Appendix. Vita and bibliography of Edwin T. Jaynes; Index.

  2. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817

  3. Non-local crime density estimation incorporating housing information.

    PubMed

    Woodworth, J T; Mohler, G O; Bertozzi, A L; Brantingham, P J

    2014-11-13

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H(1) Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image.
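    The motivating problem, plain density estimates leaking probability mass into regions where events cannot occur, can be seen with a much simpler stand-in than the paper's non-local penalized likelihood: a kernel density estimate masked to the valid region and renormalized. Everything in the sketch (domain, mask, event locations) is invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)

# Toy domain: unit square with a known "residential" mask (events can occur only
# where valid is True); in the paper the mask comes from housing data or imagery.
grid = np.linspace(0.0, 1.0, 101)
gx, gy = np.meshgrid(grid, grid)
valid = gx < 0.5

# Simulated event locations, all inside the valid half of the square.
events = np.column_stack([rng.uniform(0.0, 0.5, 60), rng.uniform(0.0, 1.0, 60)])

# A plain kernel density estimate leaks probability mass into the invalid region.
kde = gaussian_kde(events.T)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
leaked = density[~valid].sum() / density.sum()

# Crudest possible spatial prior: zero the estimate off the mask and renormalise.
masked = np.where(valid, density, 0.0)
masked /= masked.sum() * (grid[1] - grid[0]) ** 2

print(f"mass assigned to non-residential area by the plain KDE: {leaked:.1%}")
```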

  4. Pointwise probability reinforcements for robust statistical inference.

    PubMed

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include e.g. outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation.

  5. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not lead to a lower score on the probability topic. The study also revealed that students who were motivated by the probability workshop showed a positive improvement in their performance on the probability topic compared with before the workshop. In addition, there was a significant difference in performance between genders, with better achievement among female students than among male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  6. Advantages of Realistic Model Based on computational method: NDSHA versus Standard PSHA

    NASA Astrophysics Data System (ADS)

    Irwandi, I.

    2017-02-01

    The standard probabilistic seismic hazard assessment (PSHA) procedure oversimplifies earthquake recurrence by representing it with a linear relationship; that relationship is satisfied only if the study area is large with respect to the linear dimensions of the sources. PSHA also relies on attenuation relations, which are usually not translation invariant in phase space. To address the problem of data completeness, deterministic seismic hazard assessment (DSHA) selects the most credible earthquakes instead of using a recurrence relationship; however, DSHA still relies on attenuation relations and assumes the same propagation model for all events, which is not a very realistic hypothesis. In contrast, the NDSHA procedure has the advantage of calculating strong ground motion from realistic synthetic seismograms based on source-specific properties and the available structural model. NDSHA also produces more information, such as PGD, PGV, and PGA for both horizontal and vertical components. Using realistic computations for Banda Aceh, NDSHA provides values for each of these components that correspond to between 10% and 2% probability of exceedance of the PGA from the PSHA computation. Given these limitations of PSHA, Indonesia needs to establish NDSHA research for areas with critical infrastructure facing seismic hazard.

  7. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non
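    As a small illustration of the functional being minimized (not the authors' Newton/boundary-value solver), the sketch below evaluates the discretized integral of H(p(t)) times arc length along the straight-line path between two base-frequency vectors; since that is just one admissible path, the value upper-bounds the minimal-entropy distance. The example vectors are invented.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy H(p) of a probability vector (natural logarithm)."""
    p = np.clip(p, eps, None)
    return -np.sum(p * np.log(p), axis=-1)

def entropy_path_cost(a, b, n_steps=1000):
    """Discretized integral of H(p(t)) * ||p'(t)|| dt along the straight-line path
    p(t) = (1 - t) a + t b, 0 <= t <= 1.  One admissible path, so this value
    upper-bounds the minimal-entropy distance between a and b."""
    t = np.linspace(0.0, 1.0, n_steps + 1)[:, None]
    path = (1.0 - t) * a + t * b                         # stays on the simplex
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)  # ||delta p|| per segment
    h_mid = entropy(0.5 * (path[:-1] + path[1:]))        # H at segment midpoints
    return float(np.sum(h_mid * seg))

# Invented base-frequency profiles (A, C, G, T) of two sequences.
a = np.array([0.30, 0.20, 0.20, 0.30])
b = np.array([0.10, 0.40, 0.35, 0.15])
print(f"entropy-weighted length of the linear path: {entropy_path_cost(a, b):.4f}")
```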

  8. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to try to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.

  9. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  10. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  11. A realistic molecular model of cement hydrates.

    PubMed

    Pellenq, Roland J-M; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J; Buehler, Markus J; Yip, Sidney; Ulm, Franz-Josef

    2009-09-22

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this "liquid stone" gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm(3)) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)(1.65)(SiO2)(H2O)(1.75), also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking.

  12. Fixation of strategies driven by switching probabilities in evolutionary games

    NASA Astrophysics Data System (ADS)

    Xu, Zimin; Zhang, Jianlei; Zhang, Chunyan; Chen, Zengqiang

    2016-12-01

    We study the evolutionary dynamics of strategies in finite, homogeneous, well-mixed populations by means of the pairwise comparison process, the core of which is the proposed switching probability. Previous studies of this subject are usually based on known payoff comparisons between the players involved, which is an idealized assumption. In real social systems, acquiring the accurate payoffs of partners at each round of interaction may not be easy. We therefore bypass the need for explicit knowledge of payoffs and encode them in the willingness of an individual to shift from her current strategy to the competing one, so that the switching probabilities are wholly independent of payoffs. In this way, strategy updating can be performed when the game model is fixed but payoffs are unclear, extending the idealized assumption to a more realistic one. We explore the impact of the switching probability on the fixation probability and derive a simple formula that determines the fixation probability. Moreover, we find that cooperation dominates defection if the probability of cooperation replacing defection is always larger than the probability of defection replacing cooperation in finite populations. Last, we investigate the influence of the model parameters on the fixation of strategies in the framework of three concrete game models: the prisoner's dilemma, the snowdrift game and the stag-hunt game, which effectively portray the characteristics of cooperative dilemmas in real social systems.
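    A minimal simulation of the payoff-free pairwise comparison update described above is sketched below; the population size, switching probabilities and starting condition (a single cooperator) are invented for illustration. For constant switching probabilities the estimate can be checked against the standard birth-death formula (1 - g)/(1 - g^N) with g = q_dc/q_cd.

```python
import numpy as np

rng = np.random.default_rng(2)

def fixation_probability(n_pop=30, n_runs=2000, q_cd=0.6, q_dc=0.4):
    """Estimate the fixation probability of cooperation in a finite well-mixed population.
    q_cd: probability that a defector adopts cooperation after comparison with a cooperator;
    q_dc: probability that a cooperator adopts defection after comparison with a defector.
    Both are fixed numbers, independent of any payoffs."""
    fixed = 0
    for _ in range(n_runs):
        n_coop = 1                                   # start from a single cooperator
        while 0 < n_coop < n_pop:
            focal = rng.integers(n_pop)              # focal individual
            model = rng.integers(n_pop)              # randomly met co-player
            focal_c, model_c = focal < n_coop, model < n_coop   # first n_coop indices are C
            if focal_c and not model_c and rng.random() < q_dc:
                n_coop -= 1                          # cooperator switches to defection
            elif (not focal_c) and model_c and rng.random() < q_cd:
                n_coop += 1                          # defector switches to cooperation
        fixed += (n_coop == n_pop)
    return fixed / n_runs

# For constant switching probabilities the birth-death formula gives
# rho = (1 - g) / (1 - g**N) with g = q_dc / q_cd (about 1/3 for these numbers).
print(f"estimated fixation probability of cooperation: {fixation_probability():.3f}")
```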

  13. Tsunami probability in the Caribbean Region

    USGS Publications Warehouse

    Parsons, T.; Geist, E.L.

    2008-01-01

    We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically-calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerical-modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0 - 30% regionally. © Birkhäuser 2008.
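    Under the Poissonian model, converting a cell's mean runup rate into a 30-year probability is a one-liner; the rates in the snippet below are placeholders chosen only to span the 0-30% range quoted above.

```python
import numpy as np

# Poissonian runup probability: if a coastal cell's mean rate of >= 0.5 m runup
# is lam events per year, the chance of at least one such event in T years is
#     P = 1 - exp(-lam * T).
# The rates below are placeholders spanning the 0-30% range quoted for T = 30 yr.
T = 30.0
for lam in (1e-4, 1e-3, 1e-2):
    print(f"rate {lam:.0e}/yr -> 30-yr runup probability {1.0 - np.exp(-lam * T):.1%}")
```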

  14. Variance estimation for stratified propensity score estimators.

    PubMed

    Williamson, E J; Morley, R; Lucas, A; Carpenter, J R

    2012-07-10

    Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
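    The closing recommendation, that the bootstrap should include the propensity-score estimation step, can be sketched as follows on simulated data. The data-generating model, the stratification into quintiles, and all numbers are my own illustration, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Simulated observational data: one confounder x, treatment t depending on x,
# outcome y depending on t and x (true treatment effect = 1.0).
n = 2000
x = rng.normal(size=n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.8 * x)))
y = 1.0 * t + 0.5 * x + rng.normal(size=n)

def stratified_effect(x, t, y, n_strata=5):
    """Treatment-effect estimate by stratification on an estimated propensity score."""
    ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
    est, weight = 0.0, 0.0
    for s in range(n_strata):
        m = strata == s
        if t[m].sum() == 0 or (1 - t[m]).sum() == 0:
            continue                     # skip strata with no treated or no controls
        est += m.sum() * (y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
        weight += m.sum()
    return est / weight

# Bootstrap the whole procedure, re-estimating the propensity score in each resample,
# rather than treating the score as known.
point = stratified_effect(x, t, y)
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(stratified_effect(x[idx], t[idx], y[idx]))
print(f"effect = {point:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
```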

  15. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Asscssment Program EMAP) can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  16. Information Processing Using Quantum Probability

    NASA Astrophysics Data System (ADS)

    Behera, Laxmidhar

    2006-11-01

    This paper presents an information processing paradigm that introduces collective response of multiple agents (computational units) while the level of intelligence associated with the information processing has been increased manifold. It is shown that if the potential field of the Schroedinger wave equation is modulated using a self-organized learning scheme, then the probability density function associated with the stochastic data is transferred to the probability amplitude function which is the response of the Schroedinger wave equation. This approach illustrates that information processing of data with stochastic behavior can be efficiently done using quantum probability instead of classical probability. The proposed scheme has been demonstrated through two applications: denoising and adaptive control.

  17. Maximum entropy principle and partial probability weighted moments

    NASA Astrophysics Data System (ADS)

    Deng, Jian; Pandey, M. D.; Xie, W. C.

    2012-05-01

    Maximum entropy principle (MaxEnt) is usually used for estimating the probability density function under specified moment constraints. The density function is then integrated to obtain the cumulative distribution function, which needs to be inverted to obtain a quantile corresponding to some specified probability. In such analysis, consideration of higher order moments is important for accurate modelling of the distribution tail. There are three drawbacks for this conventional methodology: (1) Estimates of higher order (>2) moments from a small sample of data tend to be highly biased; (2) It can merely cope with problems with complete or noncensored samples; (3) Only probability weighted moments of integer orders have been utilized. These difficulties inevitably induce bias and inaccuracy of the resultant quantile estimates and therefore have been the main impediments to the application of the MaxEnt Principle in extreme quantile estimation. This paper attempts to overcome these problems and presents a distribution free method for estimating the quantile function of a non-negative random variable using the principle of maximum partial entropy subject to constraints of the partial probability weighted moments estimated from a censored sample. The main contributions include: (1) New concepts, i.e., partial entropy, fractional partial probability weighted moments, and partial Kullback-Leibler measure are elegantly defined; (2) Maximum entropy principle is re-formulated to be constrained by fractional partial probability weighted moments; (3) New distribution free quantile functions are derived. Numerical analyses are performed to assess the accuracy of extreme value estimates computed from censored samples.
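    For context, the conventional quantities being generalized here are the integer-order probability weighted moments beta_r = E[X F(X)^r]; a standard unbiased complete-sample estimator (not the paper's partial, fractional-order version) is sketched below on synthetic extreme-value data.

```python
import numpy as np
from math import comb

def pwm(sample, r):
    """Unbiased complete-sample estimator of beta_r = E[X * F(X)^r]."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # weight of the i-th order statistic: C(i-1, r) / C(n-1, r)  (zero for i <= r)
    w = np.array([comb(i - 1, r) for i in range(1, n + 1)]) / comb(n - 1, r)
    return np.mean(w * x)

rng = np.random.default_rng(4)
sample = rng.gumbel(loc=0.0, scale=1.0, size=500)      # synthetic extreme-value data
print([round(pwm(sample, r), 4) for r in range(4)])    # beta_0 .. beta_3
```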

  18. Estimating source regions of European emissions of trace gases from observations at Mace Head

    NASA Astrophysics Data System (ADS)

    Ryall, D. B.; Derwent, R. G.; Manning, A. J.; Simmonds, P. G.; O'Doherty, S.

    A technique is described for identifying probable source locations for a range of greenhouse and ozone-depleting trace gases from the long-term measurements made at Mace Head, Ireland. The Met. Office's dispersion model NAME is used to predict concentrations at Mace Head from all possible sources in Europe, then source regions identified as those which consistently lead to elevated concentrations at Mace Head. Estimates of European emissions and their distribution are presented for a number of trace gases for the period 1995-1998. Estimated emission patterns are realistic, given the nature and varied applications of the species considered. The results indicate that whilst there are limitations, useful information about source distribution can be extracted from continuous measurements at a remote site. It is probable that much improved estimates could be derived if observations were available from a number of sites. The ability to assess emissions has obvious implications in monitoring compliance with internationally agreed quota and protocols.

  19. Review of Literature for Model Assisted Probability of Detection

    SciTech Connect

    Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.

    2014-09-30

    This is a draft technical letter report for the NRC client documenting a literature review of model assisted probability of detection (MAPOD) for potential application to nuclear power plant components for improvement of field NDE performance estimations.

  20. Probability Prediction and Classification. Research Report. RR-04-19

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2004-01-01

    Criteria for prediction of multinomial responses are examined in terms of estimation bias. Logarithmic penalty and least squares are quite similar in behavior but quite different from maximum probability. The differences ultimately reflect deficiencies in the behavior of the criterion of maximum probability.
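    Reading "maximum probability" as the 0-1 criterion on the modal category (an assumption on my part), the three criteria compared in the report take the following standard forms for a single multinomial forecast; the numbers are invented.

```python
import numpy as np

# Standard forms of the three criteria for a multinomial forecast p and
# observed category y; numbers are invented for illustration.
def log_penalty(p, y):           # logarithmic penalty (log loss)
    return -np.log(p[y])

def least_squares(p, y):         # quadratic (Brier-type) penalty
    e = np.zeros_like(p)
    e[y] = 1.0
    return np.sum((p - e) ** 2)

def max_probability_error(p, y): # 0-1 criterion: is the modal category wrong?
    return float(np.argmax(p) != y)

p = np.array([0.6, 0.3, 0.1])    # forecast over three categories
for y in range(3):
    print(y, round(log_penalty(p, y), 3), round(least_squares(p, y), 3),
          max_probability_error(p, y))
```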

  1. On the Provenance of Judgments of Conditional Probability

    ERIC Educational Resources Information Center

    Zhao, Jiaying; Shah, Anuj; Osherson, Daniel

    2009-01-01

    In standard treatments of probability, Pr(A|B) is defined as the ratio of Pr(A∩B) to Pr(B), provided that Pr(B) > 0. This account of conditional probability suggests a psychological question, namely, whether estimates of Pr(A|B) arise in the mind via implicit calculation of…

  2. Realistic Bomber Training Initiative. Environmental Impact Statement. Volume 1

    DTIC Science & Technology

    2000-01-01

    language for the decisionmaker (in this case the Air Force) and the public the environmental process we used for RBTI. The product that we use to... Realistic Bomber Training Initiative Final EIS, Table of Contents: Executive Summary; 1.2.3 Successful Combat Missions Require Realistic, Integrated Training.

  3. Realistic Models for Filling Factors in HII Regions

    NASA Astrophysics Data System (ADS)

    Spangler, Steven R.; Costa, Allison H.; Bergerud, Brandon M.; Beauchamp, Kara M.

    2017-01-01

    One of the parameters used to describe HII regions and other ionized parts of the interstellar medium is the filling factor, defined as the volume fraction of an HII region occupied by matter. The best observational evidence for the existence of a filling factor less than unity is a discrepancy between the electron density derived from density-sensitive line ratios and the root mean square density obtained from emission measure measurements. Following the early, influential study by Osterbrock and Flather (ApJ 129, 26, 1959), most investigations of HII regions envision these objects as a group of isolated cells of high gas density embedded in a vacuum. This picture is at serious odds with more direct measurements of other astrophysical plasmas like the solar wind, where the density follows a less extreme probability distribution function (pdf) such as an exponential or lognormal. We have carried out a set of simulations in which model HII regions are created with different density pdfs such as exponential and lognormal as well as the extreme case of two delta functions. We calculate the electron density as inferred from spectroscopic line ratios and emission measures for all lines of sight through the model nebulas. In the cases of exponential and lognormal pdfs, the spectroscopically derived densities are higher than those obtained by the emission measures by factors of 20 to 100 percent. These are considerably smaller than values often reported in the literature, which can be an order of magnitude or greater. We will discuss possible ways to reconcile realistic density pdfs such as measured in space and laboratory plasmas with the results from astronomical spectroscopic measurements. Finally, we point out that for the Orion Nebula, the density discrepancy is due to geometry, not filling factor (O'Dell, ARAA 39, 99, 2001).

  4. Capture probabilities for secondary resonances

    NASA Technical Reports Server (NTRS)

    Malhotra, Renu

    1990-01-01

    A perturbed pendulum model is used to analyze secondary resonances, and it is shown that a self-similarity between secondary and primary resonances exists. Henrard's (1982) theory is used to obtain formulas for the capture probability into secondary resonances. The tidal evolution of Miranda and Umbriel is considered as an example, and significant probabilities of capture into secondary resonances are found.

  5. Realistic respiratory motion margins for external beam partial breast irradiation

    SciTech Connect

    Conroy, Leigh; Quirk, Sarah; Smith, Wendy L.

    2015-09-15

    Purpose: Respiratory margins for partial breast irradiation (PBI) have been largely based on geometric observations, which may overestimate the margin required for dosimetric coverage. In this study, dosimetric population-based respiratory margins and margin formulas for external beam partial breast irradiation are determined. Methods: Volunteer respiratory data and anterior–posterior (AP) dose profiles from clinical treatment plans of 28 3D conformal radiotherapy (3DCRT) PBI patient plans were used to determine population-based respiratory margins. The peak-to-peak amplitudes (A) of realistic respiratory motion data from healthy volunteers were scaled from A = 1 to 10 mm to create respiratory motion probability density functions. Dose profiles were convolved with the respiratory probability density functions to produce blurred dose profiles accounting for respiratory motion. The required margins were found by measuring the distance between the simulated treatment and original dose profiles at the 95% isodose level. Results: The symmetric dosimetric respiratory margins to cover 90%, 95%, and 100% of the simulated treatment population were 1.5, 2, and 4 mm, respectively. With patient set up at end exhale, the required margins were larger in the anterior direction than the posterior. For respiratory amplitudes less than 5 mm, the population-based margins can be expressed as a fraction of the extent of respiratory motion. The derived formulas in the anterior/posterior directions for 90%, 95%, and 100% simulated population coverage were 0.45A/0.25A, 0.50A/0.30A, and 0.70A/0.40A. The differences in formulas for different population coverage criteria demonstrate that respiratory trace shape and baseline drift characteristics affect individual respiratory margins even for the same average peak-to-peak amplitude. Conclusions: A methodology for determining population-based respiratory margins using real respiratory motion patterns and dose profiles in the AP direction was
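    The blurring step described above, convolving a static dose profile with a respiratory displacement PDF and reading off the shift of the 95% isodose, can be sketched as follows. The profile, the motion PDF, and the sign convention (+z anterior, set-up at end exhale so displacements are mostly positive) are invented stand-ins, not the study's data.

```python
import numpy as np

# Illustrative 1-D anterior-posterior (+z = anterior) dose profile in per cent:
# 100% plateau for |z| <= 15 mm with 5 mm linear penumbrae (grid spacing 0.1 mm).
dz = 0.1
z = np.arange(-40.0, 40.0 + dz, dz)
dose = np.clip((20.0 - np.abs(z)) / 5.0, 0.0, 1.0) * 100.0

# Made-up respiratory displacement PDF (mm) standing in for the volunteer traces:
# set-up at end exhale, so motion is mostly anterior (positive displacements).
s = np.arange(-1.0, 5.0 + dz, dz)
pdf = np.exp(-0.5 * ((s - 1.5) / 1.0) ** 2)
pdf /= pdf.sum()

# Dose accumulated by tissue at planning position z while it oscillates by s:
# average of the static profile sampled at z + s, weighted by the motion PDF.
blurred = sum(p * np.interp(z + si, z, dose) for si, p in zip(s, pdf))

def edges(profile, level=95.0):
    idx = np.where(profile >= level)[0]
    return z[idx[0]], z[idx[-1]]              # (posterior edge, anterior edge)

(post0, ant0), (post1, ant1) = edges(dose), edges(blurred)
print(f"anterior margin  ~ {max(0.0, ant0 - ant1):.1f} mm")
print(f"posterior margin ~ {max(0.0, post1 - post0):.1f} mm")
```

    With the displacement distribution biased anteriorly, the 95% coverage retreats on the anterior side but not the posterior one, which is the qualitative asymmetry the abstract describes.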

  6. A probabilistic approach to realistic face synthesis with a single uncalibrated image.

    PubMed

    Shim, Hyunjung

    2012-08-01

    This paper presents a novel approach to automatic face modeling for realistic synthesis from an unknown face image, using a probabilistic face diffuse model and a generic face specular map. We construct a probabilistic face diffuse model for estimating the albedo and normals of the input face. Then, we develop a generic face specular map for estimating the specularity of face. Using the estimated albedo, normal and specular information, we can synthesize the face under arbitrary lighting and viewing directions realistically. Unlike many existing techniques, our approach can extract both the diffuse and specular information of face without involving an intensive 3D matching procedure. We conduct three different experiments to show our improvement over the prior art. First, we compare the proposed algorithm with previous techniques, including the state of the art, to demonstrate our achievement in realistic face synthesis. Moreover, we evaluate the proposed algorithm over non-automatic face modeling techniques through a subjective study. This evaluation is meaningful in that it tells us how far the proposed algorithm as well as others are from the real photograph in terms of the perceptual quality. Finally, we apply our face model for improving the face recognition performance under varying illumination conditions and show that the proposed algorithm is effective to enhance the face recognition rate. Thanks to the compact representation and the effective inference scheme, our technique is applicable for many practical applications, such as avatar creation, digital face cloning, face normalization, deidentification and many others.

  7. Proficiency Scaling Based on Conditional Probability Functions for Attributes

    DTIC Science & Technology

    1993-10-01

    4.1 Non-parametric regression estimates as probability functions for attributes Non-parametric estimation of the unknown density function f from a plot...as construction of confidence intervals for PFAs and further improvement of non-parametric estimation methods are not discussed in this paper. The... parametric estimation of PFAs will be illustrated with the attribute mastery patterns of SAT M Section 4. In the next section, analysis results will be

  8. Quantum computing with realistically noisy devices.

    PubMed

    Knill, E

    2005-03-03

    In theory, quantum computers offer a means of solving problems that would be intractable on conventional computers. Assuming that a quantum computer could be constructed, it would in practice be required to function with noisy devices called 'gates'. These gates cause decoherence of the fragile quantum states that are central to the computer's operation. The goal of so-called 'fault-tolerant quantum computing' is therefore to compute accurately even when the error probability per gate (EPG) is high. Here we report a simple architecture for fault-tolerant quantum computing, providing evidence that accurate quantum computing is possible for EPGs as high as three per cent. Such EPGs have been experimentally demonstrated, but to avoid excessive resource overheads required by the necessary architecture, lower EPGs are needed. Assuming the availability of quantum resources comparable to the digital resources available in today's computers, we show that non-trivial quantum computations at EPGs of as high as one per cent could be implemented.

  9. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  10. Failure probability under parameter uncertainty.

    PubMed

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and that the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications.
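
    The central claim, that setting the threshold at a quantile estimated from data inflates the expected failure frequency above its nominal level by an amount that does not depend on the true parameters, can be checked by simulation for the log-normal case. The sample size, nominal level, plug-in quantile estimator, and parameter values below are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      def expected_failure_freq(n=30, nominal=0.01, mu=0.0, sigma=1.0,
                                reps=200_000, seed=1):
          # Expected frequency with which a log-normal loss exceeds a threshold
          # set at the (1 - nominal) quantile estimated from a sample of size n.
          rng = np.random.default_rng(seed)
          z = norm.ppf(1.0 - nominal)
          logs = rng.normal(mu, sigma, size=(reps, n))     # log-losses
          mu_hat = logs.mean(axis=1)
          sigma_hat = logs.std(axis=1, ddof=1)
          log_threshold = mu_hat + z * sigma_hat           # estimated quantile (log scale)
          true_exceedance = 1.0 - norm.cdf((log_threshold - mu) / sigma)
          return true_exceedance.mean()

      # The inflated frequency exceeds the nominal 1% level and is (up to Monte
      # Carlo noise) the same for different "true" parameters, consistent with
      # the abstract above.
      print(expected_failure_freq(mu=0.0, sigma=1.0))
      print(expected_failure_freq(mu=5.0, sigma=2.5))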

  11. Probabilities and Surprises: A Realist Approach to Identifying Linguistic and Social Patterns, with Reference to an Oral History Corpus

    ERIC Educational Resources Information Center

    Sealey, Alison

    2010-01-01

    The relationship between language and identity has been explored in a number of ways in applied linguistics, and this article focuses on a particular aspect of it: self-representation in the oral history interview. People from a wide range of backgrounds, currently resident in one large city in England, were asked to reflect on their lives as part…

  12. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
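
    A heavily simplified abstraction of the incremental idea (not the authors' algorithm) is a greedy loop that synthesizes rules for the most probable unhandled situations until the covered probability mass reaches a target, so the process can be stopped at any time with a monotonically improving rule set. The data structures and probabilities below are illustrative assumptions.

      def anytime_rule_synthesis(situation_probs, synthesize_rule, target=0.95):
          # Greedy incremental synthesis: handle situations in order of their
          # probability until the covered probability mass reaches `target`.
          # situation_probs : dict mapping situation -> probability of occurring
          # synthesize_rule : callable producing a control rule for one situation
          rules, covered = [], 0.0
          for situation, prob in sorted(situation_probs.items(), key=lambda kv: -kv[1]):
              if covered >= target:
                  break                  # 'anytime' cut-off: current rules suffice
              rules.append(synthesize_rule(situation))
              covered += prob
          return rules, covered

      # Toy usage with hypothetical situations and a trivial rule constructor.
      probs = {"nominal": 0.7, "sensor_glitch": 0.2, "arm_stuck": 0.08, "power_dip": 0.02}
      rules, mass = anytime_rule_synthesis(probs, lambda s: f"rule_for({s})", target=0.95)
      print(rules, mass)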

  13. The role of ANS acuity and numeracy for the calibration and the coherence of subjective probability judgments

    PubMed Central

    Winman, Anders; Juslin, Peter; Lindskog, Marcus; Nilsson, Håkan; Kerimi, Neda

    2014-01-01

    The purpose of the study was to investigate how numeracy and acuity of the approximate number system (ANS) relate to the calibration and coherence of probability judgments. Based on the literature on number cognition, a first hypothesis was that those with lower numeracy would maintain a less linear use of the probability scale, contributing to overconfidence and nonlinear calibration curves. A second hypothesis was that poorer acuity of the ANS would also be associated with overconfidence and non-linearity. A third hypothesis, in line with dual-systems theory (e.g., Kahneman and Frederick, 2002), was that people higher in numeracy should have better access to the normative probability rules, allowing them to decrease the rate of conjunction fallacies. Data from 213 participants sampled from the Swedish population showed that: (i) in line with the first hypothesis, overconfidence and the linearity of the calibration curves were related to numeracy, such that people higher in numeracy were well calibrated, with zero overconfidence. (ii) ANS was not associated with overconfidence…
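
    In their simplest form, the calibration curve and the overconfidence score referred to here can be computed by binning judgments by stated confidence and comparing mean confidence with the proportion correct. The binning scheme and toy data below are illustrative assumptions rather than the study's procedure.

      import numpy as np

      def calibration_curve(confidence, correct, bins=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
          # Per-bin (mean confidence, proportion correct) pairs, plus overall
          # overconfidence = mean confidence - overall proportion correct.
          confidence = np.asarray(confidence, float)
          correct = np.asarray(correct, float)
          edges = np.asarray(bins)
          idx = np.clip(np.searchsorted(edges, confidence, side="right") - 1,
                        0, len(edges) - 2)
          curve = []
          for b in range(len(edges) - 1):
              mask = idx == b
              if mask.any():
                  curve.append((confidence[mask].mean(), correct[mask].mean()))
          overconfidence = confidence.mean() - correct.mean()
          return curve, overconfidence

      # Toy usage: half-range confidence judgments from a simulated
      # overconfident judge (true accuracy runs 10 points below confidence).
      rng = np.random.default_rng(0)
      conf = rng.uniform(0.5, 1.0, 400)
      corr = (rng.random(400) < conf - 0.1).astype(int)
      curve, oc = calibration_curve(conf, corr)
      print(curve, oc)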