Science.gov

Sample records for realistic probability estimates

  1. Realistic Probability Estimates For Destructive Overpressure Events In Heated Center Wing Tanks Of Commercial Jet Aircraft

    SciTech Connect

    Alvares, N; Lambert, H

    2007-02-07

    The Federal Aviation Administration (FAA) identified 17 accidents that may have resulted from fuel tank explosions on commercial aircraft from 1959 to 2001. Seven events involved JP 4 or JP 4/Jet A mixtures that are no longer used for commercial aircraft fuel. The remaining 10 events involved Jet A or Jet A1 fuels that are in current use by the commercial aircraft industry. Four fuel tank explosions occurred in center wing tanks (CWTs) where on-board appliances can potentially transfer heat to the tank. These tanks are designated "Heated Center Wing Tanks" (HCWTs). Since 1996, the FAA has significantly increased the rate at which it has mandated airworthiness directives (ADs) directed at the elimination of ignition sources. This effort includes the adoption, in 2001, of Special Federal Aviation Regulation 88 of 14 CFR part 21 (SFAR 88, "Fuel Tank System Fault Tolerance Evaluation Requirements"). This paper addresses the effectiveness of SFAR 88 in reducing HCWT ignition source probability. Our statistical analysis, relating the occurrence of both on-ground and in-flight HCWT explosions to the cumulative flight hours of commercial passenger aircraft containing HCWTs, reveals that the best estimate of the HCWT explosion rate is 1 explosion in 1.4 × 10^8 flight hours. Based on an analysis of SFAR 88 by Sandia National Laboratories and our independent analysis, SFAR 88 reduces the historical HCWT explosion risk by at least a factor of 10, thus meeting the FAA risk criterion of 1 accident in a billion flight hours. This paper also surveys and analyzes parameters for Jet A fuel ignition in HCWTs. Because of the paucity of in-flight HCWT explosions, we conclude that the combination of conditions necessary and sufficient to produce an HCWT explosion with sufficient overpressure to rupture the tank is extremely rare.

  2. Cold and hot cognition: quantum probability theory and realistic psychological modeling.

    PubMed

    Corr, Philip J

    2013-06-01

    Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma).

  3. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
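
    A minimal Python sketch of the symmetric two-point version of this idea (the example function g and the moments of X below are hypothetical, and the equal weights assume a distribution with zero skewness; the paper's higher-order and multivariate extensions are not shown):

      import numpy as np

      def two_point_moments(g, mean_x, std_x):
          """Two-point estimate of the first two moments of Y = g(X).

          For a symmetric X the points are the mean +/- one standard
          deviation, each carrying probability weight 1/2.
          """
          y_plus, y_minus = g(mean_x + std_x), g(mean_x - std_x)
          e_y = 0.5 * (y_plus + y_minus)           # E[Y]
          e_y2 = 0.5 * (y_plus**2 + y_minus**2)    # E[Y^2]
          return e_y, np.sqrt(max(e_y2 - e_y**2, 0.0))

      # Y = X^2 with X of mean 1 and standard deviation 0.3: E[Y] = 1.09 exactly.
      print(two_point_moments(lambda x: x**2, 1.0, 0.3))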

  4. Conditional probability density function estimation with sigmoidal neural networks.

    PubMed

    Sarajedini, A; Hecht-Nielsen, R; Chau, P M

    1999-01-01

    Real-world problems can often be couched in terms of conditional probability density function estimation. In particular, pattern recognition, signal detection, and financial prediction are among the multitude of applications requiring conditional density estimation. Previous developments in this direction have used neural nets to estimate statistics of the distribution or the marginal or joint distributions of the input-output variables. We have modified the joint distribution estimating sigmoidal neural network to estimate the conditional distribution. Thus, the probability density of the output conditioned on the inputs is estimated using a neural network. We have derived and implemented the learning laws to train the network. We show that this network has computational advantages over a brute force ratio of joint and marginal distributions. We also compare its performance to a kernel conditional density estimator in a larger scale (higher dimensional) problem simulating more realistic conditions.
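
    For contrast with the network approach, a minimal Python sketch of the kernel conditional density estimator used as the comparison method, forming p(y|x) as a ratio of kernel joint and marginal estimates (the training data and bandwidths are hypothetical):

      import numpy as np

      def gaussian_kernel(u):
          return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

      def kernel_conditional_density(x, y, x_train, y_train, hx=0.3, hy=0.3):
          """Estimate p(y | x) as (joint kernel estimate) / (marginal in x)."""
          wx = gaussian_kernel((x - x_train) / hx) / hx   # x-kernel weights
          wy = gaussian_kernel((y - y_train) / hy) / hy
          return np.sum(wx * wy) / np.sum(wx)

      rng = np.random.default_rng(0)
      x_train = rng.uniform(-2, 2, 500)
      y_train = np.sin(x_train) + 0.2 * rng.standard_normal(500)
      # Density of y near sin(1.0), conditioned on x = 1.0.
      print(kernel_conditional_density(1.0, np.sin(1.0), x_train, y_train))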

  5. Biochemical transport modeling, estimation, and detection in realistic environments

    NASA Astrophysics Data System (ADS)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity, and location). We compute a bound on the expected delay before false detection in order to set the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.

  6. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
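
    A toy Python sketch of the general recipe (the uniform separations stand in for the Monte Carlo orbit propagation, and the Weibull family is one standard limiting law for minima of positive variates; none of the numbers come from the paper):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      # Toy stand-in for orbit propagation: distance (km) to the nearest of
      # 200 debris objects, sampled at 5000 random epochs.
      separations = rng.uniform(0.0, 500.0, size=(5000, 200))
      minima = separations.min(axis=1)

      # Fit an extreme-value model for minima to the empirical minimum distances,
      shape, loc, scale = stats.weibull_min.fit(minima, floc=0.0)

      # then read off the probability of an approach within a 1 km radius.
      print(stats.weibull_min.cdf(1.0, shape, loc=loc, scale=scale))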

  7. Uncertainty analysis for Probable Maximum Precipitation estimates

    NASA Astrophysics Data System (ADS)

    Micovic, Zoran; Schaefer, Melvin G.; Taylor, George H.

    2015-02-01

    An analysis of uncertainty associated with Probable Maximum Precipitation (PMP) estimates is presented. The focus of the study is firmly on PMP estimates derived through meteorological analyses and not on statistically derived PMPs. Theoretical PMP cannot be computed directly and operational PMP estimates are developed through a stepwise procedure using a significant degree of subjective professional judgment. This paper presents a methodology for portraying the uncertain nature of PMP estimation by analyzing individual steps within the PMP derivation procedure whereby for each parameter requiring judgment, a set of possible values is specified and accompanied by expected probabilities. The resulting range of possible PMP values can be compared with the previously derived operational single-value PMP, providing measures of the conservatism and variability of the original estimate. To our knowledge, this is the first uncertainty analysis conducted for a PMP derived through meteorological analyses. The methodology was tested on the La Joie Dam watershed in British Columbia. The results indicate that the commonly used single-value PMP estimate could be more than 40% higher when possible changes in various meteorological variables used to derive the PMP are considered. The findings of this study imply that PMP estimates should always be characterized as a range of values recognizing the significant uncertainties involved in PMP estimation. In fact, we do not know at this time whether precipitation is actually upper-bounded, and if precipitation is upper-bounded, how closely PMP estimates approach the theoretical limit.

  8. Conflict Probability Estimation for Free Flight

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    1996-01-01

    The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction; however, the farther in advance the prediction is made, the less certain it becomes. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
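
    A minimal Python sketch of the covariance-combination step, with the conflict probability estimated by Monte Carlo rather than the paper's analytical coordinate transformation (the covariances, relative position, and 5 nm separation radius are hypothetical):

      import numpy as np

      def conflict_probability(rel_mean, cov_a, cov_b, radius_nm=5.0, n=200_000):
          """P(|relative position| < separation radius). The two aircraft
          position errors are modeled as independent Gaussians, so the
          relative-position covariance is the sum of the two covariances."""
          rng = np.random.default_rng(2)
          samples = rng.multivariate_normal(rel_mean, cov_a + cov_b, size=n)
          return np.mean(np.linalg.norm(samples, axis=1) < radius_nm)

      cov = np.diag([4.0, 1.0])   # hypothetical along-/cross-track variances (nm^2)
      print(conflict_probability(np.array([6.0, 0.0]), cov, cov))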

  9. Model estimates hurricane wind speed probabilities

    NASA Astrophysics Data System (ADS)

    Murnane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Elsner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O'Brien, James; Scott, David; Webb, Thompson, III

    In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m s^-1 have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.

  10. Estimating flood exceedance probabilities in estuarine regions

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Leonard, Michael

    2016-04-01

    Flood events in estuarine regions can arise from the interaction of extreme rainfall and storm surge. Determining flood level exceedance probabilities in these regions is complicated by the dependence between these processes during extreme events. A comprehensive study of tide and rainfall gauges along the Australian coastline was conducted to determine the dependence of these extremes using a bivariate logistic threshold-excess model. The dependence strength is shown to vary as a function of distance over many hundreds of kilometres, indicating that the dependence arises from synoptic-scale meteorological forcings. It is also shown to vary as a function of storm burst duration and of the time lag between the extreme rainfall and the storm surge event. The dependence estimates are then used with a bivariate design variable method to determine flood risk in estuarine regions for a number of case studies. Aspects of the method demonstrated in the case studies include the resolution and range of the hydraulic response table, fitting of probability distributions, computational efficiency, uncertainty, potential variation in marginal distributions due to climate change, and application to two-dimensional output from hydraulic models. Case studies are located on the Swan River (Western Australia) and the Nambucca and Hawkesbury-Nepean rivers (New South Wales).

  11. Estimation of transition probabilities of credit ratings

    NASA Astrophysics Data System (ADS)

    Peng, Gan Chew; Hin, Pooi Ah

    2015-12-01

    The present research is based on the quarterly credit ratings of ten companies over 15 years, taken from the database of the Taiwan Economic Journal. The components of the vector m_i = (m_i1, m_i2, ..., m_i10) denote the credit ratings of the ten companies in the i-th quarter. The vector m_{i+1} in the next quarter is modelled as dependent on the vector m_i via a conditional distribution derived from a 20-dimensional power-normal mixture distribution. The transition probability P_kl(i,j) of getting m_{i+1,j} = l given that m_{i,j} = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_kl(i,j) as i varies gives an indication of the possible transition of the credit rating of the j-th company in the near future.

  12. Probability shapes perceptual precision: A study in orientation estimation.

    PubMed

    Jabar, Syaheed B; Anderson, Britt

    2015-12-01

    Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention."

  13. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
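
    A brute-force Python sketch of what the paper's closed-form expressions compute more efficiently: Fisher's direction, a projection threshold (a simple midpoint rule here, not the paper's optimal threshold), and the leave-one-out error by direct refitting (all data hypothetical):

      import numpy as np

      def fisher_direction(x0, x1):
          """Fisher's direction w = Sw^{-1} (m1 - m0) from the class scatter."""
          m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
          sw = (np.cov(x0, rowvar=False) * (len(x0) - 1)
                + np.cov(x1, rowvar=False) * (len(x1) - 1))
          return np.linalg.solve(sw, m1 - m0)

      def loo_error(x, y):
          """Leave-one-out error rate by refitting n times (the definition)."""
          errors = 0
          for i in range(len(y)):
              mask = np.arange(len(y)) != i
              xt, yt = x[mask], y[mask]
              w = fisher_direction(xt[yt == 0], xt[yt == 1])
              # Midpoint of the projected class means as the threshold.
              thr = 0.5 * ((xt[yt == 0] @ w).mean() + (xt[yt == 1] @ w).mean())
              errors += int((x[i] @ w > thr) != y[i])
          return errors / len(y)

      rng = np.random.default_rng(3)
      x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(1.5, 1, (50, 2))])
      y = np.repeat([0, 1], 50)
      print(loo_error(x, y))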

  14. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.
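
    A minimal Python sketch of the normalization task PABS performs, with first-order uncertainty propagation through the ratio p_i = r_i / sum(r): the intensities and uncertainties below are hypothetical, and this is not the program's actual procedure:

      import numpy as np

      def normalize_emissions(rel, sigma):
          """Normalize relative emission probabilities rel (uncertainties sigma)
          to an absolute scale summing to 1, propagating uncertainty to first
          order through p_i = r_i / total."""
          rel, sigma = np.asarray(rel, float), np.asarray(sigma, float)
          total = rel.sum()
          p = rel / total
          # d(p_i)/d(r_i) = (total - r_i)/total^2; d(p_i)/d(r_j) = -r_i/total^2.
          var = ((total - rel) ** 2 * sigma**2
                 + rel**2 * (np.sum(sigma**2) - sigma**2)) / total**4
          return p, np.sqrt(var)

      # Hypothetical relative emission intensities and 1-sigma uncertainties.
      print(normalize_emissions([100.0, 35.0, 12.0], [2.0, 1.5, 0.8]))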

  15. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706

  16. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
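
    A minimal Python sketch of one common automatic rule for the kernel scaling factor, likelihood cross-validation (the paper's interactive algorithm and penalized-likelihood estimator are not reproduced; the data and search grid are hypothetical):

      import numpy as np

      def loo_log_likelihood(h, data):
          """Leave-one-out log-likelihood of a Gaussian kernel density estimate."""
          n = len(data)
          d = data[:, None] - data[None, :]                 # pairwise differences
          k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
          np.fill_diagonal(k, 0.0)                          # leave each point out
          return np.sum(np.log(k.sum(axis=1) / (n - 1)))

      rng = np.random.default_rng(4)
      data = rng.standard_normal(300)
      grid = np.linspace(0.05, 1.0, 40)
      best_h = grid[np.argmax([loo_log_likelihood(h, data) for h in grid])]
      print(best_h)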

  17. Estimating the empirical probability of submarine landslide occurrence

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.; Mosher, David C.; Shipp, Craig; Moscardelli, Lorena; Chaytor, Jason D.; Baxter, Christopher D. P.; Lee, Homa J.; Urgeles, Roger

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model, specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. By performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex, we confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events.
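
    A minimal Python sketch of the Poisson-Gamma conjugate update at the heart of the method (the prior hyperparameters, event count, and record length are hypothetical; the paper additionally fits the prior empirically and handles age-dating uncertainty and open intervals, which are omitted here):

      from scipy import stats

      def landslide_rate_posterior(n_events, record_yr, alpha=1.0, beta=100.0):
          """Gamma posterior for a Poisson rate: a Gamma(alpha, beta) prior plus
          n events over the record length gives Gamma(alpha + n, beta + T)."""
          post = stats.gamma(alpha + n_events, scale=1.0 / (beta + record_yr))
          return post.mean(), post.interval(0.95)

      # e.g. 4 dated landslides in a 10,000-year record: rate per year and 95% CI.
      print(landslide_rate_posterior(4, 10_000))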

  18. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
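
    A minimal Python sketch of a state-prediction-error learning rule of the kind described (the learning rate and the true transition row are hypothetical):

      import numpy as np

      def update_row(p_row, observed_next, lr=0.1):
          """Prediction-error update: move the probability row toward the
          one-hot vector for the observed next state. The row stays
          normalized because the increments sum to zero."""
          target = np.zeros_like(p_row)
          target[observed_next] = 1.0
          return p_row + lr * (target - p_row)

      rng = np.random.default_rng(5)
      true_p = np.array([0.7, 0.2, 0.1])     # hypothetical transition row
      est = np.full(3, 1 / 3)
      for _ in range(2000):
          est = update_row(est, rng.choice(3, p=true_p))
      print(est.round(2))                     # approaches true_p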

  19. An application of recurrent nets to phone probability estimation.

    PubMed

    Robinson, A J

    1994-01-01

    This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed; a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation.

  1. Bayesian Estimator of Protein-Protein Association Probabilities

    2008-05-28

    The Bayesian Estimator of Protein-Protein Association Probabilities (BEPro3) is a software tool for estimating probabilities of protein-protein association between bait and prey protein pairs using data from multiple-bait, multiple-replicate, protein LC-MS/MS affinity isolation experiments. BEPro3 is public domain software, has been tested on Windows XP and on Mac OS X version 10.4 or newer, and is freely available. A user guide, an example dataset with analysis, and additional documentation are included with the BEPro3 download.

  2. Improving estimates of tree mortality probability using potential growth rate

    USGS Publications Warehouse

    Das, Adrian J.; Stephenson, Nathan L.

    2015-01-01

    Tree growth rate is frequently used to estimate mortality probability. Yet, growth metrics can vary in form, and the justification for using one over another is rarely clear. We tested whether a growth index (GI) that scales the realized diameter growth rate against the potential diameter growth rate (PDGR) would give better estimates of mortality probability than other measures. We also tested whether PDGR, being a function of tree size, might better correlate with the baseline mortality probability than direct measurements of size such as diameter or basal area. Using a long-term dataset from the Sierra Nevada, California, U.S.A., as well as existing species-specific estimates of PDGR, we developed growth–mortality models for four common species. For three of the four species, models that included GI, PDGR, or a combination of GI and PDGR were substantially better than models without them. For the fourth species, the models including GI and PDGR performed roughly as well as a model that included only the diameter growth rate. Our results suggest that using PDGR can improve our ability to estimate tree survival probability. However, in the absence of PDGR estimates, the diameter growth rate was the best empirical predictor of mortality, in contrast to assumptions often made in the literature.

  3. Simulation and Estimation of Extreme Quantiles and Extreme Probabilities

    SciTech Connect

    Guyader, Arnaud; Hengartner, Nicolas; Matzner-Lober, Eric

    2011-10-15

    Let X be a random vector with distribution μ on ℝ^d and Φ be a mapping from ℝ^d to ℝ. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.
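
    A compact Python sketch of the multilevel-splitting idea for P(Φ(X) > q) with X standard normal: discard the lowest-scoring fraction of particles at each level and refresh the clones with a Gaussian-invariant Metropolis move. This is a generic version of the technique, not the authors' improved algorithm:

      import numpy as np

      rng = np.random.default_rng(6)

      def splitting_tail_prob(phi, q, dim, n=1000, k=100, n_moves=10):
          """Estimate p = P(phi(X) > q) for X ~ N(0, I_dim). Each level discards
          the k lowest scores (multiplying the estimate by the survivor
          fraction 1 - k/n) and refreshes the discarded particles with an
          N(0, I)-invariant Metropolis move restricted to the current level."""
          x = rng.standard_normal((n, dim))
          scores = phi(x)
          prob = 1.0
          while True:
              level = np.partition(scores, k - 1)[k - 1]   # k-th smallest score
              if level >= q:                               # final level reached
                  return prob * (scores > q).mean()
              prob *= 1.0 - k / n
              dead = np.flatnonzero(scores <= level)
              x[dead] = x[rng.choice(np.flatnonzero(scores > level), len(dead))]
              for _ in range(n_moves):
                  # 0.8^2 + 0.6^2 = 1, so the proposal leaves N(0, I) invariant.
                  prop = 0.8 * x[dead] + 0.6 * rng.standard_normal((len(dead), dim))
                  ok = phi(prop) > level
                  x[dead[ok]] = prop[ok]
              scores[dead] = phi(x[dead])

      # P(sum of 10 standard normals > 12) is about 7.4e-5 analytically.
      print(splitting_tail_prob(lambda s: s.sum(axis=1), 12.0, dim=10))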

  4. Using Correlation to Compute Better Probability Estimates in Plan Graphs

    NASA Technical Reports Server (NTRS)

    Bryce, Daniel; Smith, David E.

    2006-01-01

    Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates in probability when the plans for those preconditions interfere with each other. It can also lead to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions and actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation = 0), two propositions or actions are completely mutex. With correlation = 1, two propositions or actions are independent, and with correlation > 1, two propositions or actions are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.

  5. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million daily streamflow values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over minimum 10-year (maximum 112-year) periods of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
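
    A minimal Python sketch of the underlying model form, logistic regression of a summer drought-flow indicator on winter streamflow (the data are simulated and the coefficients are illustrative, not those of the report's 46,704 fitted equations):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)

      # Simulated training data: winter streamflow (log10 cfs) and whether the
      # following summer's flow stayed above a drought threshold.
      winter = rng.normal(2.0, 0.4, size=(200, 1))
      p_true = 1.0 / (1.0 + np.exp(-3.0 * (winter[:, 0] - 2.0)))
      above = rng.random(200) < p_true

      model = LogisticRegression().fit(winter, above)
      # Probability the summer flow exceeds the threshold after a dry winter.
      print(model.predict_proba([[1.5]])[0, 1])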

  6. Revising probability estimates: Why increasing likelihood means increasing impact.

    PubMed

    Maglio, Sam J; Polman, Evan

    2016-08-01

    Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision: moving upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event-probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event-probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events. (PsycINFO Database Record) PMID:27281350

  7. Estimating transition probabilities in unmarked populations --entropy revisited

    USGS Publications Warehouse

    Cooch, E.G.; Link, W.A.

    1999-01-01

    The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
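
    A minimal Python sketch of the stationary least-squares case mentioned above, recovering a transition matrix from aggregate state fractions (the matrix and fractions are hypothetical; the entropy approach discussed in the paper is not shown):

      import numpy as np

      def transition_matrix_lstsq(fractions):
          """Least-squares estimate of a stationary transition matrix P from
          aggregate state fractions f_t (the rows), using f_{t+1} = f_t P.
          Nonnegativity and row-sum constraints are omitted in this sketch."""
          f_now, f_next = fractions[:-1], fractions[1:]
          p, *_ = np.linalg.lstsq(f_now, f_next, rcond=None)
          return p

      true_p = np.array([[0.85, 0.15], [0.25, 0.75]])   # hypothetical transitions
      f = [np.array([0.9, 0.1])]
      for _ in range(5):                                # aggregate data only
          f.append(f[-1] @ true_p)
      print(transition_matrix_lstsq(np.array(f)).round(3))  # recovers true_p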

  8. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.

  9. Estimating probable flaw distributions in PWR steam generator tubes

    SciTech Connect

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  10. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or probability of false alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic (ROC) curve [10, 11], which is actually a family of curves depicting P_D vs. P_FA, parameterized by varying levels of signal-to-noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
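
    A minimal Python sketch of the threshold-setting step: given scores on plume-free background pixels, the decision threshold r_0 for a desired P_FA is the (1 - P_FA) quantile of the background score distribution (an empirical quantile here, where the report estimates the density explicitly; the Gaussian scores below are hypothetical):

      import numpy as np

      rng = np.random.default_rng(8)

      # Hypothetical matched-filter outputs on plume-free background pixels.
      background_scores = rng.standard_normal(100_000) * 1.3 + 0.2

      def cfar_threshold(scores, p_fa):
          """Decision threshold achieving the desired probability of false
          alarm: the (1 - P_FA) quantile of the background scores."""
          return np.quantile(scores, 1.0 - p_fa)

      r0 = cfar_threshold(background_scores, p_fa=1e-3)
      print(r0, (background_scores > r0).mean())   # realized rate is near 1e-3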

  11. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum

  12. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  13. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  14. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and, generally, their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  15. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications system high-speed downlink packet access (UMTS-HSDPA) signals is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factor used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor for weekdays are similar to those for weekend days. PMID:23771956

  16. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability provides a needed input for estimating the success rate of any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, together with a user-provided fault tree and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses by means of an example.

  17. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair at future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly, through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools is lacking in several ways: the tools may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates that incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This

  18. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system, taking into account uncertainties of design variables, and common results are estimates of a response density, which also implies estimates of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are two of 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
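
    A minimal Python sketch of Latin hypercube sampling applied to a reliability estimate (the two-variable limit state and distributions are hypothetical, and this is not NESSUS's implementation):

      import numpy as np
      from scipy import stats

      def latin_hypercube(n, dim, rng):
          """One LHS draw on [0, 1]^dim: each variable's n samples fall in n
          equal-probability strata exactly once; the pairing across the
          variables is a random permutation."""
          strata = np.array([rng.permutation(n) for _ in range(dim)]).T
          return (strata + rng.random((n, dim))) / n

      rng = np.random.default_rng(9)
      u = latin_hypercube(2000, 2, rng)
      # Map the stratified uniforms through the input distributions.
      load = stats.norm.ppf(u[:, 0], loc=100.0, scale=10.0)
      strength = stats.norm.ppf(u[:, 1], loc=130.0, scale=8.0)
      print(np.mean(strength - load < 0.0))   # failure probability estimate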

  19. Estimating the probability for major gene Alzheimer disease

    SciTech Connect

    Farrer, L.A. (Boston Univ. School of Public Health, Boston, MA); Cupples, L.A.

    1994-02-01

    Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age at onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank correlation r = .92). When either method is used, the numerical outcomes are sensitive to assumptions about the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.

  1. A Quantitative Method for Estimating Probable Public Costs of Hurricanes.

    PubMed

    BOSWELL; DEYLE; SMITH; BAKER

    1999-04-01

    A method is presented for estimating probable public costs resulting from damage caused by hurricanes, measured as local government expenditures approved for reimbursement under the Stafford Act Section 406 Public Assistance Program. The method employs a multivariate model developed through multiple regression analysis of an array of independent variables that measure meteorological, socioeconomic, and physical conditions related to the landfall of hurricanes within a local government jurisdiction. From the regression analysis we chose a log-log (base 10) model that explains 74% of the variance in the expenditure data using population and wind speed as predictors. We illustrate application of the method for a local jurisdiction: Lee County, Florida, USA. The results show that potential public costs range from $4.7 million for a category 1 hurricane with winds of 137 kilometers per hour (85 miles per hour) to $130 million for a category 5 hurricane with winds of 265 kilometers per hour (165 miles per hour). Based on these figures, we estimate expected annual public costs of $2.3 million. These cost estimates: (1) provide useful guidance for anticipating the magnitude of the federal, state, and local expenditures that would be required for the array of possible hurricanes that could affect that jurisdiction; (2) allow policy makers to assess the implications of alternative federal and state policies for providing public assistance to jurisdictions that experience hurricane damage; and (3) provide information needed to develop a contingency fund or other financial mechanism for assuring that the community has sufficient funds available to meet its obligations. KEY WORDS: Hurricane; Public costs; Local government; Disaster recovery; Disaster response; Florida; Stafford Act
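
    A minimal Python sketch of the model form (log-log, base 10, in population and wind speed). The coefficients below are hypothetical placeholders chosen only so the outputs land near the magnitudes quoted above; they are not the fitted values from the study:

      import numpy as np

      def public_cost_usd(population, wind_mph, a=-8.6, b=1.0, c=5.0):
          """log10(cost) = a + b*log10(population) + c*log10(wind speed)."""
          return 10 ** (a + b * np.log10(population) + c * np.log10(wind_mph))

      # A hypothetical jurisdiction of 400,000 people: category 1 vs category 5.
      print(public_cost_usd(400_000, 85), public_cost_usd(400_000, 165))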

  2. Using coronal seismology to estimate the magnetic field strength in a realistic coronal model

    NASA Astrophysics Data System (ADS)

    Chen, F.; Peter, H.

    2015-09-01

    Aims: Coronal seismology is used extensively to estimate properties of the corona, e.g. the coronal magnetic field strength is derived from oscillations observed in coronal loops. We present a three-dimensional coronal simulation, including a realistic energy balance, in which we observe oscillations of a loop in synthesised coronal emission. We use these results to test the inversions based on coronal seismology. Methods: From the simulation of the corona above an active region, we synthesise extreme ultraviolet emission from the model corona. From this, we derive maps of line intensity and Doppler shift, providing synthetic data in the same format as obtained from observations. We fit the (Doppler) oscillation of the loop in the same fashion as done for observations to derive the oscillation period and damping time. Results: The loop oscillation seen in our model is similar to imaging and spectroscopic observations of the Sun. The velocity disturbance of the kink oscillation shows an oscillation period of 52.5 s and a damping time of 125 s, both consistent with the ranges of periods and damping times found in observations. Using standard coronal seismology techniques, we find an average magnetic field strength of B_kink = 79 G for our loop in the simulation, while in the loop the field strength drops from roughly 300 G at the coronal base to 50 G at the apex. Using the data from our simulation, we can infer what the average magnetic field derived from coronal seismology actually means. It is close to the magnetic field strength in a constant cross-section flux tube that would give the same wave travel time through the loop. Conclusions: Our model produces a realistic-looking loop-dominated corona and provides realistic information on the oscillation properties that can be used to calibrate and better understand the results from coronal seismology. A movie associated with Fig. 1 is available in electronic form at http://www.aanda.org
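    For reference, the standard kink-mode inversion alluded to above takes the following form (textbook notation, not the paper's derivation: loop length L, period P, internal and external densities rho_i and rho_e):

    ```latex
    c_k = \frac{2L}{P}, \qquad
    c_k = B\sqrt{\frac{2}{\mu_0\,(\rho_i + \rho_e)}}
    \quad\Longrightarrow\quad
    B_{\mathrm{kink}} = \frac{2L}{P}\,
    \sqrt{\frac{\mu_0\,\rho_i\,\bigl(1 + \rho_e/\rho_i\bigr)}{2}}
    ```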

  3. Semi-supervised dimensionality reduction using estimated class membership probabilities

    NASA Astrophysics Data System (ADS)

    Li, Wei; Ruan, Qiuqi; Wan, Jun

    2012-10-01

    In solving pattern-recognition tasks with partially labeled training data, semi-supervised dimensionality reduction methods, which consider both labeled and unlabeled data, are preferable for improving the classification and generalization capability on the testing data. Among such techniques, graph-based semi-supervised learning methods have attracted considerable attention owing to their appealing properties in discovering the discriminative and geometric structure of data points. Although they have achieved remarkable success, they cannot promise good performance when the size of the labeled data set is small, because the class covariance matrices are poorly approximated from insufficient labeled training data. In this paper, we tackle this problem by combining class membership probabilities estimated from unlabeled data with ground-truth class information associated with labeled data to characterize the class distribution more precisely, and thereby enhance performance in classification tasks. We refer to this approach as probabilistic semi-supervised discriminant analysis (PSDA). The proposed PSDA is applied to face and facial expression recognition tasks and is evaluated using the ORL, Extended Yale B, and CMU PIE face databases and the Cohn-Kanade facial expression database. The promising experimental results demonstrate the effectiveness of our proposed method.

  4. Toward realistic and practical ideal observer (IO) estimation for the optimization of medical imaging systems.

    PubMed

    He, Xin; Caffo, Brian S; Frey, Eric C

    2008-10-01

    The ideal observer (IO) employs complete knowledge of the available data statistics and sets an upper limit on observer performance on a binary classification task. However, the IO test statistic cannot be calculated analytically, except for cases where object statistics are extremely simple. Kupinski et al. have developed a Markov chain Monte Carlo (MCMC) based technique to compute the IO test statistic for, in principle, arbitrarily complex objects and imaging systems. In this work, we applied MCMC to estimate the IO test statistic in the context of myocardial perfusion SPECT (MPS). We modeled the imaging system using an analytic SPECT projector with attenuation, distance-dependent detector-response modeling, and Poisson noise statistics. The object is a family of parameterized torso phantoms with variable geometric and organ uptake parameters. To accelerate the imaging simulation process and thus enable the MCMC IO estimation, we used discretized anatomic parameters and continuous uptake parameters in defining the objects. The imaging process simulation was modeled by precomputing projections for each organ for a finite number of discretely parameterized anatomies and taking linear combinations of the organ projections based on continuous sampling of the organ uptake parameters. The proposed method greatly reduces the computational burden and allows MCMC IO estimation for a realistic MPS imaging simulation. We validated the proposed IO estimation technique by estimating IO test statistics for a large number of input objects. The first- and second-order statistics of the IO test statistics estimated using the MCMC technique agreed well with theoretical predictions. Further, as expected, the IO had better performance, as measured by the receiver operating characteristic (ROC) curve, than the Hotelling observer. This method was developed for SPECT imaging; however, it can be adapted to any linear imaging system.
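    The precomputation trick described above reduces each simulated image to a weighted sum; a minimal numpy sketch (array names, shapes, and uptake ranges are illustrative assumptions, not the authors' code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_anatomies, n_organs, n_pix = 8, 4, 64 * 64
    # Noise-free organ projections, precomputed once per discrete anatomy.
    organ_proj = rng.random((n_anatomies, n_organs, n_pix))

    def simulate_image(anatomy_idx, uptakes):
        """One object realisation: linear combination of precomputed organ
        projections weighted by continuously sampled uptakes, plus Poisson noise."""
        clean = uptakes @ organ_proj[anatomy_idx]   # (n_pix,)
        return rng.poisson(clean)

    img = simulate_image(anatomy_idx=3, uptakes=rng.uniform(0.5, 2.0, n_organs))
    print(img.shape)
    ```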

  5. Estimation of capture probabilities using generalized estimating equations and mixed effects approaches

    PubMed Central

    Akanda, Md Abdus Salam; Alpizar-Jara, Russell

    2014-01-01

    Modeling individual heterogeneity in capture probabilities has been one of the most challenging tasks in capture–recapture studies. Heterogeneity in capture probabilities can be modeled as a function of individual covariates, but the correlation structure among capture occasions should be taken into account. Proposed generalized estimating equations (GEE) and generalized linear mixed modeling (GLMM) approaches can be used to estimate capture probabilities and population size for capture–recapture closed population models. An example is used for an illustrative application and for comparison with currently used methodology. A simulation study is also conducted to show the performance of the estimation procedures. Our simulation results show that the proposed quasi-likelihood based GEE approach provides lower SE than partial likelihood based on either generalized linear models (GLM) or GLMM approaches for estimating population size in a closed capture–recapture experiment. Estimator performance is good if a large proportion of individuals are captured. For cases where only a small proportion of individuals are captured, the estimates become unstable, but the GEE approach outperforms the other methods. PMID:24772290
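    A minimal sketch of the GEE idea in Python with statsmodels (the data frame, covariate, and effect sizes are invented for illustration; the paper's analysis is more involved): each animal contributes one binary capture record per occasion, and an exchangeable working correlation links occasions within an animal.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_animals, n_occasions = 100, 5

    # Invented data: capture probability depends on an individual covariate.
    weight = rng.normal(0.0, 1.0, n_animals)
    p = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * weight)))
    df = pd.DataFrame({
        "animal": np.repeat(np.arange(n_animals), n_occasions),
        "weight": np.repeat(weight, n_occasions),
        "captured": rng.binomial(1, np.repeat(p, n_occasions)),
    })

    model = sm.GEE.from_formula(
        "captured ~ weight", groups="animal", data=df,
        family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```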

  6. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes with magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, high probability values are usually produced by clustered events, such as events with foreshocks or sequences of events occurring within a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and the probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake, whereas the probability obtained from the activation model increases as large earthquakes occur. These results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  7. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables.

    PubMed

    Brus, D J; de Gruijter, J J

    2003-04-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the π estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased, as in preferential samples. The gain in precision compared to the π estimator in combination with simple random sampling is controlled by the correlation between the target variable and the interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and by the spatial continuity of the target variable. In a case study, the average ratio of the variances of the simple regression estimator and the π estimator was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case, the simple regression estimator was substantially more precise than the simple difference estimator.

  8. Estimation of the probability of error without ground truth and known a priori probabilities. [remote sensor performance

    NASA Technical Reports Server (NTRS)

    Havens, K. A.; Minster, T. C.; Thadani, S. G.

    1976-01-01

    The probability of error or, alternatively, the probability of correct classification (PCC) is an important criterion in analyzing the performance of a classifier. Labeled samples (those with ground truth) are usually employed to evaluate the performance of a classifier. Occasionally, the number of labeled samples is inadequate, or no labeled samples are available to evaluate a classifier's performance; for example, when crop signatures from one area for which ground truth is available are used to classify another area for which no ground truth is available. This paper reports the results of an experiment to estimate the probability of error using unlabeled test samples (i.e., without the aid of ground truth).

  9. Analytical solution to transient Richards' equation with realistic water profiles for vertical infiltration and parameter estimation

    NASA Astrophysics Data System (ADS)

    Hayek, Mohamed

    2016-06-01

    A general analytical model for one-dimensional transient vertical infiltration is presented. The model is based on a combination of the Brooks and Corey soil water retention function and a generalized hydraulic conductivity function. This leads to a power-law diffusivity and convective term in which the exponents are functions of the inverse of the pore size distribution index. Accordingly, the proposed analytical solution covers many existing realistic models in the literature. The general form of the analytical solution is simple, and it expresses the depth implicitly as a function of water content and time. It can be used to model infiltration through semi-infinite dry soils with prescribed water content or flux boundary conditions. Some mathematical expressions of practical importance are also derived. The general form solution is useful for comparison between models, for validation of numerical solutions, and for better understanding the effect of some hydraulic parameters. Based on the analytical expression, a complete inverse procedure that allows the estimation of the hydraulic parameters from water content measurements is presented.
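    For context, the Brooks and Corey retention curve paired with a Burdine-type conductivity function takes the standard form below (bubbling pressure h_b, pore size distribution index lambda, effective saturation S_e; this pairing is one common choice, not necessarily the paper's exact generalization):

    ```latex
    S_e = \left(\frac{h_b}{h}\right)^{\lambda}, \quad h \ge h_b,
    \qquad
    K(S_e) = K_s\, S_e^{\,(2 + 3\lambda)/\lambda}
    ```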

  10. Estimating the concordance probability in a survival analysis with a discrete number of risk groups.

    PubMed

    Heller, Glenn; Mo, Qianxing

    2016-04-01

    A clinical risk classification system is an important component of a treatment decision algorithm. A measure used to assess the strength of a risk classification system is discrimination, and when the outcome is survival time, the most commonly applied global measure of discrimination is the concordance probability. The concordance probability represents the pairwise probability of lower patient risk given longer survival time. The c-index and the concordance probability estimate have been used to estimate the concordance probability when patient-specific risk scores are continuous. In the current paper, the concordance probability estimate and an inverse probability censoring weighted c-index are modified to account for discrete risk scores. Simulations are generated to assess the finite sample properties of the concordance probability estimate and the weighted c-index. An application of these measures of discriminatory power to a metastatic prostate cancer risk classification system is examined.
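    A bare-bones illustration of the pairwise computation with discrete risk groups (ignoring censoring, which the paper's estimators handle through inverse probability weighting; the data and the convention of counting tied risk scores as 1/2 are our illustrative choices):

    ```python
    import numpy as np

    def concordance_discrete(risk_group, surv_time):
        """Pairwise concordance: lower risk group should pair with longer
        survival; pairs with tied (discrete) risk scores count 1/2."""
        num, den = 0.0, 0
        n = len(risk_group)
        for i in range(n):
            for j in range(i + 1, n):
                if surv_time[i] == surv_time[j]:
                    continue                      # no ordering information
                lo = i if surv_time[i] > surv_time[j] else j  # longer survivor
                hi = j if lo == i else i
                den += 1
                if risk_group[lo] < risk_group[hi]:
                    num += 1.0
                elif risk_group[lo] == risk_group[hi]:
                    num += 0.5
        return num / den

    groups = np.array([0, 0, 1, 1, 2, 2])
    times = np.array([9.0, 7.5, 6.0, 5.0, 3.0, 1.0])
    print(concordance_discrete(groups, times))  # 0.9: within-group ties count 1/2
    ```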

  11. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically of the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as Gaussian, Exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. Such techniques are likely to introduce artificial characteristics into the estimated pdf that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurement from the TNPGN-Active (Turkish National Permanent
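    The non-parametric step described above is available off the shelf; a short sketch with scipy (the TEC values here are synthetic stand-ins, not TNPGN-Active data):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde, kurtosis

    rng = np.random.default_rng(0)
    # Synthetic stand-in for TEC estimates (TECU), with a heavy upper tail.
    tec = np.concatenate([rng.normal(20, 3, 900), rng.normal(45, 8, 100)])

    kde = gaussian_kde(tec)          # bandwidth set by Scott's rule by default
    grid = np.linspace(tec.min(), tec.max(), 400)
    pdf = kde(grid)                  # smooth pdf estimate, no fixed shape imposed

    print("mean %.2f  variance %.2f  kurtosis %.2f"
          % (tec.mean(), tec.var(), kurtosis(tec)))
    ```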

  12. Estimating the probability of coexistence in cross-feeding communities.

    PubMed

    Vessman, Björn; Gerlee, Philip; Lundh, Torbjörn

    2016-11-01

    The dynamics of many microbial ecosystems are driven by cross-feeding interactions, in which metabolites excreted by some species are metabolised further by others. The population dynamics of such ecosystems are governed by frequency-dependent selection, which allows for stable coexistence of two or more species. We have analysed a model of cross-feeding based on the replicator equation, with the aim of establishing criteria for coexistence in ecosystems containing three species, given the information of the three species' ability to coexist in their three separate pairs, i.e. the long term dynamics in the three two-species component systems. The triple-system is studied statistically and the probability of coexistence in the species triplet is computed for two models of species interactions. The interaction parameters are modelled either as stochastically independent or organised in a hierarchy where any derived metabolite carries less energy than previous nutrients in the metabolic chain. We differentiate between different modes of coexistence with respect to the pair-wise dynamics of the species, and find that the probability of coexistence is close to 1/2 for triplet systems with three pair-wise coexistent pairs and for the so-called intransitive systems. Systems with two and one pair-wise coexistent pairs are more likely to exist for random interaction parameters, but are on the other hand much less likely to exhibit triplet coexistence. Hence we conclude that certain species triplets are, from a statistical point of view, rare, but if allowed to interact are likely to coexist. This knowledge might be helpful when constructing synthetic microbial communities for industrial purposes. PMID:27484301

  13. Estimating background and threshold nitrate concentrations using probability graphs

    USGS Publications Warehouse

    Panno, S.V.; Kelly, W.R.; Martinsek, A.T.; Hackley, Keith C.

    2006-01-01

    Because of the ubiquitous nature of anthropogenic nitrate (NO3-) in many parts of the world, determining background concentrations of NO3- in shallow ground water from natural sources is probably impossible in most environments. Present-day background must now include diffuse sources of NO3- such as disruption of soils and oxidation of organic matter, and atmospheric inputs from products of combustion and evaporation of ammonia from fertilizer and livestock waste. Anomalies can be defined as NO3- derived from nitrogen (N) inputs to the environment from anthropogenic activities, including synthetic fertilizers, livestock waste, and septic effluent. Cumulative probability graphs were used to identify threshold concentrations separating background and anomalous NO3-N concentrations and to assist in the determination of sources of N contamination for 232 spring water samples and 200 well water samples from karst aquifers. Thresholds were 0.4, 2.5, and 6.7 mg/L for spring water samples, and 0.1, 2.1, and 17 mg/L for well water samples. The 0.4 and 0.1 mg/L values are assumed to represent thresholds for present-day precipitation. Thresholds at 2.5 and 2.1 mg/L are interpreted to represent present-day background concentrations of NO3-N. The population of spring water samples with concentrations between 2.5 and 6.7 mg/L represents an amalgam of all sources of NO3- in the ground water basins that feed each spring; concentrations >6.7 mg/L were typically samples collected soon after springtime application of synthetic fertilizer. The 17 mg/L threshold (adjusted to 15 mg/L) for well water samples is interpreted as the level above which livestock wastes dominate the N sources. Copyright © 2006 The Author(s).

  14. Student Estimates of Probability and Uncertainty in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Mountcastle, Donald B.; Bucy, B. R.; Thompson, J. R.

    2006-12-01

    Equilibrium properties of macroscopic (large N) systems are highly predictable as N approaches and exceeds Avogadro’s number. Theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity [S = k ln(w), where w is the system multiplicity] include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our students usually give reasonable answers about the probabilities, but not the uncertainties of the predicted outcomes of such events. However, they reliably predict that the uncertainty in a measured quantity (e.g., the amount of rainfall) decreases as the number of measurements increases. Typical textbook presentations presume that students will either have or develop the insight that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. That is at odds with our findings among students in two successive statistical mechanics classes. Many of our students had previously completed mathematics courses in statistics, as well as a physics laboratory course that included analysis of statistical properties of distributions of dart scores as the number (n) of throws (one-dimensional target) increased. There was a wide divergence of predictions about how the standard deviation of the distribution of dart scores should change, or not, as n increases. We find that student predictions about statistics of coin flips, dart scores, and rainfall amounts as functions of n are inconsistent at best. Supported in part by NSF Grant #PHY-0406764.
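    The insight being probed can be stated quantitatively (a standard result, not taken from the paper): for n independent binary events with success probability p, the absolute spread of the count grows while the relative spread shrinks,

    ```latex
    \sigma_{\text{count}} = \sqrt{n\,p(1-p)},
    \qquad
    \frac{\sigma_{\text{count}}}{n} = \sqrt{\frac{p(1-p)}{n}}
    \;\longrightarrow\; 0 \quad (n \to \infty)
    ```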

  15. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
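    The L1-median used above (the geometric median) has no closed form, but Weiszfeld's classic fixed-point iteration approximates it; a compact sketch (our illustration, not the paper's estimator):

    ```python
    import numpy as np

    def l1_median(X, n_iter=100, eps=1e-9):
        """Weiszfeld iteration for the geometric (L1) median of the rows of X."""
        m = X.mean(axis=0)                    # start from the sample mean
        for _ in range(n_iter):
            d = np.linalg.norm(X - m, axis=1)
            w = 1.0 / np.maximum(d, eps)      # guard against zero distances
            m = (w[:, None] * X).sum(axis=0) / w.sum()
        return m

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), [[50.0, 50.0]]])  # gross outlier
    print("mean:", X.mean(axis=0))        # dragged toward the outlier
    print("L1-median:", l1_median(X))     # stays near the bulk of the data
    ```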

  16. A simulation model for estimating probabilities of defects in welds

    SciTech Connect

    Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.

    1996-12-01

    In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.

  17. Probability Estimation of CO2 Leakage Through Faults at Geologic Carbon Sequestration Sites

    SciTech Connect

    Zhang, Yingqi; Oldenburg, Curt; Finsterle, Stefan; Jordan, Preston; Zhang, Keni

    2008-11-01

    Leakage of CO{sub 2} and brine along faults at geologic carbon sequestration (GCS) sites is a primary concern for storage integrity. The focus of this study is on the estimation of the probability of leakage along faults or fractures. This leakage probability is controlled by the probability of a connected network of conduits existing at a given site, the probability of this network encountering the CO{sub 2} plume, and the probability of this network intersecting environmental resources that may be impacted by leakage. This work is designed to fit into a risk assessment and certification framework that uses compartments to represent vulnerable resources such as potable groundwater, health and safety, and the near-surface environment. The method we propose includes using percolation theory to estimate the connectivity of the faults, and generating fuzzy rules from discrete fracture network simulations to estimate leakage probability. By this approach, the probability of CO{sub 2} escaping into a compartment for a given system can be inferred from the fuzzy rules. The proposed method provides a quick way of estimating the probability of CO{sub 2} or brine leaking into a compartment. In addition, it provides the uncertainty range of the estimated probability.

  18. Estimators of annual probability of infection for quantitative microbial risk assessment.

    PubMed

    Karavarsamis, N; Hamilton, A J

    2010-06-01

    Four estimators of annual infection probability pertinent to quantitative microbial risk assessment (QMRA) were compared. A stochastic model, the Gold Standard, was used as the benchmark. It is a product of independent daily infection probabilities, which in turn are based on daily doses. An alternative and commonly used estimator, here referred to as the Naïve, assumes a single daily infection probability from a single value of daily dose. The typical use of this estimator in stochastic QMRA involves the generation of a distribution of annual infection probabilities, but since each of these is based on a single realisation of the dose distribution, the resultant annual infection probability distribution simply represents a set of inaccurate estimates. While the medians of both distributions were within an order of magnitude for our test scenario, the 95th percentiles, which are sometimes used in QMRA as conservative estimates of risk, differed by around one order of magnitude. The other two estimators examined, the Geometric and Arithmetic, are closely related to the Naïve and use the same equation, and both proved to be poor estimators. Lastly, this paper proposes a simple adjustment to the Gold Standard equation accommodating periodic infection probabilities when the daily infection probabilities are unknown.
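    To make the contrast concrete, a toy comparison under an exponential dose-response model (the dose distribution and the parameter r are arbitrary illustrative choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r = 1e-3                                # hypothetical dose-response parameter

    def p_inf(dose):
        return 1.0 - np.exp(-r * dose)      # exponential dose-response model

    # Gold Standard: product over 365 independent daily infection probabilities.
    daily_doses = rng.lognormal(mean=1.0, sigma=1.0, size=365)
    p_gold = 1.0 - np.prod(1.0 - p_inf(daily_doses))

    # Naive: a single sampled daily dose scaled to a year.
    p_naive = 1.0 - (1.0 - p_inf(rng.lognormal(1.0, 1.0))) ** 365

    print(f"Gold Standard {p_gold:.4f}   Naive (single realisation) {p_naive:.4f}")
    ```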

  19. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next, we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables, we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
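    A minimal scikit-learn sketch of the contrast drawn above (synthetic data and the generic API, not the paper's simulation design):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("random forest", RandomForestClassifier(random_state=0))]:
        prob = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(f"{name}: Brier score {brier_score_loss(y_te, prob):.3f}")
    ```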

  20. Experimental estimation of the photons visiting probability profiles in time-resolved diffuse reflectance measurement

    NASA Astrophysics Data System (ADS)

    Sawosz, P.; Kacprzak, M.; Weigl, W.; Borowska-Solonynko, A.; Krajewski, P.; Zolek, N.; Ciszek, B.; Maniewski, R.; Liebert, A.

    2012-12-01

    A time-gated intensified CCD camera was applied for time-resolved imaging of light penetrating in an optically turbid medium. Spatial distributions of light penetration probability in the plane perpendicular to the axes of the source and the detector were determined at different source positions. Furthermore, visiting probability profiles of diffuse reflectance measurement were obtained by the convolution of the light penetration distributions recorded at different source positions. Experiments were carried out on homogeneous phantoms, more realistic two-layered tissue phantoms based on the human skull filled with Intralipid-ink solution and on cadavers. It was noted that the photons visiting probability profiles depend strongly on the source-detector separation, the delay between the laser pulse and the photons collection window and the complex tissue composition of the human head.

  1. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2003-06-01

    Dissolved dense nonaqueous-phase liquid plumes are persistent, widespread problems in the DOE complex. While perceived as being difficult to degrade, at the Idaho National Engineering and Environmental Laboratory, dissolved trichloroethylene (TCE) is disappearing from the Snake River Plain aquifer (SRPA) by natural attenuation, a finding that saves significant site restoration costs. Acceptance of monitored natural attenuation as a preferred treatment technology requires direct proof of the process and rate of the degradation. Our proposal aims to provide that proof for one such site by testing two hypotheses. First, we believe that realistic values for in situ rates of TCE cometabolism can be obtained by sustaining the putative microorganisms at the low catabolic activities consistent with aquifer conditions. Second, the patterns of functional gene expression evident in these communities under starvation conditions while carrying out TCE cometabolism can be used to diagnose the cometabolic activity in the aquifer itself. Using the cometabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained at this location and validate the long term stewardship of this plume. Realistic terms for cometabolism of TCE will provide marked improvements in DOE's ability to predict and monitor natural attenuation of chlorinated organics at other sites, increase the acceptability of this solution, and provide significant economic and health benefits through this noninvasive remediation strategy. Finally, this project will derive valuable genomic information about the functional attributes of subsurface microbial communities upon which DOE must depend to resolve some of its most difficult contamination issues.

  2. Simultaneous estimation of b-values and detection rates of earthquakes for the application to aftershock probability forecasting

    NASA Astrophysics Data System (ADS)

    Katsura, K.; Ogata, Y.

    2004-12-01

    Reasenberg and Jones [Science, 1989, 1994] proposed aftershock probability forecasting based on the joint distribution [Utsu, J. Fac. Sci. Hokkaido Univ., 1970] of the modified Omori formula of aftershock decay and the Gutenberg-Richter law of magnitude frequency, where the respective parameters are estimated by the maximum likelihood method [Ogata, J. Phys. Earth, 1983; Utsu, Geophys. Bull. Hokkaido Univ., 1965; Aki, Bull. Earthq. Res. Inst., 1965]. The public forecast has been implemented by the responsible agencies in California and Japan. However, a considerable difficulty in the above procedure is that, owing to the contamination of arriving seismic waves, the detection rate of aftershocks is extremely low during the period immediately after the main shock, say, during the first day, when the forecasting is most critical for the public in the affected area. Therefore, for forecasting a probability during such a period, those agencies adopt a generic model with a set of standard parameter values for California or Japan. For an effective and realistic estimation, I propose to utilize the statistical model introduced by Ogata and Katsura [Geophys. J. Int., 1993] for the simultaneous estimation of the b-values of the Gutenberg-Richter law together with the detection rate (probability) of earthquakes in each magnitude band from the provided data of all detected events, where both parameters are allowed to change in time. Thus, by using all detected aftershocks from the beginning of the period, we can estimate the underlying modified Omori rate of both detected and undetected events and their b-value changes, taking the time-varying missing rates of events into account. A similar computation is applied to the ETAS model for complex aftershock activity or regional seismicity where substantial missing events are expected immediately after a large aftershock or another strong earthquake in the vicinity. Demonstrations of the present procedure are shown for recent examples.
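    The two ingredients named above have the standard forms (background notation from the cited literature, not the authors' time-varying extension):

    ```latex
    \log_{10} N(M) = a - bM \quad \text{(Gutenberg--Richter)},
    \qquad
    \nu(t) = \frac{K}{(t + c)^{p}} \quad \text{(modified Omori)}
    ```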

  3. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.

  4. Generalizations and Extensions of the Probability of Superiority Effect Size Estimator

    ERIC Educational Resources Information Center

    Ruscio, John; Gera, Benjamin Lee

    2013-01-01

    Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
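    The estimator in question is often written A = P(X > Y) + 0.5 P(X = Y); a short sketch of the standard computation (our illustration, with invented scores):

    ```python
    import numpy as np

    def prob_superiority(x, y):
        """A = P(X > Y) + 0.5 * P(X = Y), over all cross-group pairs."""
        x, y = np.asarray(x), np.asarray(y)
        greater = (x[:, None] > y[None, :]).mean()
        ties = (x[:, None] == y[None, :]).mean()
        return greater + 0.5 * ties

    treatment = [5.1, 6.3, 7.2, 8.0]
    control = [4.0, 5.0, 6.1, 5.1]
    print(prob_superiority(treatment, control))   # 0.90625
    ```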

  5. Improving quality of sample entropy estimation for continuous distribution probability functions

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2016-05-01

    Entropy is one of the key parameters characterizing the state of a system in statistical physics. Although entropy is defined for systems described by discrete and continuous probability distribution functions (PDFs), in numerous applications the sample entropy is estimated from a histogram, which in effect means that the continuous PDF is represented by a set of probabilities. Such a procedure may lead to ambiguities and even misinterpretation of the results. Within this paper, two possible general algorithms based on continuous PDF estimation are discussed in application to the Shannon and Tsallis entropies. It is shown that the proposed algorithms may improve entropy estimation, particularly in the case of small data sets.

  6. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of the bird abundance, density, and population trends.

  7. Estimating transition probabilities for stage-based population projection matrices using capture-recapture data

    USGS Publications Warehouse

    Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.

    1992-01-01

    In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).

  8. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.
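    The maximum penalized likelihood idea can be stated compactly (a generic roughness-penalty form consistent with this literature; the particular penalty shown is an illustrative choice):

    ```latex
    \hat{f} \;=\; \arg\max_{f \,\ge\, 0,\ \int f = 1}\;
    \sum_{i=1}^{n} \log f(x_i) \;-\; \lambda \int \bigl(f''(x)\bigr)^{2}\, dx
    ```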

  9. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. The
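    The product structure described above can be made explicit (the first two factors are from the abstract; solving for the third is our back-of-envelope reading of how the 1-in-10,000 level translates into a thickness exceedance probability):

    ```latex
    \underbrace{0.008}_{\text{eruption}} \times
    \underbrace{0.180}_{\text{wind toward site}} \times
    \underbrace{P(T \ge t \mid \text{fallout at 200 km})}_{\text{thickness exceedance}}
    = 10^{-4}
    \;\Rightarrow\;
    P(T \ge t \mid \text{fallout}) \approx 0.069
    ```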

  10. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
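    A compact sketch of the removal-model likelihood for the 3/2/5-min split (the counts are invented and a constant per-minute detection rate is a simplifying assumption; the paper's models allow richer structure):

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    intervals = np.array([3.0, 2.0, 5.0])   # minutes per counting interval
    counts = np.array([60, 15, 25])         # invented first-detection counts

    def neg_log_lik(rate):
        # P(still undetected) decays exponentially under a constant rate.
        q_end = np.exp(-rate * np.cumsum(intervals))
        q_start = np.concatenate([[1.0], q_end[:-1]])
        p_k = q_start - q_end               # first detected in interval k
        p_det = 1.0 - q_end[-1]             # detected at all during the count
        return -np.sum(counts * np.log(p_k / p_det))  # condition on detection

    rate = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded").x
    print(f"per-minute rate {rate:.3f}, overall detection probability "
          f"{1.0 - np.exp(-rate * intervals.sum()):.2f}")
    ```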

  11. Estimating probabilities of reservoir storage for the upper Delaware River basin

    USGS Publications Warehouse

    Hirsch, Robert M.

    1981-01-01

    A technique for estimating conditional probabilities of reservoir system storage is described and applied to the upper Delaware River Basin. The results indicate that there is a 73 percent probability that the three major New York City reservoirs (Pepacton, Cannonsville, and Neversink) would be full by June 1, 1981, and only a 9 percent probability that storage would return to the 'drought warning' sector of the operations curve sometime in the next year. In contrast, if restrictions are lifted and there is an immediate return to normal operating policies, the probability of the reservoir system being full by June 1 is 37 percent and the probability that storage would return to the 'drought warning' sector in the next year is 30 percent. (USGS)

  12. Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.

    2016-09-01

    The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass at a dangerous distance from the Earth. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: in the form of points distributed over the entire volume and in the form of points mapped onto the boundary surface. It is demonstrated that a special feature of the multidimensional point distribution in the first variant can lead to an estimated collision probability of zero even for bodies that do collide with the Earth. The probability estimates obtained from the confidence region defined by its boundary surface are free from this disadvantage, even for a considerably smaller number of points.

  13. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since it is assigned moment magnitude (Mw) values in the literature spanning from 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leads to controversy about the real size of the event. The uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination, and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance, and azimuth of the stations used. We stress that the estimate of seismic moment from moment tensor solutions, as well as the estimates of the other kinematic source parameters, cannot be considered absolute values and should be reported with their uncertainties, in a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  14. Impaired probability estimation and decision-making in pathological gambling poker players.

    PubMed

    Linnet, Jakob; Frøslev, Mette; Ramsgaard, Stine; Gebauer, Line; Mouridsen, Kim; Wohlert, Victoria

    2012-03-01

    Poker has gained tremendous popularity in recent years, increasing the risk for some individuals to develop pathological gambling. Here, we investigated cognitive biases in a computerized two-player poker task against a fictive opponent, among 12 pathological gambling poker players (PGP), 10 experienced poker players (ExP), and 11 inexperienced poker players (InP). Players were compared on probability estimation and decision-making with the hypothesis that ExP would have significantly lower cognitive biases than PGP and InP, and that the groups could be differentiated based on their cognitive bias styles. The results showed that ExP had a significantly lower average error margin in probability estimation than PGP and InP, and that PGP played hands with lower winning probability than ExP. Binomial logistic regression showed perfect differentiation (100%) between ExP and PGP, and 90.5% classification accuracy between ExP and InP. Multinomial logistic regression showed an overall classification accuracy of 23 out of 33 (69.7%) between the three groups. The classification accuracy of ExP was higher than that of PGP and InP due to the similarities in probability estimation and decision-making between PGP and InP. These impairments in probability estimation and decision-making of PGP may have implications for assessment and treatment of cognitive biases in pathological gambling poker players.

  15. Differential Survival in Europe and the United States: Estimates Based on Subjective Probabilities of Survival

    PubMed Central

    Delavande, Adeline; Rohwedder, Susann

    2013-01-01

    Cross-country comparisons of differential survival by socioeconomic status (SES) are useful in many domains. Yet, to date, such studies have been rare. Reliably estimating differential survival in a single country has been challenging because it requires rich panel data with a large sample size. Cross-country estimates have proven even more difficult because the measures of SES need to be comparable internationally. We present an alternative method for acquiring information on differential survival by SES. Rather than using observations of actual survival, we relate individuals’ subjective probabilities of survival to SES variables in cross section. To show that subjective survival probabilities are informative proxies for actual survival when estimating differential survival, we compare estimates of differential survival based on actual survival with estimates based on subjective probabilities of survival for the same sample. The results are remarkably similar. We then use this approach to compare differential survival by SES for 10 European countries and the United States. Wealthier people have higher survival probabilities than those who are less wealthy, but the strength of the association differs across countries. Nations with a smaller gradient appear to be Belgium, France, and Italy, while the United States, England, and Sweden appear to have a larger gradient. PMID:22042664

  16. Probability of Error in Estimating States of a Flow of Physical Events

    NASA Astrophysics Data System (ADS)

    Gortsev, A. M.; Solov'ev, A. A.

    2016-09-01

    A flow of physical events (photons, electrons, etc.) is considered. One of the mathematical models of such flows is the MAP flow of events. Analytical results for conditional and unconditional probabilities of erroneous decision in optimal estimation of states of the MAP flow of events are presented.

  17. Hitchhikers on trade routes: A phenology model estimates the probabilities of gypsy moth introduction and establishment.

    PubMed

    Gray, David R

    2010-12-01

    As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia).

  18. Simple indicator kriging for estimating the probability of incorrectly delineating hazardous areas in a contaminated site

    SciTech Connect

    Juang, K.W.; Lee, D.Y.

    1998-09-01

    The probability of incorrectly delineating hazardous areas in a contaminated site is very important for decision-makers because it indicates the magnitude of confidence that decision-makers have in determining areas in need of remediation. In this study, simple indicator kriging (SIK) was used to estimate the probability of incorrectly delineating hazardous areas in a heavy metal-contaminated site, which is located at Taoyuan, Taiwan, and is about 10 ha in area. In the procedure, the values 0 and 1 were assigned to be the stationary means of the indicator codes in the SIK model to represent two hypotheses, hazardous and safe, respectively. The spatial distribution of the conditional probability of heavy metal concentrations lower than a threshold, given each hypothesis, was estimated using SIK. Then, the probabilities of false positives (α) (i.e., the probability of declaring a location hazardous when it is not) and false negatives (β) (i.e., the probability of declaring a location safe when it is not) in delineating hazardous areas for the heavy metal-contaminated site could be obtained. The spatial distribution of the probabilities of false positives and false negatives could help in delineating hazardous areas based on a tolerable probability level of incorrect delineation. In addition, delineation complicated by the cost of remediation, hazards in the environment, and hazards to human health could be made based on the minimum values of α and β. The results suggest that the proposed SIK procedure is useful for decision-makers who need to delineate hazardous areas in a heavy metal-contaminated site.
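
    As a rough illustration of the decision step described above, the sketch below assumes the indicator-kriging stage has already produced, at each grid node, an estimated probability that the concentration falls below the threshold (random numbers stand in for real SIK output here), and maps the resulting α and β surfaces under a simple 0.5 delineation cutoff; the grid size and tolerance are invented.

    ```python
    # A minimal sketch of mapping false-positive/false-negative probabilities
    # from an indicator-kriging probability surface; p_below is a stand-in
    # for real SIK output on a hypothetical grid.
    import numpy as np

    rng = np.random.default_rng(0)
    p_below = rng.random((50, 50))       # est. P(concentration < threshold)

    declare_hazardous = p_below < 0.5    # delineation rule (cutoff 0.5)

    # alpha: probability of declaring a location hazardous when it is safe,
    # i.e., the estimated probability the concentration is below threshold
    # at locations declared hazardous.
    alpha = np.where(declare_hazardous, p_below, 0.0)
    # beta: probability of declaring a location safe when it is hazardous.
    beta = np.where(~declare_hazardous, 1.0 - p_below, 0.0)

    # Flag nodes whose incorrect-delineation probability exceeds a tolerable
    # level (0.2 here) as candidates for additional sampling.
    needs_sampling = np.maximum(alpha, beta) > 0.2
    print(needs_sampling.mean())
    ```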

  19. Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State

    USGS Publications Warehouse

    Frans, Lonna M.

    2008-01-01

    Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
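
    A minimal sketch of how a fitted logistic model of this kind yields exceedance probabilities and "safe depth" estimates; the coefficients and covariate values below are hypothetical placeholders, not the fitted values from the report.

    ```python
    # Hypothetical logistic model for P(nitrate > 2 mg/L); coefficients are
    # illustrative stand-ins, not those estimated in the study.
    import numpy as np
    from scipy.optimize import brentq

    # Intercept, % agricultural land (4-km radius), population density,
    # precipitation, soil drainage class (ordinal), well depth (ft).
    beta = np.array([-1.2, 0.035, 0.002, 0.04, 0.3, -0.004])

    def p_exceed(ag_pct, pop_den, precip, drainage, depth):
        """Probability that nitrate exceeds 2 mg/L at a well."""
        x = np.array([1.0, ag_pct, pop_den, precip, drainage, depth])
        return 1.0 / (1.0 + np.exp(-(beta @ x)))

    # Probability for a hypothetical 145-ft well in an agricultural area:
    print(p_exceed(ag_pct=60, pop_den=50, precip=10, drainage=4, depth=145))

    # Depth at which the exceedance probability drops to 10% (i.e., a
    # 90-percent probability of water below 2 mg/L), found by root-finding:
    depth_90 = brentq(lambda d: p_exceed(60, 50, 10, 4, d) - 0.10, 0, 5000)
    print(f"depth for 90% probability of < 2 mg/L: {depth_90:.0f} ft")
    ```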

  20. Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem

    USGS Publications Warehouse

    O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.

    2006-01-01

    Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
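
    The occupancy estimates discussed above rest on a likelihood that separates occupancy (psi) from detection (p). Below is a minimal single-season sketch of that likelihood fitted to simulated detection histories; the study's data, covariates, and model-selection machinery are not reproduced.

    ```python
    # Single-season occupancy MLE sketch on simulated data: sites with no
    # detections may be occupied-but-missed or truly unoccupied.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_sites, n_visits = 100, 5
    psi_true, p_true = 0.6, 0.3
    z = rng.random(n_sites) < psi_true                           # occupancy state
    y = (rng.random((n_sites, n_visits)) < p_true) & z[:, None]  # detections

    def negloglik(theta):
        psi, p = 1 / (1 + np.exp(-theta))       # logit scale -> probabilities
        det = y.sum(axis=1)
        ll_pos = np.log(psi) + det*np.log(p) + (n_visits - det)*np.log(1 - p)
        ll_zero = np.log(psi * (1 - p)**n_visits + (1 - psi))
        return -np.sum(np.where(det > 0, ll_pos, ll_zero))

    fit = minimize(negloglik, x0=[0.0, 0.0])
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")   # vs true 0.6, 0.3
    ```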

  1. Estimates of annual survival probabilities for adult Florida manatees (Trichechus manatus latirostris)

    USGS Publications Warehouse

    Langtimm, C.A.; O'Shea, T.J.; Pradel, R.; Beck, C.A.

    1998-01-01

    The population dynamics of large, long-lived mammals are particularly sensitive to changes in adult survival. Understanding factors affecting survival patterns is therefore critical for developing and testing theories of population dynamics and for developing management strategies aimed at preventing declines or extinction in such taxa. Few studies have used modern analytical approaches for analyzing variation and testing hypotheses about survival probabilities in large mammals. This paper reports a detailed analysis of annual adult survival in the Florida manatee (Trichechus manatus latirostris), an endangered marine mammal, based on a mark-recapture approach. Natural and boat-inflicted scars distinctively 'marked' individual manatees that were cataloged in a computer-based photographic system. Photo-documented resightings provided 'recaptures.' Using open population models, annual adult-survival probabilities were estimated for manatees observed in winter in three areas of Florida: Blue Spring, Crystal River, and the Atlantic coast. After using goodness-of-fit tests in Program RELEASE to search for violations of the assumptions of mark-recapture analysis, survival and sighting probabilities were modeled under several different biological hypotheses with Program SURGE. Estimates of mean annual probability of sighting varied from 0.948 for Blue Spring to 0.737 for Crystal River and 0.507 for the Atlantic coast. At Crystal River and Blue Spring, annual survival probabilities were best estimated as constant over the study period at 0.96 (95% CI = 0.951-0.975 and 0.900-0.985, respectively). On the Atlantic coast, where manatees are impacted more by human activities, annual survival probabilities had a significantly lower mean estimate of 0.91 (95% CI = 0.887-0.926) and varied unpredictably over the study period. For each study area, survival did not differ between sexes and was independent of relative adult age. The high constant adult-survival probabilities estimated

  2. On the Estimation of Detection Probabilities for Sampling Stream-Dwelling Fishes.

    SciTech Connect

    Peterson, James T.

    1999-11-01

    To examine the adequacy of fish probability of detection estimates, I examined distributional properties of survey and monitoring data for bull trout (Salvelinus confluentus), brook trout (Salvelinus fontinalis), westslope cutthroat trout (Oncorhynchus clarki lewisi), chinook salmon parr (Oncorhynchus tshawytscha), and steelhead/redband trout (Oncorhynchus mykiss spp.), from 178 streams in the Interior Columbia River Basin. Negative binomial dispersion parameters varied considerably among species and streams, but were significantly (P<0.05) positively related to fish density. Across streams, the variances in fish abundances differed greatly among species and indicated that the data for all species were overdispersed with respect to the Poisson (i.e., the variances exceeded the means). This significantly affected Poisson probability of detection estimates, which were the highest across species and were, on average, 3.82, 2.66, and 3.47 times greater than baseline values. Required sample sizes for species detection at the 95% confidence level were also lowest for the Poisson, which underestimated sample size requirements by an average of 72% across species. Negative binomial and Poisson-gamma probability of detection and sample size estimates were more accurate than the Poisson and generally less than 10% from baseline values. My results indicate the Poisson and binomial assumptions often are violated, which results in probability of detection estimates that are biased high and sample size estimates that are biased low. To increase the accuracy of these estimates, I recommend that future studies use predictive distributions that can incorporate multiple sources of uncertainty or excess variance and that all distributional assumptions be explicitly tested.

  3. Effects of prior detections on estimates of detection probability, abundance, and occupancy

    USGS Publications Warehouse

    Riddle, Jason D.; Mordecai, Rua S.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    Survey methods that account for detection probability often require repeated detections of individual birds or repeated visits to a site to conduct counts or collect presence-absence data. Initial encounters with individual species or individuals of a species could influence detection probabilities for subsequent encounters. For example, observers may be more likely to redetect a species or individual once they are aware of the presence of that species or individual at a particular site. Not accounting for these effects could result in biased estimators of detection probability, abundance, and occupancy. We tested for effects of prior detections in three data sets that differed dramatically by species, geographic location, and method of counting birds. We found strong support (AIC weights from 83% to 100%) for models that allowed for the effects of prior detections. These models produced estimates of detection probability, abundance, and occupancy that differed substantially from those produced by models that ignored the effects of prior detections. We discuss the consequences of the effects of prior detections on estimation for several sampling methods and provide recommendations for avoiding these effects through survey design or by modeling them when they cannot be avoided.

  4. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.

  5. Estimating Super Heavy Element Event Random Probabilities Using Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Stoyer, Mark; Henderson, Roger; Kenneally, Jacqueline; Moody, Kenton; Nelson, Sarah; Shaughnessy, Dawn; Wilk, Philip

    2009-10-01

    Because superheavy element (SHE) experiments involve very low event rates and low statistics, estimating the probability that a given event sequence is due to random events is extremely important in judging the validity of the data. A Monte Carlo method developed at LLNL [1] is used on recent SHE experimental data to calculate random event probabilities. Current SHE experimental activities in collaboration with scientists at Dubna, Russia will be discussed. [1] N.J. Stoyer et al., Nucl. Instrum. Methods Phys. Res. A 455 (2000) 433.
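
    A generic sketch of the idea (not the LLNL code of [1]): estimate by Monte Carlo the probability that uncorrelated background events mimic one step of a decay chain, here an implantation followed by a random alpha inside a correlation time window; the rate and window values are invented for illustration.

    ```python
    # Monte Carlo estimate of the chance that a random background alpha
    # falls inside a correlation window, mimicking a real decay step.
    import numpy as np

    rng = np.random.default_rng(42)
    rate_alpha = 1e-4     # assumed background alpha rate in the pixel, 1/s
    window = 10.0         # assumed correlation time window, s
    n_trials = 1_000_000

    # Draw the waiting time to the next random alpha in each trial and ask
    # whether it lands inside the correlation window.
    wait = rng.exponential(1.0 / rate_alpha, n_trials)
    p_mc = np.mean(wait < window)
    p_analytic = 1.0 - np.exp(-rate_alpha * window)
    print(p_mc, p_analytic)    # both ~1e-3
    ```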

  6. Using counts to simultaneously estimate abundance and detection probabilities in a salamander community

    USGS Publications Warehouse

    Dodd, C.K.; Dorazio, R.M.

    2004-01-01

    A critical variable in both ecological and conservation field studies is determining how many individuals of a species are present within a defined sampling area. Labor intensive techniques such as capture-mark-recapture and removal sampling may provide estimates of abundance, but there are many logistical constraints to their widespread application. Many studies on terrestrial and aquatic salamanders use counts as an index of abundance, assuming that detection remains constant while sampling. If this constancy is violated, determination of detection probabilities is critical to the accurate estimation of abundance. Recently, a model was developed that provides a statistical approach that allows abundance and detection to be estimated simultaneously from spatially and temporally replicated counts. We adapted this model to estimate these parameters for salamanders sampled over a six-year period in area-constrained plots in Great Smoky Mountains National Park. Estimates of salamander abundance varied among years, but annual changes in abundance did not vary uniformly among species. Except for one species, abundance estimates were not correlated with site covariates (elevation/soil and water pH, conductivity, air and water temperature). The uncertainty in the estimates was so large as to make correlations ineffectual in predicting which covariates might influence abundance. Detection probabilities also varied among species and sometimes among years for the six species examined. We found such a high degree of variation in our counts and in estimates of detection among species, sites, and years as to cast doubt upon the appropriateness of using count data to monitor population trends using a small number of area-constrained survey plots. Still, the model provided reasonable estimates of abundance that could make it useful in estimating population size from count surveys.

  7. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.

  8. Estimating the absolute position of a mobile robot using position probability grids

    SciTech Connect

    Burgard, W.; Fox, D.; Hennig, D.; Schmidt, T.

    1996-12-31

    In order to re-use existing models of the environment mobile robots must be able to estimate their position and orientation in such models. Most of the existing methods for position estimation are based on special purpose sensors or aim at tracking the robot's position relative to the known starting point. This paper describes the position probability grid approach to estimating the robot's absolute position and orientation in a metric model of the environment. Our method is designed to work with standard sensors and is independent of any knowledge about the starting point. It is a Bayesian approach based on certainty grids. In each cell of such a grid we store the probability that this cell refers to the current position of the robot. These probabilities are obtained by integrating the likelihoods of sensor readings over time. Results described in this paper show that our technique is able to reliably estimate the position of a robot in complex environments. Our approach has proven to be robust with respect to inaccurate environmental models, noisy sensors, and ambiguous situations.
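
    A minimal one-dimensional sketch of the position probability grid idea, with an invented corridor world and sensor model: the belief over cells starts uniform (no knowledge of the starting point) and is updated by a motion step and a sensor likelihood at each time step.

    ```python
    # 1-D histogram (grid) localization: motion update spreads probability,
    # sensor update multiplies by the reading likelihood and renormalizes.
    import numpy as np

    n_cells = 20
    doors = {3, 7, 12}                        # cells where the sensor sees "door"
    belief = np.full(n_cells, 1.0 / n_cells)  # uniform prior: unknown start

    def motion_update(belief, step=1, noise=0.1):
        # Shift the belief by the commanded motion, leaking probability to
        # neighboring cells to model odometry noise.
        moved = np.roll(belief, step)
        return (1 - 2*noise)*moved + noise*np.roll(moved, 1) + noise*np.roll(moved, -1)

    def sensor_update(belief, saw_door, p_hit=0.8):
        # Multiply by the likelihood of the reading in each cell, normalize.
        like = np.array([p_hit if ((c in doors) == saw_door) else 1 - p_hit
                         for c in range(n_cells)])
        post = belief * like
        return post / post.sum()

    for saw_door in [True, False, False, False, True]:   # a short trajectory
        belief = motion_update(belief)
        belief = sensor_update(belief, saw_door)
    print(np.argmax(belief), belief.max())   # most probable cell and its mass
    ```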

  9. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
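
    A small simulation illustrating the central point, assuming a simple binomial-quantal release model with invented parameters: with few release sites the amplitude distribution is far from normal, and the normal-theory value 2s^4/(n-1) for the variance of the sample variance deviates noticeably from the true sampling variance.

    ```python
    # Compare the true sampling variance of the sample variance against the
    # normal-theory approximation for a binomial-quantal amplitude model.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sites, p_rel, q = 5, 0.3, 1.0        # few release sites -> non-normal amplitudes
    n_sweeps, n_reps = 50, 20000

    # Each response is q times the number of sites that released;
    # n_reps simulated experiments of n_sweeps sweeps each.
    amps = q * rng.binomial(n_sites, p_rel, size=(n_reps, n_sweeps))
    s2 = amps.var(axis=1, ddof=1)          # sample variance per experiment

    print(s2.var(ddof=1))                        # true variance of the variance
    print(np.mean(2 * s2**2 / (n_sweeps - 1)))   # normal-theory approximation
    ```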

  10. Estimation of nonuniform quantal parameters with multiple-probability fluctuation analysis: theory, application and limitations.

    PubMed

    Silver, R Angus

    2003-12-15

    Synapses are a key determinant of information processing in the central nervous system. Investigation of the mechanisms underlying synaptic transmission at central synapses is complicated by the inaccessibility of synaptic contacts and the fact that their temporal dynamics are governed by multiple parameters. Multiple-probability fluctuation analysis (MPFA) is a recently developed method for estimating quantal parameters from the variance and mean amplitude of evoked steady-state synaptic responses recorded under a range of release probability conditions. This article describes the theoretical basis and the underlying assumptions of MPFA, illustrating how a simplified multinomial model can be used to estimate mean quantal parameters at synapses where quantal size and release probability are nonuniform. Interpretations of the quantal parameter estimates are discussed in relation to uniquantal and multiquantal models of transmission. Practical aspects of this method are illustrated including a new method for estimating quantal size and variability, approaches for optimising data collection, error analysis and a method for identifying multivesicular release. The advantages and limitations of investigating synaptic function with MPFA are explored and contrasted with those for traditional quantal analysis and more recent optical quantal analysis methods.

  11. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key concerns for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.

  12. Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous

    USGS Publications Warehouse

    Cohen, Emily B.; Hostetler, Jeffrey A.; Royle, J. Andrew; Marra, Peter P.

    2014-01-01

    Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate

  13. How should detection probability be incorporated into estimates of relative abundance?

    USGS Publications Warehouse

    MacKenzie, D.I.; Kendall, W.L.

    2002-01-01

    Determination of the relative abundance of two populations, separated by time or space, is of interest in many ecological situations. We focus on two estimators of relative abundance, which assume that the probability that an individual is detected at least once in the survey is either equal or unequal for the two populations. We present three methods for incorporating the collected information into our inference. The first method, proposed previously, is a traditional hypothesis test for evidence that detection probabilities are unequal. However, we feel that, a priori, it is more likely that detection probabilities are actually different; hence, the burden of proof should be shifted, requiring evidence that detection probabilities are practically equivalent. The second method we present, equivalence testing, is one approach to doing so. Third, we suggest that model averaging could be used by combining the two estimators according to derived model weights. These differing approaches are applied to a mark-recapture experiment on Nuttall's cottontail rabbit (Sylvilagus nuttallii) conducted in central Oregon during 1974 and 1975, which has been previously analyzed by other authors.
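
    A small sketch of the third approach, combining the equal- and unequal-detection estimators with Akaike weights (one natural choice of "derived model weights"); the point estimates and AIC values below are invented placeholders.

    ```python
    # Model-averaging two relative-abundance estimators with Akaike weights.
    import numpy as np

    est = np.array([1.45, 1.62])    # hypothetical equal-p and unequal-p estimates
    aic = np.array([210.3, 212.1])  # hypothetical AIC values for the two models

    delta = aic - aic.min()
    w = np.exp(-delta / 2)
    w /= w.sum()                    # Akaike model weights

    print(w, np.dot(w, est))        # weights and the model-averaged estimate
    ```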

  14. Nomogram Estimating the Probability of Intraabdominal Abscesses after Gastrectomy in Patients with Gastric Cancer

    PubMed Central

    Eom, Bang Wool; Joo, Jungnam; Park, Boram; Yoon, Hong Man; Ryu, Keun Won; Kim, Soo Jin

    2015-01-01

    Purpose Intraabdominal abscess is one of the most common reasons for re-hospitalization after gastrectomy. This study aimed to develop a model for estimating the probability of intraabdominal abscesses that can be used during the postoperative period. Materials and Methods We retrospectively reviewed the clinicopathological data of 1,564 patients who underwent gastrectomy for gastric cancer between 2010 and 2012. Twenty-six related markers were analyzed, and multivariate logistic regression analysis was used to develop the probability estimation model for intraabdominal abscess. Internal validation using a bootstrap approach was employed to correct for bias, and the model was then validated using an independent dataset comprising patients who underwent gastrectomy between January 2008 and March 2010. Discrimination and calibration abilities were checked in both datasets. Results The incidence of intraabdominal abscess in the development set was 7.80% (122/1,564). The surgical approach, operating time, pathologic N classification, body temperature, white blood cell count, C-reactive protein level, glucose level, and change in the hemoglobin level were significant predictors of intraabdominal abscess in the multivariate analysis. The probability estimation model that was developed on the basis of these results showed good discrimination and calibration abilities (concordance index=0.828, Hosmer-Lemeshow chi-statistic P=0.274). Finally, we combined both datasets to produce a nomogram that estimates the probability of intraabdominal abscess. Conclusions This nomogram can be useful for identifying patients at a high risk of intraabdominal abscess. Patients at a high risk may benefit from further evaluation or treatment before discharge. PMID:26816657

  15. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring from a fault from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake: the chance of one occurring within the next 20 years is estimated at 3.61%, within 79 years at 13.54%, and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.
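
    For reference, the magnitude 6 figures quoted above follow, up to rounding, from the standard Poisson occurrence probability P = 1 - exp(-t/T) with a 543-year return period; a quick check:

    ```python
    # Poisson probability of at least one event within t years, return period T.
    import numpy as np

    T = 543.0                           # return period, years
    for t in (20, 79, 300):
        print(t, 1 - np.exp(-t / T))    # -> 0.0362, 0.1354, 0.4245
    ```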

  16. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. © 2006 American Statistical Association and the International Biometric Society.

  17. Estimation of pitch angle diffusion rates and precipitation time scales of electrons due to EMIC waves in a realistic field model

    NASA Astrophysics Data System (ADS)

    Kang, Suk-Bin; Min, Kyoung-Wook; Fok, Mei-Ching; Hwang, Junga; Choi, Cheong-Rim

    2015-10-01

    Electromagnetic ion cyclotron (EMIC) waves are closely related to precipitating loss of relativistic electrons in the radiation belts, and thereby, a model of the radiation belts requires inclusion of the pitch angle diffusion caused by EMIC waves. We estimated the pitch angle diffusion rates and the corresponding precipitation time scales caused by H and He band EMIC waves using the Tsyganenko 04 (T04) magnetic field model at their probable regions in terms of geomagnetic conditions. The results correspond to enhanced pitch angle diffusion rates and reduced precipitation time scales compared to those based on the dipole model, up to several orders of magnitude for storm times. While both the plasma density and the magnetic field strength varied in these calculations, the reduction of the magnetic field strength predicted by the T04 model was found to be the main cause of the enhanced diffusion rates relative to those with the dipole model for the same Li values, where Li is defined from the ionospheric foot points of the field lines. We note that the bounce-averaged diffusion rates were roughly proportional to the inversion of the equatorial magnetic field strength and thus suggest that scaling the diffusion rates with the magnetic field strength provides a good approximation to account for the effect of the realistic field model in the EMIC wave-pitch angle diffusion modeling.

  18. Estimating state-transition probabilities for unobservable states using capture-recapture/resighting data

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.

    2002-01-01

    Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.

  19. Calculation of the number of Monte Carlo histories for a planetary protection probability of impact estimation

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack

    2016-07-01

    Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean for a Poisson distribution. F. Garwood (1936, "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse: the limits on the mean μ with two-tailed probability 1-α are μ_lower = χ²(α/2; 2n)/2 and μ_upper = χ²(1-α/2; 2n+2)/2, where χ²(p; k) denotes the inverse (quantile) chi-squared function with k degrees of freedom and n is the estimated number of "successes". (The integral chi-squared function would instead yield the probability as a function of the mean μ and an actual value n.) In a MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
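
    A short sketch of how Garwood's upper limit sizes a Monte Carlo run, using SciPy's inverse chi-squared (percent point) function; the requirement value, confidence level, and guessed hit count below are illustrative, not from any particular mission.

    ```python
    # Size an MC run so the Garwood upper limit on P_I stays below the
    # requirement even if the guessed number of hits occurs.
    from scipy.stats import chi2

    p_max = 1e-4     # assumed maximum allowed probability of impact
    loc = 0.95       # assumed level of confidence
    n_hits = 0       # guessed maximum number of hits in the analysis

    # One-sided Garwood upper limit on the Poisson mean given n observed hits:
    mu_upper = chi2.ppf(loc, 2 * n_hits + 2) / 2.0   # ~3.0 for n=0, 95%

    # Histories needed so that mu_upper / n_histories <= p_max:
    n_histories = mu_upper / p_max
    print(n_histories)   # ~3.0e4 histories for zero hits
    ```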

  20. Estimation of risk probability for gravity-driven pyroclastic flows at Volcan Colima, Mexico

    NASA Astrophysics Data System (ADS)

    Sheridan, Michael F.; Macías, José Luis

    1995-07-01

    Mapped pyroclastic flow terminations at Colima volcano were used to determine energy lines. We assumed straight energy lines, initial flow velocities of zero and flow movement starting from the volcano summit. Heim coefficients (H/L) of the flows plotted on a histogram cluster in two distinct modes. One corresponds to large pyroclastic flows (pumice flows and block-and-ash flows) for which Heim coefficients range from 0.22 to 0.28. This group has a mean value of 0.24 and a standard deviation of 0.021. The other mode corresponds to small block-and-ash avalanches which have Heim coefficients that range from 0.33 to 0.38, a mean value of 0.35 and a standard deviation of 0.025. No flow terminations yield Heim coefficients in the range from 0.28 to 0.33. This break probably separates fluidized pyroclastic flows from less mobile hot rock avalanches. Plots of Heim coefficients on arithmetic probability paper are approximate probability functions for the two types of flows. Heim coefficients calculated for straight lines that connect population centers with the volcano summit can be used with this type of graph to estimate the probability that either type of pyroclastic flow would reach the site. We used this technique to determine risk probabilities for various localities around Colima volcano. These calculations indicate that Laguna Verde, Yerbabuena, Cofradia-El Fresnal, El Naranjal, Atenguillo, La Becerrera, Montitlan and San Antonio have a probability ranging from 99 to 6% of being covered by large pyroclastic flows. Laguna Verde and Yerbabuena are the sites with the highest probability of being reached by small block-and-ash avalanches. The depression situated south-southwest of Colima volcano is an area with a very high probability of being affected by the pyroclastic phenomena considered above. The small avalanche produced by dome collapse of Colima on April 16, 1991 traveled along the barranca El Cordobán toward the area of the highest probability on our map.
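
    A sketch of the risk calculation described above, treating the large-flow Heim coefficients as normal with mean 0.24 and standard deviation 0.021: a flow reaches a site whenever its H/L does not exceed the slope of the straight line from summit to site. The site geometry below is hypothetical, not one of the mapped localities.

    ```python
    # Probability that a large pyroclastic flow, once generated, reaches a
    # site, read from the normal distribution of Heim coefficients.
    from scipy.stats import norm

    mu, sigma = 0.24, 0.021            # large pyroclastic flows (from the histogram)

    def p_reach(summit_drop_m, distance_m):
        slope = summit_drop_m / distance_m     # H/L of the summit-to-site line
        # Flows with Heim coefficient <= this slope travel past the site.
        return norm.cdf(slope, loc=mu, scale=sigma)

    # A hypothetical village 12 km from the summit and 3,300 m below it:
    print(p_reach(3300.0, 12000.0))    # H/L = 0.275 -> ~0.95 probability of being reached
    ```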

  1. Innovative Meta-Heuristic Approach Application for Parameter Estimation of Probability Distribution Model

    NASA Astrophysics Data System (ADS)

    Lee, T. S.; Yoon, S.; Jeong, C.

    2012-12-01

    The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the
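
    A minimal harmony search sketch in the spirit described above, used here to maximize the log-likelihood of a two-parameter gamma PDM on synthetic data; the HS control parameters (memory size, HMCR, PAR, bandwidth) and the sample are illustrative choices, not those of the study.

    ```python
    # Harmony search for ML estimation of a gamma distribution's parameters.
    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(7)
    sample = gamma.rvs(a=2.5, scale=30.0, size=200, random_state=rng)

    def loglik(theta):
        a, scale = theta
        return np.sum(gamma.logpdf(sample, a=a, scale=scale))

    lo, hi = np.array([0.1, 1.0]), np.array([10.0, 200.0])   # search bounds
    hms, hmcr, par, bw, iters = 20, 0.9, 0.3, 0.05, 5000

    # Initialize harmony memory with random solutions and their fitness.
    memory = rng.uniform(lo, hi, size=(hms, 2))
    fitness = np.array([loglik(m) for m in memory])

    for _ in range(iters):
        new = np.empty(2)
        for j in range(2):
            if rng.random() < hmcr:                  # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:               # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                    # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        f = loglik(new)
        worst = fitness.argmin()
        if f > fitness[worst]:                       # replace worst harmony
            memory[worst], fitness[worst] = new, f

    print(memory[fitness.argmax()])   # HS estimate of (shape, scale)
    ```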

  2. An estimate of the probability of capture of a binary star by a supermassive black hole

    NASA Astrophysics Data System (ADS)

    Dremova, G. N.; Dremov, V. V.; Tutukov, A. V.

    2016-08-01

    A simple model for the dynamics of stars located in a sphere with a radius of one-tenth of the central parsec, designed to enable estimation of the probability of capture in the close vicinity (r < 10^-3 pc) of a supermassive black hole (SMBH), is presented. In the case of binary stars, such a capture with a high probability results in the formation of a hyper-velocity star. The population of stars in a sphere of radius <0.1 pc is calculated based on data for the Galactic rotation curve. To simulate the distortion of initially circular orbits of stars, these are subjected to a series of random shock encounters ("kicks"), whose net effect is to "push" these binary systems into the region of potential formation of hyper-velocity stars. The mean crossing time of the border of the close vicinity of the SMBH (r < 10^-3 pc) by the stellar orbit can be used to estimate the probability that a binary system is captured, followed by the possible ejection of a hyper-velocity star.

  3. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139

  4. Empirical comparison of uniform and non-uniform probability sampling for estimating numbers of red-cockaded woodpecker colonies

    USGS Publications Warehouse

    Geissler, P.H.; Moyer, L.M.

    1983-01-01

    Four sampling and estimation methods for estimating the number of red-cockaded woodpecker colonies on National Forests in the Southeast were compared, using samples chosen from simulated populations based on the observed sample. The methods included (1) simple random sampling without replacement using a mean per sampling unit estimator, (2) simple random sampling without replacement with a ratio per pine area estimator, (3) probability proportional to 'size' sampling with replacement, and (4) probability proportional to 'size' without replacement using Murthy's estimator. The survey sample of 274 National Forest compartments (1000 acres each) constituted a superpopulation from which simulated stratum populations were selected with probability inversely proportional to the original probability of selection. Compartments were originally sampled with probabilities proportional to the probabilities that the compartments contained woodpeckers ('size'). These probabilities were estimated with a discriminant analysis based on tree species and tree age. The ratio estimator would have been the best estimator for this survey based on the mean square error. However, if more accurate predictions of woodpecker presence had been available, Murthy's estimator would have been the best. A subroutine to calculate Murthy's estimates is included; it is computationally feasible to analyze up to 10 samples per stratum.
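
    A small sketch of method (3), probability-proportional-to-'size' sampling with replacement, using the Hansen-Hurwitz form of the estimator of the total; the compartment "sizes" and colony counts are simulated stand-ins, and Murthy's without-replacement estimator is not reproduced here.

    ```python
    # PPS sampling with replacement and a Hansen-Hurwitz estimate of the
    # total number of colonies across N compartments.
    import numpy as np

    rng = np.random.default_rng(11)
    N = 274                               # compartments in the superpopulation
    size = rng.random(N) + 0.05           # predicted P(woodpeckers present)
    p = size / size.sum()                 # draw probabilities, proportional to "size"
    colonies = rng.poisson(5 * size)      # true colony counts, correlated with size

    n = 30                                # sample size
    idx = rng.choice(N, size=n, replace=True, p=p)

    # Hansen-Hurwitz estimator of the total: mean of y_i / p_i over the sample.
    total_hat = np.mean(colonies[idx] / p[idx])
    print(total_hat, colonies.sum())
    ```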

  5. Estimating survival and breeding probability for pond-breeding amphibians: a modified robust design

    USGS Publications Warehouse

    Bailey, L.L.; Kendall, W.L.; Church, D.R.; Wilbur, H.M.

    2004-01-01

    Many studies of pond-breeding amphibians involve sampling individuals during migration to and from breeding habitats. Interpreting population processes and dynamics from these studies is difficult because (1) only a proportion of the population is observable each season, while an unknown proportion remains unobservable (e.g., non-breeding adults) and (2) not all observable animals are captured. Imperfect capture probability can be easily accommodated in capture–recapture models, but temporary transitions between observable and unobservable states, often referred to as temporary emigration, are known to cause problems in both open- and closed-population models. We develop a multistate mark–recapture (MSMR) model, using an open-robust design that permits one entry and one exit from the study area per season. Our method extends previous temporary emigration models (MSMR with an unobservable state) in two ways. First, we relax the assumption of demographic closure (no mortality) between consecutive (secondary) samples, allowing estimation of within-pond survival. Also, we add the flexibility to express survival probability of unobservable individuals (e.g., 'non-breeders') as a function of the survival probability of observable animals while in the same terrestrial habitat. This allows for potentially different annual survival probabilities for observable and unobservable animals. We apply our model to a relictual population of eastern tiger salamanders (Ambystoma tigrinum tigrinum). Despite small sample sizes, demographic parameters were estimated with reasonable precision. We tested several a priori biological hypotheses and found evidence for seasonal differences in pond survival. Our methods could be applied to a variety of pond-breeding species and other taxa where individuals are captured entering or exiting a common area (e.g., spawning or roosting area, hibernacula).

  6. Modeling and estimation of stage-specific daily survival probabilities of nests

    USGS Publications Warehouse

    Stanley, T.R.

    2000-01-01

    In studies of avian nesting success, it is often of interest to estimate stage-specific daily survival probabilities of nests. When data can be partitioned by nesting stage (e.g., incubation stage, nestling stage), piecewise application of the Mayfield method or Johnson's method is appropriate. However, when the data contain nests where the transition from one stage to the next occurred during the interval between visits, piecewise approaches are inappropriate. In this paper, I present a model that allows joint estimation of stage-specific daily survival probabilities even when the time of transition between stages is unknown. The model allows interval lengths between visits to nests to vary, and the exact time of failure of nests does not need to be known. The performance of the model at various sample sizes and interval lengths between visits was investigated using Monte Carlo simulations, and it was found that the model performed quite well: bias was small and confidence-interval coverage was at the nominal 95% rate. A SAS program for obtaining maximum likelihood estimates of parameters, and their standard errors, is provided in the Appendix.
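
    For context, a worked example of the classic single-stage Mayfield estimator that the piecewise approach applies stage by stage (the model in this paper generalizes it to unknown transition times); the visit totals below are invented.

    ```python
    # Mayfield daily survival rate: 1 minus failures per nest-day of exposure.
    def mayfield_dsr(exposure_days, failures):
        """Daily survival rate from total exposure and observed failures."""
        return 1.0 - failures / exposure_days

    # Hypothetical data: 620 nest-days of exposure in the incubation stage,
    # with 12 failures observed.
    dsr = mayfield_dsr(620.0, 12)
    print(dsr)              # daily survival ~0.981
    print(dsr ** 14)        # probability of surviving a 14-day incubation stage
    ```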

  7. Estimating superpopulation size and annual probability of breeding for pond-breeding salamanders

    USGS Publications Warehouse

    Kinkead, K.E.; Otis, D.L.

    2007-01-01

    It has long been accepted that amphibians can skip breeding in any given year, and environmental conditions act as a cue for breeding. In this paper, we quantify temporary emigration or nonbreeding probability for mole and spotted salamanders (Ambystoma talpoideum and A. maculatum). We estimated that 70% of mole salamanders may skip breeding during an average rainfall year and 90% may skip during a drought year. Spotted salamanders may be more likely to breed, with only 17% avoiding the breeding pond during an average rainfall year. We illustrate how superpopulations can be estimated using temporary emigration probability estimates. The superpopulation is the total number of salamanders associated with a given breeding pond. Although most salamanders stay within a certain distance of a breeding pond for the majority of their life spans, it is difficult to determine true overall population sizes for a given site if animals are only captured during a brief time frame each year with some animals unavailable for capture at any time during a given year. © 2007 by The Herpetologists' League, Inc.

  8. A logistic regression equation for estimating the probability of a stream in Vermont having intermittent flow

    USGS Publications Warehouse

    Olson, Scott A.; Brouillette, Michael C.

    2006-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.

  9. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694

  11. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head, producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values, producing an injury transfer function (ITF). An injury event scenario (IES) was constructed in which crew member 1, moving along a primary or standard translation path while transferring large-volume equipment, impacts stationary crew member 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainties in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a

  12. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003–04). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1–58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2–32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  13. Extreme Floods and Probability Estimates for Dams: A 2D Distributed Model and Paleoflood Data Approach

    NASA Astrophysics Data System (ADS)

    England, J. F.

    2006-12-01

    Estimates of extreme floods and probabilities are needed in dam safety risk analysis. A multidisciplinary approach was developed to estimate extreme floods that integrated four main elements: radar hydrometeorology, stochastic storm transposition, paleoflood data, and 2d distributed rainfall-runoff modeling. The research focused on developing and applying a two-dimensional, distributed model to simulate extreme floods on the 12,000 km2 Arkansas River above Pueblo, Colorado with return periods up to 10,000 years. The four objectives were to: (1) develop a two-dimensional model suitable for large watersheds (area greater than 2,500 km2); (2) calibrate and validate the model to the June 1921 and May 1894 floods on the Arkansas River; (3) develop a flood frequency curve with the model using the stochastic storm transposition technique; and (4) conduct a sensitivity analysis for initial soil saturation, storm duration and area, and compare the flood frequency curve with gage and paleoflood data. The Two-dimensional Runoff, Erosion and EXport (TREX) model was developed as part of this research. Basin-average rainfall depths and probabilities were estimated using DAD data and stochastic storm transposition with elliptical storms for input to TREX. From these extreme rainstorms, the TREX model was used to estimate a flood frequency curve for this large watershed. Model-generated peak flows were as large as 90,000 to 282,000 ft3/s at Pueblo for 100- to 10,000-year return periods, respectively. Model-generated frequency curves were generally comparable to peak flow and paleoflood data-based frequency curves after radar-based storm location and area limits were applied. The model provides a unique physically-based method for determining flood frequency curves under varied scenarios of antecedent moisture conditions, space and time variability of rainfall and watershed characteristics, and storm center locations.

  14. Estimating the probability of an extinction or major outbreak for an environmentally transmitted infectious disease.

    PubMed

    Lahodny, G E; Gautam, R; Ivanek, R

    2015-01-01

    Indirect transmission through the environment, pathogen shedding by infectious hosts, replication of free-living pathogens within the environment, and environmental decontamination are suspected to play important roles in the spread and control of environmentally transmitted infectious diseases. To account for these factors, the classic Susceptible-Infectious-Recovered-Susceptible epidemic model is modified to include a compartment representing the amount of free-living pathogen within the environment. The model accounts for host demography, direct and indirect transmission, replication of free-living pathogens in the environment, and removal of free-living pathogens by natural death or environmental decontamination. Based on the assumptions of the deterministic model, a continuous-time Markov chain model is developed. An estimate for the probability of disease extinction or a major outbreak is obtained by approximating the Markov chain with a multitype branching process. Numerical simulations illustrate important differences between the deterministic and stochastic counterparts, relevant for outbreak prevention, that depend on indirect transmission, pathogen shedding by infectious hosts, replication of free-living pathogens, and environmental decontamination. The probability of a major outbreak is computed for salmonellosis in a herd of dairy cattle as well as cholera in a human population. An explicit expression for the probability of disease extinction or a major outbreak in terms of the model parameters is obtained for systems with no direct transmission or replication of free-living pathogens. PMID:25198247
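
    For intuition, a single-type simplification of the branching-process calculation (the paper's model is multitype): the extinction probability is the smallest fixed point of the offspring probability generating function, found here by fixed-point iteration under an assumed Poisson offspring law with mean R.

      # Illustrative sketch, not the paper's multitype model: extinction
      # probability q solves q = G(q) with G(s) = exp(R*(s-1)) for
      # Poisson(R) offspring; iterate from s = 0 to reach the smallest root.
      import math

      def extinction_probability(R, tol=1e-12, max_iter=10_000):
          q = 0.0
          for _ in range(max_iter):
              q_new = math.exp(R * (q - 1.0))   # G(q)
              if abs(q_new - q) < tol:
                  return q_new
              q = q_new
          return q

      for R in (0.8, 1.5, 3.0):
          q = extinction_probability(R)
          print(f"R={R}: P(extinction)={q:.4f}, P(major outbreak)={1-q:.4f}")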

  15. Estimation of (n,f) Cross-Sections by Measuring Reaction Probability Ratios

    SciTech Connect

    Plettner, C; Ai, H; Beausang, C W; Bernstein, L A; Ahle, L; Amro, H; Babilon, M; Burke, J T; Caggiano, J A; Casten, R F; Church, J A; Cooper, J R; Crider, B; Gurdal, G; Heinz, A; McCutchan, E A; Moody, K; Punyon, J A; Qian, J; Ressler, J J; Schiller, A; Williams, E; Younes, W

    2005-04-21

    Neutron-induced reaction cross-sections on unstable nuclei are inherently difficult to measure due to target activity and the low intensity of neutron beams. In an alternative approach, named the 'surrogate' technique, one measures the decay probability of the same compound nucleus produced using a stable beam on a stable target to estimate the neutron-induced reaction cross-section. As an extension of the surrogate method, in this paper we introduce a new technique of measuring the fission probabilities of two different compound nuclei as a ratio, which has the advantage of removing most of the systematic uncertainties. This method was benchmarked by measuring the probability of deuteron-induced fission events in coincidence with protons, and forming the ratio P(²³⁶U(d,pf))/P(²³⁸U(d,pf)), which serves as a surrogate for the known cross-section ratio of ²³⁶U(n,f)/²³⁸U(n,f). In addition, the P(²³⁸U(d,d′f))/P(²³⁶U(d,d′f)) ratio as a surrogate for the ²³⁷U(n,f)/²³⁵U(n,f) cross-section ratio was measured for the first time in an unprecedented range of excitation energies.

  16. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
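
    If successive cores are treated as independent samples with the same per-core probability p (an assumption made here for illustration, not stated in the abstract), the number of cores needed to reach a target sampling probability follows directly, as in this sketch.

      # Back-of-envelope sketch: cores needed so that the probability of at
      # least one tumor-sampling core reaches a target, assuming independence:
      #   n >= log(1 - target) / log(1 - p)
      import math

      def cores_needed(p_single, target=0.95):
          return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

      print(cores_needed(0.80))  # 2 cores if one core has an 80% chance
      print(cores_needed(0.50))  # 5 cores at 50%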

  17. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil

    2016-04-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, an innovative statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results, as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use, accurate and can be applied to any region and river. Varouchakis, E. A., Giannakis, G. V., Lilli, M. A., Ioannidou, E., Nikolaidis, N. P., and Karatzas, G. P.: Development of a statistical tool for the estimation of riverbank erosion probability, SOIL (EGU), in print, 2016.

  18. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2010-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross-section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.
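
    A hedged sketch of the kind of model described: fitting a logistic curve for short-circuit probability as a function of applied voltage. The logistic functional form and the trial data below are assumptions made for illustration, not the paper's measurements or its published model.

      # Assumed logistic dose-response fit for P(short | voltage);
      # the (voltage, outcome) trials here are invented.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      volts = np.array([[4.0], [8.0], [12.0], [16.0], [20.0], [24.0], [28.0]])
      shorted = np.array([0, 0, 0, 1, 0, 1, 1])   # 1 = short observed

      model = LogisticRegression().fit(volts, shorted)
      for v in (10.0, 20.0, 30.0):
          p = model.predict_proba([[v]])[0, 1]
          print(f"P(short | {v:.0f} V) = {p:.2f}")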

  20. Inconsistent probability estimates of a hypothesis: the role of contrasting support.

    PubMed

    Bonini, Nicolao; Gonzalez, Michel

    2005-01-01

    This paper studies consistency in the judged probability of a target hypothesis in lists of mutually exclusive nonexhaustive hypotheses. Specifically, it controls the role played by the support of displayed competing hypotheses and the relatedness between the target hypothesis and its alternatives. Three experiments are reported. In all experiments, groups of people were presented with a list of mutually exclusive nonexhaustive causes of a person's death. In the first two experiments, they were asked to judge the probability that each cause was the cause of the person's death. In the third experiment, people were asked to perform a frequency estimation task. Target causes were presented in all lists. Several alternative causes to the targets differed across the lists. Findings show that the judged probability/frequency of a target cause changes as a function of the support of the displayed competing causes. Specifically, it is higher when its competing displayed causes have low rather than high support. Findings are consistent with the contrastive support hypothesis within the support theory. PMID:15779531

  1. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence both techniques do not scale well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
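
    The core idea, binning backed by a hash table so that only occupied bins consume memory, fits in a few lines of Python; the paper's implementation is in C++, so this sketch is illustrative only.

      # Minimal "BASH table" sketch: a dict keyed by bin-index tuples stores
      # counts for occupied bins only, so memory scales with occupied bins
      # rather than exponentially with dimension.
      import numpy as np
      from collections import Counter

      class BashDensity:
          def __init__(self, data, bin_width):
              self.w = bin_width
              self.n, self.d = data.shape
              keys = map(tuple, np.floor(data / bin_width).astype(int))
              self.counts = Counter(keys)          # hash table of occupied bins

          def pdf(self, x):
              key = tuple(np.floor(np.asarray(x) / self.w).astype(int))
              return self.counts.get(key, 0) / (self.n * self.w ** self.d)

      rng = np.random.default_rng(2)
      data = rng.normal(size=(100_000, 5))         # 5-dimensional sample
      est = BashDensity(data, bin_width=0.5)
      print(est.pdf(np.zeros(5)))                  # ~0.01 near the N(0,I) mode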

  2. Estimation of upper bound probabilities for rare events resulting from nearby explosions

    SciTech Connect

    Luck, L.B.

    1998-09-19

    It is sometimes necessary to deploy, transport and store weapons containing high explosives (HE) in proximity. Accident analyses of these activities may include nearby explosion scenarios in which fragments from an exploding (donor) weapon impact a second (acceptor) weapon. Weapon arrays are designed to mitigate consequences to potential acceptor weapons, but unless initiation of an acceptor's HE is impossible, outcomes such as detonation must be considered. This paper describes an approach for estimating upper bound probabilities for fragment-dominated scenarios in which outcomes are expected to be rare events. Other aspects [1,2] of nearby explosion problems were addressed previously. An example scenario is as follows. A donor weapon is postulated to detonate, and fragments of the donor weapon casing are accelerated outward. Some of the fragments may strike a nearby acceptor weapon whose HE is protected by casing materials. Most impacts are not capable of initiating the acceptor's HE. However, a sufficiently large and fast fragment could produce a shock-to-detonation transition (SDT), which will result in detonation of the acceptor. Our approach will work for other outcomes of fragment impact, but this discussion focuses on detonation. Experiments show that detonating weapons typically produce a distribution of casing fragment sizes in which unusually large fragments sometimes occur. Such fragments can occur because fragmentation physics includes predictable aspects as well as those best treated as random phenomena, such as the sizes of individual fragments. Likewise, some of the descriptors of fragment impact can be described as random phenomena, such as fragment orientation at impact (fragments typically are tumbling). Consideration of possibilities resulting from the various manifestations of randomness can lead to worst-case examples that, in turn, lead to the outcomes of concern. For example, an unusually large fragment strikes an acceptor weapon with

  3. Accretion of Fine Particles: Sticking Probability Estimated by Optical Sizing of Fractal Aggregates

    NASA Astrophysics Data System (ADS)

    Sugiura, N.; Higuchi, Y.

    1993-07-01

    Sticking probability of fine particles is an important parameter that determines (1) the settling of fine particles to the equatorial plane of the solar nebula and hence the formation of planetesimals, and (2) the thermal structure of the nebula, which is dependent on the particle size through opacity. It is generally agreed that the sticking probability is 1 for submicrometer particles, but at sizes larger than 1 micrometer, there exist almost no data on the sticking probability. A recent study [1] showed that aggregates (with radius from 0.2 to 2 mm) did not stick when collided at a speed of 0.15 to 4 m/s. Therefore, somewhere between 1 micrometer and 200 micrometers, sticking probabilities of fine particles change from nearly 1 to nearly 0. We have been studying [2,3] sticking probabilities of dust aggregates in this size range using an optical sizing method. The optical sizing method has been well established for spherical particles. This method utilizes the fact that the smaller the size, the larger the angle of the scattered light. For spheres with various sizes, the size distribution is determined by solving Y(i) = M(i,j)X(j), where Y(i) is the scattered light intensity at angle i, X(j) is the number density of spheres with size j, and M(i,j) is the scattering matrix, which is determined by Mie theory. Dust aggregates, which we expect to be present in the early solar nebula, are not solid spheres, but probably have a porous fractal structure. For such aggregates the scattering matrix M(i,j) must be determined by taking account of all the interaction among constituent particles (discrete dipole approximation). Such calculation is possible only for very small aggregates, and for larger aggregates we estimate the scattering matrix by extrapolation, assuming that the fractal nature of the aggregates allows such extrapolation. In the experiments using magnesium oxide fine particles floating in a chamber at ambient pressure, the size distribution (determined by
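
    The inversion step Y(i) = M(i,j)X(j) can be illustrated with non-negative least squares, since number densities cannot be negative. The kernel below is a made-up smooth matrix standing in for the true scattering matrix (Mie theory or discrete-dipole results), so the sketch shows only the solution technique, not the physics.

      # Illustrative NNLS inversion of a scattered-intensity system Y = M X;
      # M here is a synthetic stand-in for the scattering matrix.
      import numpy as np
      from scipy.optimize import nnls

      angles = np.linspace(1, 30, 40)          # scattering angles (deg)
      sizes = np.linspace(0.5, 20, 25)         # particle sizes (micrometers)
      # stand-in kernel: smaller sizes scatter to larger angles
      M = np.exp(-((angles[:, None] * sizes[None, :] / 60.0 - 1.0) ** 2))

      x_true = np.exp(-0.5 * ((sizes - 8.0) / 2.0) ** 2)   # assumed distribution
      y = M @ x_true + 1e-3 * np.random.default_rng(3).normal(size=angles.size)

      x_est, residual = nnls(M, y)             # non-negative size distribution
      print(f"residual norm: {residual:.3e}")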

  4. ANNz2: Photometric Redshift and Probability Distribution Function Estimation using Machine Learning

    NASA Astrophysics Data System (ADS)

    Sadeh, I.; Abdalla, F. B.; Lahav, O.

    2016-10-01

    We present ANNz2, a new implementation of the public software for photometric redshift (photo-z) estimation of Collister & Lahav, which now includes generation of full probability distribution functions (PDFs). ANNz2 utilizes multiple machine learning methods, such as artificial neural networks and boosted decision/regression trees. The objective of the algorithm is to optimize the performance of the photo-z estimation, to properly derive the associated uncertainties, and to produce both single-value solutions and PDFs. In addition, estimators are made available, which mitigate possible problems of non-representative or incomplete spectroscopic training samples. ANNz2 has already been used as part of the first weak lensing analysis of the Dark Energy Survey, and is included in the experiment's first public data release. Here we illustrate the functionality of the code using data from the tenth data release of the Sloan Digital Sky Survey and the Baryon Oscillation Spectroscopic Survey. The code is available for download at http://github.com/IftachSadeh/ANNZ.

  5. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  6. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  7. Real Time Data Management for Estimating Probabilities of Incidents and Near Misses

    NASA Astrophysics Data System (ADS)

    Stanitsas, P. D.; Stephanedes, Y. J.

    2011-08-01

    Advances in real-time data collection, data storage and computational systems have led to development of algorithms for transport administrators and engineers that improve traffic safety and reduce cost of road operations. Despite these advances, problems in effectively integrating real-time data acquisition, processing, modelling and road-use strategies at complex intersections and motorways remain. These are related to increasing system performance in identification, analysis, detection and prediction of traffic state in real time. This research develops dynamic models to estimate the probability of road incidents, such as crashes and conflicts, and incident-prone conditions based on real-time data. The models support integration of anticipatory information and fee-based road use strategies in traveller information and management. Development includes macroscopic/microscopic probabilistic models, neural networks, and vector autoregressions tested via machine vision at EU and US sites.

  8. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions

    USGS Publications Warehouse

    Wenger, S.J.; Freeman, Mary C.

    2008-01-01

    Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data than other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
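
    A simplified sketch of the zero-inflated idea: maximum-likelihood estimation of a zero-inflated Poisson model (occupancy probability psi, abundance mean lambda), omitting the binomial detection layer the paper adds on top; the data are simulated, not the duck or fish datasets.

      # Zero-inflated Poisson MLE sketch: P(y=0) = (1-psi) + psi*exp(-lam),
      # P(y=k>0) = psi * Poisson(k; lam); parameters on logit/log scales.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      def zip_negloglik(params, y):
          psi = 1.0 / (1.0 + np.exp(-params[0]))   # occupancy probability
          lam = np.exp(params[1])                  # abundance mean
          ll_zero = np.log(1.0 - psi + psi * np.exp(-lam))
          ll_pos = np.log(psi) + y * np.log(lam) - lam - gammaln(y + 1.0)
          return -np.sum(np.where(y == 0, ll_zero, ll_pos))

      rng = np.random.default_rng(4)
      occupied = rng.random(500) < 0.6             # true psi = 0.6
      counts = np.where(occupied, rng.poisson(2.5, 500), 0)

      fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,))
      print(f"psi ~ {1/(1+np.exp(-fit.x[0])):.2f}, lambda ~ {np.exp(fit.x[1]):.2f}")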

  9. EROS --- automated software system for ephemeris calculation and estimation of probability domain (Abstract)

    NASA Astrophysics Data System (ADS)

    Skripnichenko, P.; Galushina, T.; Loginova, M.

    2015-08-01

    This work is devoted to the description of the software EROS (Ephemeris Research and Observation Services), which is being developed jointly by the astronomy department of Ural Federal University and Tomsk State University. This software provides ephemeris support for positional observations. Its most interesting feature is the automation of the entire observation-preparation process, from the determination of the night duration to the calculation of ephemerides and the generation of an observation schedule. The accuracy of an ephemeris calculation depends mostly on the precision of the initial data, which is determined by the errors of the observations used to derive the orbital elements. If an object has only a small number of observations spread over a short arc of its orbit, it is necessary to calculate not only the nominal orbit but also the probability domain. In this paper, a review ephemeris is understood as a region on the celestial sphere calculated from the probability domain. EROS provides the functionality needed to estimate such review ephemerides. This work contains a description of the software system and results of its use.

  10. Detection probabilities and site occupancy estimates for amphibians at Okefenokee National Wildlife Refuge

    USGS Publications Warehouse

    Smith, L.L.; Barichivich, W.J.; Staiger, J.S.; Smith, Kimberly G.; Dodd, C.K.

    2006-01-01

    We conducted an amphibian inventory at Okefenokee National Wildlife Refuge from August 2000 to June 2002 as part of the U.S. Department of the Interior's national Amphibian Research and Monitoring Initiative. Nineteen species of amphibians (15 anurans and 4 caudates) were documented within the Refuge, including one protected species, the Gopher Frog Rana capito. We also collected 1 y of monitoring data for amphibian populations and incorporated the results into the inventory. Detection probabilities and site occupancy estimates for four species, the Pinewoods Treefrog (Hyla femoralis), Pig Frog (Rana grylio), Southern Leopard Frog (R. sphenocephala) and Carpenter Frog (R. virgatipes) are presented here. Detection probabilities observed in this study indicate that spring and summer surveys offer the best opportunity to detect these species in the Refuge. Results of the inventory suggest that substantial changes may have occurred in the amphibian fauna within and adjacent to the swamp. However, monitoring the amphibian community of Okefenokee Swamp will prove difficult because of the logistical challenges associated with a rigorous statistical assessment of status and trends.

  11. Estimating Landholders’ Probability of Participating in a Stewardship Program, and the Implications for Spatial Conservation Priorities

    PubMed Central

    Adams, Vanessa M.; Pressey, Robert L.; Stoeckl, Natalie

    2014-01-01

    The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements - conservation covenants and management agreements - based on payment level and proportion of properties required to be managed. We then spatially predicted landholders’ probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation. PMID:24892520

  13. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  14. Comparison of disjunctive kriging to generalized probability kriging in application to the estimation of simulated and real data

    SciTech Connect

    Carr, J.R. (Dept. of Geological Sciences); Mao, Nai-hsien

    1992-01-01

    Disjunctive kriging has been compared previously to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. Disjunctive kriging and generalized probability kriging give similar results for the simulated data, which have a normal distribution, but differ considerably for the real data set, which has a non-normal distribution.

  15. Estimation of the failure probability during EGS stimulation based on borehole data

    NASA Astrophysics Data System (ADS)

    Meller, C.; Kohl, Th.; Gaucher, E.

    2012-04-01

    In recent times the search for alternative sources of energy has been fostered by the scarcity of fossil fuels. With its ability to permanently provide electricity or heat with little emission of CO2, geothermal energy will have an important share in the energy mix of the future. Within Europe, scientists have identified many locations with conditions suitable for Enhanced Geothermal System (EGS) projects. In order to provide sufficiently high reservoir permeability, EGS require borehole stimulations prior to installation of power plants (Gérard et al, 2006). Induced seismicity during water injection into the reservoirs of EGS is a factor that currently can be neither predicted nor controlled. Often, people living near EGS projects are frightened by the smaller earthquakes occurring during stimulation or injection. As this fear can lead to widespread disapproval of geothermal power plants, it is desirable to find a way to estimate the probability of fractures shearing when water is injected at a given pressure into a geothermal reservoir. This provides knowledge that makes it possible to predict the mechanical behavior of a reservoir in response to a change in pore pressure conditions. In the present study an approach for estimating the shearing probability based on statistical analyses of fracture distribution, orientation and clusters, together with their geological properties, is proposed. Based on geophysical logs of five wells in Soultz-sous-Forêts, France, and with the help of statistical tools, the Mohr criterion, geological and mineralogical properties of the host rock and the fracture fillings, correlations between the wells are analyzed. This is achieved with the custom MATLAB code Fracdens, which enables us to statistically analyze the log files in different ways. With the application of a pore pressure change, the evolution of the critical pressure on the fractures can be determined. A special focus is on the clay fillings of the fractures and how they reduce
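
    A hedged sketch of the underlying calculation: resolve an assumed principal stress state onto sampled fracture planes and count the fraction meeting a Mohr-Coulomb slip criterion as pore pressure rises. All parameter values and orientations are invented for illustration, not Soultz-sous-Forêts data.

      # Fraction of fractures critically stressed vs. pore pressure increase,
      # under an assumed stress state and Mohr-Coulomb effective-stress slip.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 10_000
      nvec = rng.normal(size=(n, 3))               # random unit normals
      nvec /= np.linalg.norm(nvec, axis=1, keepdims=True)

      s = np.array([60.0, 45.0, 30.0])             # principal stresses (MPa)
      p0, mu, c = 20.0, 0.6, 0.0                   # pore pressure, friction, cohesion

      sn = (nvec ** 2) @ s                         # normal stress on each plane
      tau = np.sqrt(np.maximum((nvec ** 2) @ s ** 2 - sn ** 2, 0.0))  # shear stress

      for dp in (0.0, 2.0, 5.0, 10.0):
          slipping = tau >= c + mu * (sn - p0 - dp)
          print(f"dP = {dp:4.1f} MPa: {slipping.mean():.1%} of fractures critical")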

  16. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    NASA Astrophysics Data System (ADS)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. An SSM is different from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We

  17. Estimated probabilities and volumes of postwildfire debris flows, a prewildfire evaluation for the upper Blue River watershed, Summit County, Colorado

    USGS Publications Warehouse

    Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.

    2011-01-01

    The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.

  18. Estimating site occupancy rates for aquatic plants using spatial sub-sampling designs when detection probabilities are less than one

    USGS Publications Warehouse

    Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.

    2011-01-01

    Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
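
    A minimal sketch of the basic site occupancy likelihood with constant occupancy psi and detection probability p, fit to a sites-by-visits detection history. The LTRMP analysis described above is richer (habitat covariates, mixture models), so this is illustrative only, with simulated data.

      # Occupancy MLE sketch: sites with detections contribute
      # psi * p^d * (1-p)^(K-d); never-detected sites contribute
      # psi*(1-p)^K + (1-psi).
      import numpy as np
      from scipy.optimize import minimize

      def negloglik(params, Y, K):
          psi = 1.0 / (1.0 + np.exp(-params[0]))   # occupancy probability
          p = 1.0 / (1.0 + np.exp(-params[1]))     # detection probability
          d = Y.sum(axis=1)                        # detections per site
          ll_det = np.log(psi) + d * np.log(p) + (K - d) * np.log(1.0 - p)
          ll_never = np.log(psi * (1.0 - p) ** K + (1.0 - psi))
          return -np.sum(np.where(d > 0, ll_det, ll_never))

      rng = np.random.default_rng(6)
      n_sites, K = 200, 4
      occupied = rng.random(n_sites) < 0.7                      # true psi = 0.7
      Y = (rng.random((n_sites, K)) < 0.4) & occupied[:, None]  # true p = 0.4

      fit = minimize(negloglik, [0.0, 0.0], args=(Y, K))
      psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x))
      print(f"psi ~ {psi_hat:.2f}, p ~ {p_hat:.2f}")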

  19. R-tools for estimating exceedance probabilities of Envelope Curves of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Castellarin, Attilio

    2013-04-01

    Envelope curves of flood flows are classical hydrological tools that graphically summarize the current bound on our experience of extreme floods in a region. Castellarin et al. [2005] introduced Probabilistic Regional Envelope Curves (PRECs) and formulated an empirical estimator of the recurrence interval T associated with the curves themselves. PRECs can be used to estimate the T -year flood (design-flood) for any basin in a given region as a function of the catchment area alone. We present a collection of R-functions that can be used for (1) constructing the empirical envelope curve of flood flows for a given hydrological region and (2) estimating the curve's T on the basis of a mathematical representation of the cross-correlation structure of observed flood sequences. The R functions, which we tested on synthetic regional datasets of annual sequences characterized by different degrees of cross-correlation generated through Monte Carlo resampling, implement the algorithm proposed in Castellarin [2007], providing the user with straightforward means for predicting the exceedance probability 1/T associated with a regional envelope curve, and therefore the T -year flood in any ungauged basin in the region for large and very large T values. Furthermore, the algorithm can be easily coupled with other regional flood frequency analysis procedures to effectively improve the accuracy of flood quantile estimates at high T values [Guse et al., 2010], or extended to rainfall extremes for predicting extreme point-rainfall depths associated with a given duration and recurrence interval in any ungauged site within a region [Viglione et al., 2012]. References Castellarin (2007): Probabilistic envelope curves for design flood estimation at ungauged sites, Water Resour. Res., 43, W04406. Castellarin, Vogel, Matalas (2005): Probabilistic behavior of a regional envelope curve, Water Resour. Res., 41, W06018. Guse, Hofherr, Merz (2010): Introducing empirical and probabilistic regional

  20. A logistic regression equation for estimating the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Archfield, Stacey A.

    2002-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing perennially at a specific site in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing whether streams are perennial or intermittent at a specific site in Massachusetts. This information is needed to assist these environmental agencies, who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending along the length of each side of the stream from the mean annual high-water line along each side of perennial streams, with exceptions in some urban areas. The equation was developed by relating the verified perennial or intermittent status of a stream site to selected basin characteristics of naturally flowing streams (no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, waste-water discharge, and so forth) in Massachusetts. Stream sites used in the analysis were identified as perennial or intermittent on the basis of review of measured streamflow at sites throughout Massachusetts and on visual observation at sites in the South Coastal Basin, southeastern Massachusetts. Measured or observed zero flow(s) during months of extended drought as defined by the 310 Code of Massachusetts Regulations (CMR) 10.58(2)(a) were not considered when designating the perennial or intermittent status of a stream site. The database used to develop the equation included a total of 305 stream sites (84 intermittent- and 89 perennial-stream sites in the State, and 50 intermittent- and 82 perennial-stream sites in the South Coastal Basin). Stream sites included in the database had drainage areas that ranged from 0.14 to 8.94 square miles in the State and from 0.02 to 7.00 square miles in the South Coastal Basin.Results of the logistic regression analysis

  1. Estimating a neutral reference for electroencephalographic recordings: The importance of using a high-density montage and a realistic head model

    PubMed Central

    Liu, Quanying; Balsters, Joshua H.; Baechinger, Marc; van der Groen, Onno; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Objective In electroencephalography (EEG) measurements, the signal of each recording electrode is contrasted with a reference electrode or a combination of electrodes. The estimation of a neutral reference is a long-standing issue in EEG data analysis, which has motivated the proposal of different re-referencing methods, among which are linked-mastoid re-referencing (LMR), average re-referencing (AR) and the reference electrode standardization technique (REST). In this study we quantitatively assessed the extent to which the use of a high-density montage and a realistic head model can impact on the optimal estimation of a neutral reference for EEG recordings. Approach Using simulated recordings generated by projecting specific source activity over the sensors, we assessed to what extent AR, REST and LMR may distort the scalp topography. We examined the impact electrode coverage has on AR and REST, and how accurate the REST reconstruction is for realistic and less realistic (three-layer and single-layer spherical) head models, and with possible uncertainty in the electrode positions. We assessed LMR, AR and REST also in the presence of typical EEG artifacts that are mixed in the recordings. Finally, we applied them to real EEG data collected in a target detection experiment to corroborate our findings on simulated data. Main results Both AR and REST have relatively low reconstruction errors compared to LMR, and REST is less sensitive than AR and LMR to artifacts mixed in the EEG data. For both AR and REST, high electrode density yields low re-referencing reconstruction errors. A realistic head model is critical for REST, leading to a more accurate estimate of a neutral reference compared to spherical head models. With a low-density montage, REST shows a more reliable reconstruction than AR either with a realistic or a three-layer spherical head model. Conversely, with a high-density montage AR yields better results unless precise information on electrode positions is
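
    Two of the re-referencing schemes compared above are one-liners on a channels-by-samples array, as sketched below; REST is omitted because it additionally requires a lead-field matrix from a head model. The channel count and mastoid indices are assumptions for illustration.

      # Average reference (AR): subtract the instantaneous mean over channels.
      # Linked-mastoid reference (LMR): subtract the mean of two mastoid channels.
      import numpy as np

      def average_reference(eeg):
          return eeg - eeg.mean(axis=0, keepdims=True)

      def linked_mastoid_reference(eeg, m1_idx, m2_idx):
          ref = 0.5 * (eeg[m1_idx] + eeg[m2_idx])
          return eeg - ref[None, :]

      rng = np.random.default_rng(7)
      eeg = rng.normal(size=(64, 1000))            # 64 channels, 1000 samples
      ar = average_reference(eeg)
      lmr = linked_mastoid_reference(eeg, m1_idx=10, m2_idx=42)  # assumed indices
      print(ar.mean(axis=0).max())                 # ~0: channel mean removed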

  2. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David; Ishii, Audrey

    2016-06-28

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, generalized skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at

  3. A probability model for evaluating the bias and precision of influenza vaccine effectiveness estimates from case-control studies.

    PubMed

    Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A

    2015-05-01

    As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies of patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing the bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that, in general, estimates from the test-negative design have smaller bias than estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
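
    The test-negative VE estimate itself is one line of arithmetic: one minus the odds ratio of vaccination among test-positive versus test-negative patients. A sketch with hypothetical counts:

        # 2x2 table of hypothetical counts from a test-negative study
        vacc_pos, unvacc_pos = 40, 160     # influenza-positive ARI patients
        vacc_neg, unvacc_neg = 200, 300    # influenza-negative ARI patients

        odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
        ve = 1.0 - odds_ratio
        print(f"VE = {ve:.0%}")            # VE = 62%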

  4. Estimation of the Probable Maximum Flood for a Small Lowland River in Poland

    NASA Astrophysics Data System (ADS)

    Banasik, K.; Hejduk, L.

    2009-04-01

    The planning, design and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in this study to predict the PMF for a small agricultural lowland river basin, the Zagozdzonka (a left tributary of the Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area upstream of the Plachty gauge of 82 km2, has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences (SGGW) since 1962. An over 40-year flow record was used in a previous investigation to predict T-year flood discharges (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the result with the 100-year flood. A new depth-duration curve of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). An exponential formula with an exponent of 0.47, i.e. close to the exponent in the formula for world PMP and in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN method (USDA-SCS, 1986). The CN value, as well as the parameters of the IUH model (Nash, 1957), have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. The variability of the parameter values with the size of the events is discussed in the paper. The results of the analysis show that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct-runoff hydrographs, caused by rainfall events of critical duration, is 4.0. References 1. Banasik K
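
    The CN step referred to above converts a storm depth into effective rainfall with the standard USDA-SCS relations; a sketch with an illustrative CN value rather than the study's calibrated one:

        def scs_cn_runoff(p_mm, cn):
            """Effective rainfall (mm) from storm depth p_mm via the curve-number method."""
            s = 25400.0 / cn - 254.0    # potential maximum retention (mm)
            ia = 0.2 * s                # initial abstraction
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

        print(round(scs_cn_runoff(200.0, 75.0), 1))   # ~125.2 mm of effective rainfall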

  5. Using Bayesian networks to estimate the probability of "NATECH" scenario occurrence

    NASA Astrophysics Data System (ADS)

    Dobes, Pavel; Dlabka, Jakub; Jelšovská, Katarína; Polorecká, Mária; Baudišová, Barbora; Danihelka, Pavel

    2015-04-01

    In the twentieth century, Bayesian statistics and probability were not much used (perhaps not a preferred approach) in the area of natural and industrial risk analysis and management. Nor were they used in the analysis of so-called NATECH accidents (chemical accidents triggered by natural events such as earthquakes, floods, lightning, etc.; ref. E. Krausmann, 2011, doi:10.5194/nhess-11-921-2011). From the beginning, the main role was played by so-called "classical" frequentist probability (ref. Neyman, 1937), which to this day relies especially on the right/false results of experiments and monitoring and does not allow expert beliefs, expectations and judgements to be taken into account (which is, on the other hand, one of the once again well-known pillars of the Bayesian approach to probability). Over the last 20 or 30 years it has been possible to observe, through publications and conferences, a renaissance of Bayesian statistics in many scientific disciplines (including various branches of the geosciences). Is the necessity of a certain level of trust in expert judgment within risk analysis back? After several decades of development in this field, the following hypothesis can be proposed (to be checked): probabilities of complex crisis situations and their TOP events (many NATECH events can be classified as crisis situations or emergencies) cannot be estimated by the classical frequentist approach alone, but also require the Bayesian approach (i.e., with the help of a pre-staged Bayesian network including expert belief and expectation as well as classical frequentist inputs), because there is not always enough quantitative information from the monitoring of historical emergencies, several dependent or independent variables may need to be considered, and, in general, every emergency situation unfolds a little differently. On this topic, the team of authors presents its proposal of a pre-staged, typified Bayesian network model for a specified NATECH scenario.
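
    The kind of pre-staged network proposed here can be illustrated with a two-node toy version (natural trigger -> release); all probabilities below are hypothetical placeholders, not values from the authors' model:

        p_flood = {True: 0.02, False: 0.98}                 # prior on the natural trigger
        p_release_given_flood = {True: 0.15, False: 0.001}  # expert-informed conditional table

        # Marginal probability of a NATECH release (law of total probability):
        p_release = sum(p_flood[f] * p_release_given_flood[f] for f in (True, False))

        # Diagnostic query via Bayes' rule: probability a flood occurred given a release.
        p_flood_given_release = p_flood[True] * p_release_given_flood[True] / p_release
        print(p_release, p_flood_given_release)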

  6. Estimate of the probability of a lightning strike to the Galileo probe

    NASA Astrophysics Data System (ADS)

    Borucki, W. J.

    1985-04-01

    Lightning strikes to aerospace vehicles occur mainly in or near clouds. As the Galileo entry probe will spend most of its operational life in the clouds of Jupiter, which is known to have lightning activity, the present study is concerned with the risk of a lightning strike to the probe. A strike to the probe could cause physical damage to the structure and/or damage to the electronic equipment aboard the probe. It is thought to be possible, for instance, that the instrument failures which occurred on all four Pioneer Venus entry probes at an altitude of 12 km were due to an external electric discharge. The probability of a lightning strike to the Galileo probe is evaluated. It is found that the estimated probability of a strike to the probe is only 0.001, which is about the same as the expected failure rate due to other design factors. In the case of entry probes to cloud-covered planets, a consideration of measures for protecting the vehicle and its payload from lightning appears to be appropriate.

  7. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower-moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
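
    The tail-fitting step can be approximated with standard tools: fit a shifted Weibull to the exceedances of the user-supplied threshold. A sketch using scipy with the location pinned at the threshold (this illustrates the idea only and is not the FITS code):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        data = rng.gumbel(loc=10.0, scale=2.0, size=5000)   # synthetic response peaks

        x_low = np.quantile(data, 0.90)        # user-chosen lower-bound threshold
        tail = data[data > x_low]

        # Shifted Weibull: pin the location at the threshold, fit shape and scale.
        shape, loc, scale = stats.weibull_min.fit(tail, floc=x_low)

        # Exceedance probability of a higher response level under the fitted tail model:
        print(stats.weibull_min.sf(x_low + 5.0, shape, loc, scale))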

  8. Maximum Entropy Estimation of Probability Distribution of Variables in Higher Dimensions from Lower Dimensional Data

    NASA Astrophysics Data System (ADS)

    Das, Jayajit; Mukherjee, Sayak; Hodge, Susan

    2015-07-01

    A common statistical situation concerns inferring an unknown distribution Q(x) from a known distribution P(y), where X (dimension n) and Y (dimension m) have a known functional relationship. Most commonly, n > m. In general, in the absence of additional information, there is no unique solution for Q in those cases. Nevertheless, one may still want to draw some inferences about Q. To this end, we propose a novel maximum entropy (MaxEnt) approach that estimates Q(x) based only on the available data, namely, P(y). The method has the additional advantage that one does not need to explicitly calculate the Lagrange multipliers. In this paper we develop the approach, for both discrete and continuous probability distributions, and demonstrate its validity. We give an intuitive justification as well, and we illustrate with examples.
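
    For a discrete toy case the MaxEnt answer can be written down in closed form: with y = f(x) and only P(y) known, entropy is maximized by spreading each P(y) uniformly over the preimage f^{-1}(y). A sketch under that assumption (the paper's method also covers continuous distributions and avoids explicit Lagrange multipliers):

        from collections import defaultdict
        from itertools import product

        def f(x):                     # known functional relationship y = f(x)
            return x[0] + x[1]

        xs = list(product(range(3), repeat=2))            # x = (x1, x2), each in {0,1,2}
        p_y = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.25, 4: 0.15}  # hypothetical known P(y)

        preimage = defaultdict(list)
        for x in xs:
            preimage[f(x)].append(x)

        # MaxEnt estimate: each P(y) spread uniformly over its preimage.
        q_x = {x: p_y[y] / len(pre) for y, pre in preimage.items() for x in pre}
        assert abs(sum(q_x.values()) - 1.0) < 1e-12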

  9. A New Approach to Estimating the Probability for β-delayed Neutron Emission

    SciTech Connect

    McCutchan, E.A.; Sonzogni, A.A.; Johnson, T.D.; Abriola, D.; Birch, M.; Singh, B.

    2014-06-15

    The probability for neutron emission following β decay, Pn, is a crucial property for a wide range of physics and applications including nuclear structure, r-process nucleosynthesis, the control of nuclear reactors, and the post-processing of nuclear fuel. Despite much experimental effort, knowledge of Pn values is still lacking in very neutron-rich nuclei, requiring predictions from either systematics or theoretical models. Traditionally, systematic predictions were made by investigating the Pn value as a function of the decay Q value and the neutron separation energy in the daughter nucleus. A new approach to Pn systematics is presented which incorporates the half-life of the decay and the Q value for β-delayed neutron emission. This prescription correlates the known data better, and thus improves the estimation of Pn values for neutron-rich nuclei. Such an approach can be applied to generate input values for r-process network calculations or in the modeling of advanced fuel cycles.

  10. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    PubMed

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). PMID:26709414

  11. Estimation of probable maximum precipitation at the Kielce Upland (Poland) using meteorological method

    NASA Astrophysics Data System (ADS)

    Suligowski, Roman

    2014-05-01

    Probable maximum precipitation (PMP) is estimated here based upon the physical mechanisms of precipitation formation at the Kielce Upland. The estimation stems from a meteorological analysis of extremely high precipitation events that occurred in the area between 1961 and 2007, causing serious flooding from the rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations from surface weather stations. The most significant elements of this research include the comparison of distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of the air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas of the humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard in the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew-point temperature in the source areas of the warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used to compute the factors characterizing the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed from the precipitable water in the feeding mass and the precipitable water in the source area. The system effectiveness factor (an indicator of the maximum inflow velocity and the maximum velocity in the zone of fronts or ascending currents, forced by orography) is computed from the quotient of precipitable water in
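
    The maximization step common to meteorological PMP methods is simple arithmetic: scale an observed storm depth by the ratio of maximum to observed precipitable water. Numbers below are illustrative only, not the study's values:

        observed_depth_mm = 60.0   # observed storm rainfall
        w_storm_mm = 45.0          # precipitable water during the storm
        w_max_mm = 80.0            # maximum (climatological) precipitable water

        maximized_depth_mm = observed_depth_mm * (w_max_mm / w_storm_mm)
        print(round(maximized_depth_mm, 1))   # 106.7 mm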

  12. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-06-01

    Dissolved dense nonaqueous-phase liquid plumes are persistent, widespread problems in the DOE complex. At the Idaho National Engineering and Environmental Laboratory, dissolved trichloroethylene (TCE) is disappearing from the Snake River Plain aquifer (SRPA) by natural attenuation, a finding that saves significant site restoration costs. Acceptance of monitored natural attenuation as a preferred treatment technology requires direct evidence of the processes and rates of the degradation. Our proposal aims to provide that evidence for one such site by testing two hypotheses. First, we believe that realistic values for in situ rates of TCE cometabolism can be obtained by sustaining the putative microorganisms at the low catabolic activities consistent with aquifer conditions. Second, the patterns of functional gene expression evident in these communities under starvation conditions while carrying out TCE cometabolism can be used to diagnose the cometabolic activity in the aquifer itself. Using the cometabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained at this location and validate the long-term stewardship of this plume. Realistic terms for cometabolism of TCE will provide marked improvements in DOE's ability to predict and monitor natural attenuation of chlorinated organics at other sites, increase the acceptability of this solution, and provide significant economic and health benefits through this noninvasive remediation strategy. Finally, this project aims to derive valuable genomic information about the functional attributes of subsurface microbial communities upon which DOE must depend to resolve some of its most difficult contamination issues.

  13. Nonparametric estimation of transition probabilities in the non-Markov illness-death model: A comparative study.

    PubMed

    de Uña-Álvarez, Jacobo; Meira-Machado, Luís

    2015-06-01

    Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have traditionally been estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contain the support of the lifetime distribution, which is often not the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and of the aforementioned assumption about the censoring support. We explore the finite-sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed.
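
    With fully observed (uncensored) histories, a non-Markov transition probability is just a conditional proportion among subjects in the starting state; the estimators above extend this idea to right-censored data. A sketch for "dead by time t given ill at time s" on synthetic data:

        import numpy as np

        rng = np.random.default_rng(8)
        n = 10_000
        # Synthetic uncensored histories: illness onset time (inf if never ill), death time.
        t_ill = np.where(rng.random(n) < 0.6, rng.exponential(5.0, n), np.inf)
        t_death = rng.exponential(10.0, n)

        s, t = 3.0, 8.0
        ill_at_s = (t_ill <= s) & (t_death > s)   # in the "ill" state at time s

        # P(dead by t | ill at s): a conditional proportion, no Markov assumption needed.
        print(round((t_death[ill_at_s] <= t).mean(), 3))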

  14. Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas

    NASA Astrophysics Data System (ADS)

    England, J. F.; Caldwell, R. J.; Sankovich, V.

    2011-12-01

    Extreme storm rainfall data are essential in assessing potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the decades since, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the southeastern United States (e.g., Hurricane Floyd, 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern whether recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to the point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm-total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open-source software has made this process time- and resource-efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for

  15. Is expert opinion reliable when estimating transition probabilities? The case of HCV-related cirrhosis in Egypt

    PubMed Central

    2014-01-01

    Background Data on HCV-related cirrhosis progression are scarce in developing countries in general, and in Egypt in particular. The objective of this study was to estimate the probability of death and transition between different health stages of HCV (compensated cirrhosis, decompensated cirrhosis and hepatocellular carcinoma) for an Egyptian population of patients with HCV-related cirrhosis. Methods We used the “elicitation of expert opinions” method to obtain collective knowledge from a panel of 23 Egyptian experts (among whom 17 were hepatologists or gastroenterologists and 2 were infectiologists). The questionnaire was based on virtual medical cases and asked the experts to assess probability of death or probability of various cirrhosis complications. The design was a Delphi study: we attempted to obtain a consensus between experts via a series of questionnaires interspersed with group response feedback. Results We found substantial disparity between experts’ answers, and no consensus was reached at the end of the process. Moreover, we obtained high death probability and high risk of hepatocellular carcinoma. The annual transition probability to death was estimated at between 10.1% and 61.5% and the annual probability of occurrence of hepatocellular carcinoma was estimated at between 16.8% and 58.9% (depending on age, gender, time spent in cirrhosis and cirrhosis severity). Conclusions Our results show that eliciting expert opinions is not suited for determining the natural history of diseases due to practitioners’ difficulties in evaluating quantities. Cognitive bias occurring during this type of study might explain our results. PMID:24635942

  16. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results. PMID:15648621

  17. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers. Third, we have systematically considered the aquifer contaminants at different locations in plumes at other DOE sites in order to determine whether MNA is a broadly applicable remediation strategy for chlorinated hydrocarbons (North Wind Inc.). Realistic terms for co-metabolism of TCE will provide marked improvements in DOE’s ability to predict and

  18. Estimating population size of large laboratory colonies of the Formosan subterranean termite using the capture probability equilibrium.

    PubMed

    Su, Nan-Yao

    2013-12-01

    The reliability of the capture probability equilibrium model developed by Su and Lee (2008) for population estimation was tested in three-directional extended foraging arenas connecting to large Plexiglas cubes (96 by 96 by 96 cm) containing approximately 100,000-400,000 workers of the Formosan subterranean termite, Coptotermes formosanus Shiraki. After the release of marked termites in the arenas, the capture probability was averaged for three directions at equal distance from the release point. The daily data of directionally averaged capture probability were subjected to a linear regression with distance as the independent variable to identify the capture probability equilibrium. When the daily data produced significant regressions with regression slope |b| <= 0.05 or |b| ~ 0.05, the directionally averaged capture probability was considered to have reached equilibrium, and the regression intercept was used in the Lincoln index to derive the population estimate. Of the four laboratory colonies tested, three met the criteria, and the equilibrium models yielded population estimates that were not significantly different from the known numbers of workers in the arenas. PMID:24498746
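
    Once the equilibrium capture probability fixes the recapture proportion, the population estimate is the classic Lincoln index; a sketch with hypothetical counts (the study derives the recapture proportion from the regression intercept rather than from a single sample):

        marked_released = 5000    # marked termites released into the arena
        captured = 2000           # total workers captured in a later sample
        recaptured_marked = 55    # marked individuals among those captured

        n_hat = marked_released * captured / recaptured_marked   # N = M * C / R
        print(int(n_hat))         # ~181,818 workers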

  1. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations.

    SciTech Connect

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  2. Empirical estimation of the conditional probability of natech events within the United States.

    PubMed

    Santella, Nicholas; Steinberg, Laura J; Aguirra, Gloria Andrea

    2011-06-01

    Natural disasters are the cause of a sizeable number of hazmat releases, referred to as "natechs." An enhanced understanding of natech probability, allowing for predictions of natech occurrence, is an important step in determining how industry and government should mitigate natech risk. This study quantifies the conditional probabilities of natechs at TRI/RMP and SICS 1311 facilities given the occurrence of hurricanes, earthquakes, tornadoes, and floods. During hurricanes, a higher probability of releases was observed due to storm surge (7.3 releases per 100 TRI/RMP facilities exposed vs. 6.2 for SIC 1311) compared to category 1-2 hurricane winds (5.6 TRI, 2.6 SIC 1311). Logistic regression confirms the statistical significance of the greater propensity for releases at RMP/TRI facilities, and during some hurricanes, when controlling for hazard zone. The probability of natechs at TRI/RMP facilities during earthquakes increased from 0.1 releases per 100 facilities at MMI V to 21.4 at MMI IX. The probability of a natech at TRI/RMP facilities within 25 miles of a tornado was small (∼0.025 per 100 facilities), reflecting the limited area directly affected by tornadoes. Areas inundated during flood events had a probability of 1.1 releases per 100 facilities but demonstrated widely varying natech occurrence during individual events, indicating that factors not quantified in this study such as flood depth and speed are important for predicting flood natechs. These results can inform natech risk analysis, aid government agencies responsible for planning response and remediation after natural disasters, and should be useful in raising awareness of natech risk within industry. PMID:21231945
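
    The headline numbers above are conditional probabilities of the form "releases per 100 exposed facilities"; a sketch of that computation with a normal-approximation confidence interval, using hypothetical counts and treating each facility as having at most one release:

        import math

        releases = 73
        facilities_exposed = 1000

        p = releases / facilities_exposed
        se = math.sqrt(p * (1 - p) / facilities_exposed)
        lo, hi = p - 1.96 * se, p + 1.96 * se
        print(f"{100*p:.1f} per 100 facilities (95% CI {100*lo:.1f}-{100*hi:.1f})")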

  3. Absolute probability estimates of lethal vessel strikes to North Atlantic right whales in Roseway Basin, Scotian Shelf.

    PubMed

    van der Hoop, Julie M; Vanderlaan, Angelia S M; Taggart, Christopher T

    2012-10-01

    Vessel strikes are the primary source of known mortality for the endangered North Atlantic right whale (Eubalaena glacialis). Multi-institutional efforts to reduce mortality associated with vessel strikes include vessel-routing amendments such as the International Maritime Organization voluntary "area to be avoided" (ATBA) in the Roseway Basin right whale feeding habitat on the southwestern Scotian Shelf. Though relative probabilities of lethal vessel strikes have been estimated and published, absolute probabilities remain unknown. We used a modeling approach to determine the regional effect of the ATBA, by estimating reductions in the expected number of lethal vessel strikes. This analysis differs from others in that it explicitly includes a spatiotemporal analysis of real-time transits of vessels through a population of simulated, swimming right whales. Combining automatic identification system (AIS) vessel navigation data and an observationally based whale movement model allowed us to determine the spatial and temporal intersection of vessels and whales, from which various probability estimates of lethal vessel strikes are derived. We estimate one lethal vessel strike every 0.775-2.07 years prior to ATBA implementation, consistent with and more constrained than previous estimates of every 2-16 years. Following implementation, a lethal vessel strike is expected every 41 years. When whale abundance is held constant across years, we estimate that voluntary vessel compliance with the ATBA results in an 82% reduction in the per capita rate of lethal strikes; very similar to a previously published estimate of 82% reduction in the relative risk of a lethal vessel strike. The models we developed can inform decision-making and policy design, based on their ability to provide absolute, population-corrected, time-varying estimates of lethal vessel strikes, and they are easily transported to other regions and situations.

  4. How does new evidence change our estimates of probabilities? Carnap's formula revisited

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris

    1992-01-01

    The formula originally proposed by R. Carnap in his analysis of induction is reviewed, and its natural generalization is presented. A situation is considered in which the probability of a certain event is determined without using standard statistical methods, due to the lack of observations.
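
    For flavor, Carnap's λ-continuum, the standard form of the formula in question, estimates the probability that the next observation falls in category i from the counts so far; the paper's generalization differs, so this sketch is background only:

        def carnap_estimate(n_i, n, k, lam):
            """Carnap's continuum: (n_i + lam/k) / (n + lam).
            lam -> 0 gives the empirical frequency; lam -> infinity the uniform prior."""
            return (n_i + lam / k) / (n + lam)

        # 3 of 10 past observations in category i, k = 4 categories, lambda = 2:
        print(carnap_estimate(3, 10, 4, 2.0))   # ~0.292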

  5. Optimization of next-event estimation probability in Monte Carlo shielding calculations

    SciTech Connect

    Hoffman, T.J.; Tang, J.S.

    1983-01-01

    In Monte Carlo radiation transport calculations with point detectors, next-event estimation is employed to estimate the response at each detector from all collision sites. The computation time required for this estimation process is substantial and often exceeds the time required to generate and process particle histories in a calculation. This estimation from all collision sites is, therefore, very wasteful in Monte Carlo shielding calculations. For example, in the source region and in regions far away from the detectors, the next-event contribution of a particle is often very small and insignificant. A method for reducing this inefficiency is described.
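
    For isotropic scattering in a homogeneous medium, the next-event (point-detector) contribution from a collision site has a simple closed form; a sketch under those assumptions, with illustrative units:

        import math

        def next_event_score(weight, distance, sigma_t):
            """Expected detector contribution: w * exp(-optical depth) * p(mu) / r^2,
            with p(mu) = 1/(4*pi) for isotropic scattering."""
            return weight * math.exp(-sigma_t * distance) / (4.0 * math.pi * distance**2)

        print(next_event_score(weight=1.0, distance=50.0, sigma_t=0.1))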

  6. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-standing investigation. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the histogram method from achieving the resolution required for accurate estimates. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy.
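
    The nearest-neighbor building block referred to above is the Kozachenko-Leonenko estimator; a sketch for a plain Euclidean d-dimensional sample (the paper's contribution is the correct treatment of the joint rotation-translation space, which this sketch does not attempt):

        import numpy as np
        from scipy.spatial import cKDTree
        from scipy.special import digamma, gammaln

        def kl_entropy(samples, k=3):
            """Kozachenko-Leonenko k-nearest-neighbor entropy estimate (nats)."""
            n, d = samples.shape
            r = cKDTree(samples).query(samples, k=k + 1)[0][:, k]  # k-th neighbor, excl. self
            log_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
            return digamma(n) - digamma(k) + log_ball + d * np.mean(np.log(r))

        x = np.random.default_rng(2).standard_normal((4000, 3))
        print(kl_entropy(x))   # true value: 1.5 * log(2*pi*e) ~ 4.257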

  7. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  8. Use of portable antennas to estimate abundance of PIT-tagged fish in small streams: Factors affecting detection probability

    USGS Publications Warehouse

    O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.

    2010-01-01

    Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.

  9. Comparing Two Different Methods to Evaluate the Covariance Matrix of Debris Orbit State in Collision Probability Estimation

    NASA Astrophysics Data System (ADS)

    Cheng, Haowen; Liu, Jing; Xu, Yang

    The evaluation of the covariance matrix is an inevitable step when estimating collision probability based on theory. Generally, there are two different methods to compute the covariance matrix. The first is the so-called Tracking-Delta-Fitting method, first introduced for estimating collision probability from TLE catalogue data, in which the covariance matrix is evaluated by fitting a series of differences between propagated orbits from former data and updated orbit data. In the second method, the covariance matrix is evaluated in the process of orbit determination. Both methods have their difficulties when introduced into collision probability estimation. In the first method, the covariance matrix is evaluated based only on historical orbit data, ignoring information from the latest orbit determination. As a result, the accuracy of the method strongly depends on the stability of the covariance matrix of the latest updated orbit. In the second method, the evaluation of the covariance matrix is acceptable when the determined orbit satisfies the weighted-least-squares estimation; it depends on the accuracy of the observation-error covariance, which is hard to obtain in real applications and which, in our research, is evaluated by analyzing the residuals of orbit determination. In this paper we provide numerical tests to compare these two methods. A simulation of cataloguing objects in the LEO, MEO and GEO regions has been carried out for a time span of 3 months. The influence of orbit maneuvers has been included in the GEO cataloguing simulation. For LEO cataloguing, the effect of atmospheric density variation has also been considered. At the end of the paper, the accuracies of the evaluated covariance matrices and the estimated collision probabilities are tested and compared.
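
    The Tracking-Delta-Fitting idea reduces to estimating a sample covariance over state residuals; a sketch with a hypothetical (n_epochs x 6) array of position/velocity differences between propagated and updated orbits:

        import numpy as np

        rng = np.random.default_rng(3)
        # Hypothetical residuals: 200 epochs x 6 state components (km, km/s).
        deltas = rng.multivariate_normal(
            mean=np.zeros(6),
            cov=np.diag([1e-2, 1e-2, 1e-2, 1e-8, 1e-8, 1e-8]),
            size=200)

        cov = np.cov(deltas, rowvar=False)   # 6x6 empirical covariance of the deltas
        print(cov.shape)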

  10. Estimating the probability distribution of von Mises stress for structures undergoing random excitation. Part 1: Derivation

    SciTech Connect

    Segalman, D.; Reese, G.

    1998-09-01

    The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second-order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good numerical precision. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.
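
    The Monte Carlo baseline the authors improve upon is easy to sketch: sample stress components with prescribed second-order statistics and histogram the resulting von Mises stress. Illustrative, uncorrelated Gaussian components (a real analysis would carry the full covariance implied by the loads and the structure):

        import numpy as np

        def von_mises(sx, sy, sz, txy, tyz, tzx):
            return np.sqrt(0.5 * ((sx - sy)**2 + (sy - sz)**2 + (sz - sx)**2)
                           + 3.0 * (txy**2 + tyz**2 + tzx**2))

        rng = np.random.default_rng(4)
        s = rng.normal(0.0, 100.0, size=(6, 100_000))   # stress components, MPa (hypothetical)
        svm = von_mises(*s)
        print(np.percentile(svm, 99))                   # e.g., a 99th-percentile design check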

  11. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-07-26

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the surplus of magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.

  12. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the surplus of magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  13. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    NASA Astrophysics Data System (ADS)

    de Gregorio, Sofia; Camarda, Marco

    2016-07-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the surplus of magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally has the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.
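
    The surplus quantification suggested above amounts to integrating the excess of a supply proxy over its long-term baseline; a sketch on synthetic soil CO2 flux data (units and thresholds are illustrative, not the study's):

        import numpy as np

        rng = np.random.default_rng(5)
        co2_flux = 100.0 + rng.normal(0.0, 10.0, size=3650)   # daily flux over ten years
        co2_flux[2000:2030] += 80.0                           # a transient surplus episode

        baseline = np.median(co2_flux)
        surplus = np.cumsum(np.clip(co2_flux - baseline, 0.0, None))

        # The cumulative surplus jumps during anomalous input episodes,
        # flagging periods of heightened eruptive probability.
        print(surplus[-1])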

  14. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  15. Information geometric algorithm for estimating switching probabilities in space-varying HMM.

    PubMed

    Nascimento, Jacinto C; Barão, Miguel; Marques, Jorge S; Lemos, João M

    2014-12-01

    This paper proposes an iterative natural gradient algorithm to perform the optimization of switching probabilities in a space-varying hidden Markov model, in the context of human activity recognition in long-range surveillance. The proposed method is a version of the gradient method, developed under an information geometric viewpoint, where the usual Euclidean metric is replaced by a Riemannian metric on the space of transition probabilities. It is shown that the change in metric provides advantages over more traditional approaches, namely: 1) it turns the original constrained optimization into an unconstrained optimization problem; 2) the optimization behaves asymptotically as a Newton method and yields faster convergence than other methods for the same computational complexity; and 3) the natural gradient vector is an actual contravariant vector on the space of probability distributions for which an interpretation as the steepest descent direction is formally correct. Experiments on synthetic and real-world problems, focused on human activity recognition in long-range surveillance settings, show that the proposed methodology compares favorably with the state-of-the-art algorithms developed for the same purpose.
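
    On the probability simplex, a natural-gradient step under the Fisher metric can be realized as an exponentiated-gradient (KL mirror-descent) update, which keeps the switching probabilities positive and normalized without explicit constraints. This generic sketch is related to, but not identical to, the authors' algorithm:

        import numpy as np

        def simplex_step(p, grad, eta=0.1):
            """One unconstrained step on the simplex: p <- p * exp(-eta * grad), renormalized."""
            q = p * np.exp(-eta * grad)
            return q / q.sum()

        p = np.array([0.25, 0.25, 0.5])     # current switching probabilities
        grad = np.array([0.3, -0.1, 0.2])   # Euclidean gradient of the loss at p
        print(simplex_step(p, grad))        # stays positive and sums to 1 by construction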

  16. Realistic loss estimation due to the mirror surfaces in a 10-meter-long high-finesse Fabry-Perot filter cavity.

    PubMed

    Straniero, Nicolas; Degallaix, Jérôme; Flaminio, Raffaele; Pinard, Laurent; Cagnoli, Gianpietro

    2015-08-10

    In order to benefit over the entire frequency range from the injection of squeezed vacuum light at the output of laser gravitational-wave detectors, a small-bandwidth, high-finesse cavity is required. In this paper, we investigate the light losses due to the flatness and the roughness of realistic mirrors in a 10-meter-long Fabry-Perot filter cavity. Using measurements of commercial super-polished mirrors, we were able to estimate the cavity round-trip losses, separating the loss contribution from low and high spatial frequencies. By careful tuning of the cavity g-factor and the incident position of the light on the mirrors, round-trip losses due to imperfect mirror surfaces as low as 3 ppm can be achieved in the simulations. PMID:26367993

  17. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecasting (WRF-ARW) model. A persistent moisture flux convergence pattern, called the Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that the Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first-ranked flood event, the 1997 case, whereas the WRF model is validated for all 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and the YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. perturbations of atmospheric conditions; 2. shifts in atmospheric conditions; 3. replacement of atmospheric conditions among historical events; and 4. creation of a thermodynamically possible worst-case scenario. Moreover, the climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  20. Bistatic-radar estimation of surface-slope probability distributions with applications to the moon.

    NASA Technical Reports Server (NTRS)

    Parker, M. N.; Tyler, G. L.

    1973-01-01

    A method for extracting surface-slope frequency distributions from bistatic-radar data has been developed and applied to the lunar surface. Telemetry transmissions from orbiting Apollo spacecraft were received on the earth after reflection from the lunar surface. The echo-frequency spectrum was related analytically to the probability distribution of lunar slopes. Standard regression techniques were used to solve the inverse problem of finding slope distributions from observed echo-frequency spectra. Data taken simultaneously at two wavelengths, 13 and 116 cm, have yielded diverse slope statistics.

  1. Estimated probability of arsenic in groundwater from bedrock aquifers in New Hampshire, 2011

    USGS Publications Warehouse

    Ayotte, Joseph D.; Cahillane, Matthew; Hayes, Laura; Robinson, Keith W.

    2012-01-01

    The statewide maps generated by the probability models are not designed to predict arsenic concentration in any single well, but they are expected to provide useful information in areas of the State that currently contain little to no data on arsenic concentration. They also may aid in resource decision making, in determining potential risk for private wells, and in ecological-level analysis of disease outcomes. The approach for modeling arsenic in groundwater could also be applied to other environmental contaminants that have potential implications for human health, such as uranium, radon, fluoride, manganese, volatile organic compounds, nitrate, and bacteria.

  2. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2016-01-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a statistical methodology is proposed to predict the probability of the presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the logistic regression methodology. It is developed in two forms, logistic regression and locally weighted logistic regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use and accurate and can be applied to any region and river.
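
    A minimal sketch of the first (unweighted) form, using scikit-learn on synthetic data; the predictor variables and coefficients below are invented stand-ins for the geomorphological and hydrological variables of the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented local variables per river section (columns: bank slope,
# vegetation cover, flow shear stress); y = 1 where erosion is present
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)
print(np.round(model.predict_proba(X[:5])[:, 1], 2))  # P(erosion) per section
```

    The locally weighted form would refit such a model around each prediction site, down-weighting distant observations (for example, via the sample_weight argument of fit).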

  3. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2015-06-01

    Riverbank erosion affects river morphology and local habitat and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a combined deterministic and statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for evaluating the statistical tool's performance. The statistical tool is based on a series of independent local variables and employs the logistic regression methodology. It is developed in two forms, logistic regression and locally weighted logistic regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed methodology is easy to use, accurate, and can be applied to any region and river.

  4. EVALUATING PROBABILITY SAMPLING STRATEGIES FOR ESTIMATING REDD COUNTS: AN EXAMPLE WITH CHINOOK SALMON (Oncorhynchus tshawytscha)

    EPA Science Inventory

    Precise, unbiased estimates of population size are an essential tool for fisheries management. For a wide variety of salmonid fishes, redd counts from a sample of reaches are commonly used to monitor annual trends in abundance. Using a 9-year time series of georeferenced censuses...

  5. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  7. Methods for Estimating Kidney Disease Stage Transition Probabilities Using Electronic Medical Records

    PubMed Central

    Luo, Lola; Small, Dylan; Stewart, Walter F.; Roy, Jason A.

    2013-01-01

    Chronic diseases are often described by stages of severity. Clinical decisions about what to do are influenced by the stage, whether a patient is progressing, and the rate of progression. For chronic kidney disease (CKD), relatively little is known about the transition rates between stages. To address this, we used electronic health records (EHR) data on a large primary care population, which should have the advantage of having both sufficient follow-up time and sample size to reliably estimate transition rates for CKD. However, EHR data have some features that threaten the validity of any analysis. In particular, the timing and frequency of laboratory values and clinical measurements are not determined a priori by research investigators, but rather, depend on many factors, including the current health of the patient. We developed an approach for estimating CKD stage transition rates using hidden Markov models (HMMs), when the level of information and observation time vary among individuals. To estimate the HMMs in a computationally manageable way, we used a “discretization” method to transform daily data into intervals of 30 days, 90 days, or 180 days. We assessed the accuracy and computation time of this method via simulation studies. We also used simulations to study the effect of informative observation times on the estimated transition rates. Our simulation results showed good performance of the method, even when missing data are non-ignorable. We applied the methods to EHR data from over 60,000 primary care patients who have chronic kidney disease (stage 2 and above). We estimated transition rates between six underlying disease states. The results were similar for men and women. PMID:25848580
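
    A minimal sketch of the discretization step described above, assuming day-stamped stage observations; the function name and the last-observation-per-bin rule are illustrative, not the authors' code:

```python
import numpy as np

def discretize(obs_days, obs_stages, bin_days=30):
    """Collapse irregularly timed observations into fixed intervals.

    Returns one stage per interval (the last observation falling in it)
    and NaN for intervals with no observation, which an HMM can then
    treat as missing data.
    """
    obs_days = np.asarray(obs_days)
    n_bins = int(obs_days.max() // bin_days) + 1
    series = np.full(n_bins, np.nan)
    for day, stage in zip(obs_days, obs_stages):
        series[int(day // bin_days)] = stage
    return series

print(discretize([3, 95, 100, 400], [2, 2, 3, 3]))
# bin 0 -> 2, bin 3 -> 3, bin 13 -> 3, all other 30-day bins missing
```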

  8. Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.

  9. Probability-based estimates of site-specific copper water quality criteria for the Chesapeake Bay, USA.

    PubMed

    Arnold, W Ray; Warren-Hicks, William J

    2007-01-01

    The objective of this study was to estimate site- and region-specific dissolved copper criteria for a large embayment, the Chesapeake Bay, USA. The intent is to show the utility of 2 copper saltwater quality site-specific criteria estimation models and associated region-specific criteria selection methods. The criteria estimation models and selection methods are simple, efficient, and cost-effective tools for resource managers, and are proposed as potential substitutes for the US Environmental Protection Agency's water effect ratio methods. Dissolved organic carbon data and the copper criteria models were used to produce probability-based estimates of site-specific copper saltwater quality criteria. Site- and date-specific criteria estimations were made for 88 sites (n = 5,296) in the Chesapeake Bay. The average and range of estimated site-specific chronic dissolved copper criteria for the Chesapeake Bay were 7.5 and 5.3 to 16.9 microg Cu/L. The average and range of estimated site-specific acute dissolved copper criteria for the Chesapeake Bay were 11.7 and 8.3 to 26.4 microg Cu/L. The results suggest that the applicable national and state copper criteria could be increased in much of the Chesapeake Bay while remaining protective. Virginia Department of Environmental Quality copper criteria near the mouth of the Chesapeake Bay, however, need to decrease to protect species of equal or greater sensitivity to that of the marine mussel, Mytilus sp.

  10. Estimation of the probability of exposure to metalworking fluids in a population-based case-control study

    PubMed Central

    Park, Dong-Uk; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Armenti, Karla R.; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe here an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates were developed using both cases and controls, and using controls only. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids, in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and the US production levels by decade found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. PMID:25256317

  11. Student Estimates of Probability and Uncertainty in Advanced Laboratory and Statistical Physics Courses

    NASA Astrophysics Data System (ADS)

    Mountcastle, Donald B.; Bucy, Brandon R.; Thompson, John R.

    2007-11-01

    Equilibrium properties of macroscopic systems are highly predictable as n, the number of particles, approaches and exceeds Avogadro's number; theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity (ω) (where S = k ln(ω)) include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our statistical mechanics students usually gave reasonable answers about the probabilities, but not the relative uncertainties, of the predicted outcomes of such events. However, they reliably predicted that the uncertainty in a measured continuous quantity (e.g., the amount of rainfall) does decrease as the number of measurements increases. Typical textbook presentations assume that students understand that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. This is at odds with our findings, even though most of our students had previously completed mathematics courses in statistics, as well as an advanced electronics laboratory course that included statistical analysis of distributions of dart scores as n increased.
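
    The point the students missed is easy to demonstrate numerically: the relative spread of repeated binary outcomes shrinks as 1/sqrt(n), just as for repeated continuous measurements. A short simulation (illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 10_000):
    heads = rng.binomial(n, 0.5, size=100_000)   # 100,000 repeated runs of n flips
    rel = heads.std() / heads.mean()              # relative uncertainty of the count
    print(n, round(rel, 4), round(1 / np.sqrt(n), 4))
# the relative spread tracks 1/sqrt(n): binary outcomes sharpen with n
# exactly as repeated rainfall measurements do
```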

  12. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M −0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is ~2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
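
    A minimal numerical check of the quoted hazard behaviour for α = 0.5, using the BPT (inverse Gaussian) density directly; the grid and the crude cumulative sum are illustrative:

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian passage time density with mean mu and aperiodicity alpha."""
    return (np.sqrt(mu / (2 * np.pi * alpha**2 * t**3))
            * np.exp(-((t - mu) ** 2) / (2 * mu * alpha**2 * t)))

mu, alpha = 1.0, 0.5
t = np.linspace(1e-4, 10 * mu, 200_000)
pdf = bpt_pdf(t, mu, alpha)
cdf = np.cumsum(pdf) * (t[1] - t[0])           # crude numerical CDF
hazard = pdf / (1.0 - cdf)
i_half, i_late = np.searchsorted(t, [0.5 * mu, 3.0 * mu])
print(hazard[i_half], hazard[i_late])          # ~1/mu near mu/2; ~2/mu for t >> mu
```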

  13. The joint probability distribution function of structure factors with rational indices. V. The estimates.

    PubMed

    Giacovazzo; Siliqi; Fernández-Castaño; Comunale

    1999-05-01

    The probabilistic formulae [Giacovazzo, Siliqi & Fernández-Castaño (1999). Acta Cryst. A55, 512-524] relating standard and half-integral index reflections are modified for practical applications. The experimental tests prove the reliability of the probabilistic relationships. The approach is further developed to explore whether the moduli of the half-integral index reflections can be evaluated in the absence of phase information; i.e. by exploiting the moduli of the standard reflections only. The final formulae indicate that estimates can be obtained, even though the reliability factor is a constant.

  14. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    SciTech Connect

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-02-15

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT) phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the
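
    As a sketch of the final residence-time step only (not of the reconstruction or compensation methods evaluated), the cumulated activity can be approximated by trapezoidal integration of the time-activity curve plus a physical-decay tail; all numbers below are invented, and using the 2.8-day half-life of 111In for the tail is an assumption of this sketch:

```python
import numpy as np

# Hypothetical organ time-activity data from five serial scans
t = np.array([0.2, 1.0, 2.0, 4.0, 7.0])          # days post-injection
frac = np.array([0.30, 0.25, 0.18, 0.10, 0.04])  # fraction of injected activity

area = ((frac[1:] + frac[:-1]) / 2 * np.diff(t)).sum()  # trapezoid over the scans
tail = frac[-1] * 2.8 / np.log(2)   # decay-only tail beyond the last scan (111In)
print(round((area + tail) * 24, 1), "hours residence time")
```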

  15. Estimation of the diffusion rate and crossing probability for biased edge movement between two different types of habitat.

    PubMed

    Xiao, Mingqing; Reeve, John D; Xu, Dashun; Cronin, James T

    2013-09-01

    One of the fundamental goals of ecology is to examine how dispersal affects the distribution and dynamics of insects across natural landscapes. These landscapes are frequently divided into patches of habitat embedded in a matrix of several non-habitat regions, and dispersal behavior could vary within each landscape element as well as the edges between elements. Reaction-diffusion models are a common way of modeling dispersal and species interactions in such landscapes, but to apply these models we also need methods of estimating the diffusion rate and any edge behavior parameters. In this paper, we present a method of estimating the diffusion rate using the mean occupancy time for a circular region. We also use mean occupancy time to estimate a parameter (the crossing probability) that governs one type of edge behavior often used in these models, a biased random walk. These new methods have some advantages over other methods of estimating these parameters, including reduced computational cost and ease of use in the field. They also provide a method of estimating the diffusion rate for a particular location in space, compared to existing methods that represent averages over large areas. We further examine the statistical properties of the new method through simulation, and discuss how mean occupancy time could be estimated in field experiments.
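
    A minimal sketch of the idea of inverting a mean occupancy time, using the classical result that an unbiased 2D random walk started at the centre of a disc of radius R exits after R²/(4D) on average; this is a textbook stand-in for the flavour of the approach, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def exit_time(D, R, dt=1e-3):
    """Simulate 2D Brownian motion from the disc centre until first exit."""
    pos, t = np.zeros(2), 0.0
    while pos @ pos < R * R:
        pos += rng.normal(scale=np.sqrt(2 * D * dt), size=2)
        t += dt
    return t

R, D_true = 1.0, 0.25
mean_T = np.mean([exit_time(D_true, R) for _ in range(300)])
D_hat = R**2 / (4 * mean_T)   # invert E[T] = R^2 / (4 D)
print(D_hat)                  # close to 0.25
```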

  16. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers-Part I

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry

    2008-01-01

    Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).
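
    A minimal sketch of turning such bridged-whisker trials into voltage-specific shorting probabilities with binomial (Jeffreys) intervals; all counts below are invented, not the experiment's data:

```python
import numpy as np
from scipy import stats

# Hypothetical counts: (voltage, bridged trials, trials that shorted)
trials = [(5, 40, 4), (10, 40, 11), (15, 40, 22), (20, 40, 33)]

for v, n, k in trials:
    p = k / n
    lo, hi = stats.beta.ppf([0.025, 0.975], k + 0.5, n - k + 0.5)  # Jeffreys interval
    print(f"{v:>3} V: P(short) ~ {p:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```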

  17. Estimated probability of postwildfire debris flows in the 2012 Whitewater-Baldy Fire burn area, southwestern New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Matherne, Anne Marie; Verdin, Kristine L.

    2012-01-01

    In May and June 2012, the Whitewater-Baldy Fire burned approximately 1,200 square kilometers (300,000 acres) of the Gila National Forest, in southwestern New Mexico. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from 128 basins burned by the Whitewater-Baldy Fire. A pair of empirical hazard-assessment models developed by using data from recently burned basins throughout the intermountain Western United States was used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and for selected drainage basins within the burned area. The models incorporate measures of areal burned extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. In response to the 2-year-recurrence, 30-minute-duration rainfall, modeling indicated that four basins have high probabilities of debris-flow occurrence (greater than or equal to 80 percent). For the 10-year-recurrence, 30-minute-duration rainfall, an additional 14 basins are included, and for the 25-year-recurrence, 30-minute-duration rainfall, an additional eight basins, 20 percent of the total, have high probabilities of debris-flow occurrence. In addition, probability analysis along the stream segments can identify specific reaches of greatest concern for debris flows within a basin. Basins with a high probability of debris-flow occurrence were concentrated in the west and central parts of the burned area, including tributaries to Whitewater Creek, Mineral Creek, and Willow Creek. Estimated debris-flow volumes ranged from about 3,000-4,000 cubic meters (m3) to greater than 500,000 m3 for all design storms modeled. Drainage basins with estimated volumes greater than 500,000 m3 included tributaries to Whitewater Creek, Willow

  18. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point
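
    As a sketch of the two detection components being combined, assuming a removal-style availability model and a half-normal point-transect detection function; the symbols phi, sigma, T, and R and all values are illustrative, not fitted quantities from the study:

```python
import numpy as np

def p_available(phi, T):
    """Removal-style availability: probability a bird cues at least once
    during a count of length T, with cueing rate phi per minute."""
    return 1.0 - np.exp(-phi * T)

def p_perceptible(sigma, R):
    """Half-normal point-transect perceptibility out to radius R:
    (2/R^2) * integral_0^R r * exp(-r^2 / (2 sigma^2)) dr, in closed form."""
    return 2 * sigma**2 / R**2 * (1.0 - np.exp(-R**2 / (2 * sigma**2)))

p_det = p_available(phi=0.4, T=5) * p_perceptible(sigma=60.0, R=100.0)
print(round(p_det, 3))   # overall detection probability per bird
```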

  19. Analysis of a probability-based SATCOM situational awareness model for parameter estimation

    NASA Astrophysics Data System (ADS)

    Martin, Todd W.; Chang, Kuo-Chu; Tian, Xin; Chen, Genshe

    2016-05-01

    Emerging satellite communication (SATCOM) systems are envisioned to incorporate advanced capabilities for dynamically adapting link and network configurations to meet user performance needs. These advanced capabilities require an understanding of the operating environment as well as the potential outcomes of adaptation decisions. A SATCOM situational awareness and decision-making approach is needed that represents the cause and effect linkage of relevant phenomenology and operating conditions on link performance. Similarly, the model must enable a corresponding diagnostic capability that allows SATCOM payload managers to assess likely causes of observed effects. Prior work demonstrated the ability to use a probabilistic reasoning model for a SATCOM situational awareness model. It provided the theoretical basis and demonstrated the ability to realize such a model. This paper presents an analysis of the probabilistic reasoning approach in the context of its ability to be used for diagnostic purposes. A quantitative assessment is presented to demonstrate the impact of uncertainty on estimation accuracy for several key parameters. The paper also discusses how the results could be used by a higher-level reasoning process to evaluate likely causes of performance shortfalls such as atmospheric conditions, pointing errors, and jamming.

  20. An 8-channel neural spike processing IC with unsupervised closed-loop control based on spiking probability estimation.

    PubMed

    Wu, Tong; Yang, Zhi

    2014-01-01

    This paper presents a neural spike processing IC for simultaneous spike detection, alignment, and transmission on 8 recording channels with unsupervised closed-loop control. In this work, spikes are detected according to online estimated spiking probability maps, which reliably predict the possibility of spike occurrence. The closed-loop control has been made possible by estimating firing rates based on alignment results and turning on/off channels individually and automatically. The 8-channel neural spike processing IC, implemented in a 0.13 μm CMOS process, has a varied power dissipation from 36 μW to 54.4 μW per channel at a voltage supply of 1.2 V. The chip also achieves a 380× data rate reduction for the testing in vivo data, allowing easy integration with wireless data transmission modules. PMID:25571180

  1. Developing an Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note, however, that due to contact resistance, electrical shorts may not occur at lower voltage levels. In our first article, we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data, we estimated the probability distribution of an electrical short as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  2. Inverse problems in cancellous bone: Estimation of the ultrasonic properties of fast and slow waves using Bayesian probability theory

    PubMed Central

    Anderson, Christian C.; Bauer, Adam Q.; Holland, Mark R.; Pakula, Michal; Laugier, Pascal; Bretthorst, G. Larry; Miller, James G.

    2010-01-01

    Quantitative ultrasonic characterization of cancellous bone can be complicated by artifacts introduced by analyzing acquired data consisting of two propagating waves (a fast wave and a slow wave) as if only one wave were present. Recovering the ultrasonic properties of overlapping fast and slow waves could therefore lead to enhancement of bone quality assessment. The current study uses Bayesian probability theory to estimate phase velocity and normalized broadband ultrasonic attenuation (nBUA) parameters in a model of fast and slow wave propagation. Calculations are carried out using Markov chain Monte Carlo with simulated annealing to approximate the marginal posterior probability densities for parameters in the model. The technique is applied to simulated data, to data acquired on two phantoms capable of generating two waves in acquired signals, and to data acquired on a human femur condyle specimen. The models are in good agreement with both the simulated and experimental data, and the values of the estimated ultrasonic parameters fall within expected ranges. PMID:21110589

  3. Spinodal Decomposition for the Cahn-Hilliard Equation in Higher Dimensions.Part I: Probability and Wavelength Estimate

    NASA Astrophysics Data System (ADS)

    Maier-Paape, Stanislaus; Wanner, Thomas

    This paper is the first in a series of two papers addressing the phenomenon of spinodal decomposition for the Cahn-Hilliard equation u_t = −Δ(ε²Δu + f(u)) in Ω, where Ω ⊂ ℝⁿ, n ∈ {1, 2, 3}, is a bounded domain with sufficiently smooth boundary, and f is cubic-like, for example f(u) = u − u³. We will present the main ideas of our approach and explain in what way our method differs from known results in one space dimension due to Grant [26]. Furthermore, we derive certain probability and wavelength estimates. The probability estimate is needed to understand why, in a neighborhood of a homogeneous equilibrium u₀ ≡ μ of the Cahn-Hilliard equation with mass μ in the spinodal region, a strongly unstable manifold has dominating effects. This is demonstrated for the linearized equation, but will be essential for the nonlinear setting in the second paper [37] as well. Moreover, we introduce the notion of a characteristic wavelength for the strongly unstable directions.

  4. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
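
    The inflation the authors describe is easy to reproduce under a simplifying independence assumption (the paper itself works with the PDF of the peaks of a correlated random field, which this sketch does not implement):

```python
import numpy as np
from scipy import stats

p1 = stats.norm.sf(4.0)   # per-position false-alarm prob. at a "4 sigma" threshold

# With the signal position unknown, the filter output is examined at many
# positions; assuming N effectively independent positions, the probability
# of at least one spurious detection is
for N in (1, 1_000, 100_000):
    print(N, 1.0 - (1.0 - p1) ** N)
# ~3e-05 for one position, but ~3% for 1,000 and ~96% for 100,000: a
# "4 sigma peak somewhere" is nearly guaranteed in pure noise
```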

  5. Modeling the relationship between most probable number (MPN) and colony-forming unit (CFU) estimates of fecal coliform concentration.

    PubMed

    Gronewold, Andrew D; Wolpert, Robert L

    2008-07-01

    Most probable number (MPN) and colony-forming-unit (CFU) estimates of fecal coliform bacteria concentration are common measures of water quality in coastal shellfish harvesting and recreational waters. Estimating procedures for MPN and CFU have intrinsic variability and are subject to additional uncertainty arising from minor variations in experimental protocol. It has been observed empirically that the standard multiple-tube fermentation (MTF) decimal dilution analysis MPN procedure is more variable than the membrane filtration CFU procedure, and that MTF-derived MPN estimates are somewhat higher on average than CFU estimates, on split samples from the same water bodies. We construct a probabilistic model that provides a clear theoretical explanation for the variability in, and discrepancy between, MPN and CFU measurements. We then compare our model to water quality samples analyzed using both MPN and CFU procedures, and find that the (often large) observed differences between MPN and CFU values for the same water body are well within the ranges predicted by our probabilistic model. Our results indicate that MPN and CFU intra-sample variability does not stem from human error or laboratory procedure variability, but is instead a simple consequence of the probabilistic basis for calculating the MPN. These results demonstrate how probabilistic models can be used to compare samples from different analytical procedures, and to determine whether transitions from one procedure to another are likely to cause a change in quality-based management decisions.
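
    A minimal sketch of the probabilistic basis of the MPN calculation: under a Poisson model, a tube inoculated with volume v is positive with probability 1 − exp(−λv), and the MPN is the maximum-likelihood λ. The design and counts below are a generic 5-tube example, not the paper's data:

```python
import numpy as np
from scipy.optimize import brentq

def mpn(volumes, n_tubes, n_positive):
    """Maximum-likelihood MPN (organisms per mL) for a dilution series."""
    v, n, k = (np.asarray(a, float) for a in (volumes, n_tubes, n_positive))

    def score(lam):  # d(log-likelihood)/d(lam) under the Poisson model
        return np.sum(k * v / np.expm1(lam * v)) - np.sum((n - k) * v)

    return brentq(score, 1e-9, 70.0)

# 5 tubes each at 10, 1, and 0.1 mL; positives 5-3-0
print(round(mpn([10, 1, 0.1], [5, 5, 5], [5, 3, 0]), 2))  # ~0.79 per mL
```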

  6. Bayesian Estimates of Transition Probabilities in Seven Small Lithophytic Orchid Populations: Maximizing Data Availability from Many Small Samples

    PubMed Central

    Tremblay, Raymond L.; McCarthy, Michael A.

    2014-01-01

    Predicting population dynamics for rare species is of paramount importance for evaluating the likelihood of extinction and planning conservation strategies. However, evaluating and predicting population viability can be hindered by a lack of data. Rare species frequently have small populations, so estimates of vital rates are often very uncertain. We evaluated the vital rates of seven small populations of a common epiphytic orchid from two watersheds with varying light environments, using Bayesian methods of parameter estimation. From the Lefkovitch matrices we predicted the deterministic population growth rates, elasticities, stable stage distributions, and the credible intervals of these statistics. Populations were surveyed monthly for 18–34 months. In some of the populations, few or no transitions in some of the vital rates were observed throughout the sampling period; however, we were able to predict the most likely vital rates using a Bayesian model that incorporated the transition rates from the other populations. Asymptotic population growth rate varied among the seven orchid populations. There was little difference in population growth rate between watersheds, even though a difference was expected because of the physical differences arising from differing canopy cover and watershed width. Elasticity analyses of Lepanthes rupestris suggest that growth rate is most sensitive to survival, followed by growth, shrinkage, and the reproductive rates. The Bayesian approach helped to estimate transition probabilities that were uncommon or variable in some populations. Moreover, it increased the precision of the parameter estimates compared with traditional approaches. PMID:25068598
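
    A conjugate Beta-binomial caricature of the borrowing-of-strength idea, with invented counts; the authors' actual model is a full hierarchical Bayesian analysis, which this sketch does not reproduce:

```python
import numpy as np
from scipy import stats

k, n = 2, 3   # e.g. 2 survivals observed among 3 plants in one small population

flat = stats.beta(1 + k, 1 + n - k)       # this population alone, flat prior
pooled = stats.beta(40 + k, 10 + n - k)   # prior worth "40 of 50" from other populations

for name, d in (("flat  ", flat), ("pooled", pooled)):
    print(name, np.round(d.ppf([0.025, 0.5, 0.975]), 2))
# pooling sharply narrows the credible interval for the sparse population
```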

  7. Estimation of the probability of exposure to machining fluids in a population-based case-control study.

    PubMed

    Park, Dong-Uk; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Armenti, Karla R; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates were developed using both cases and controls, and using controls only. The prevalence of machining varied substantially across job groups (0.1 to >0.9), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids, in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and U.S. production levels found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. [Supplementary materials are available for this article. Go to the publisher's online edition of Journal of Occupational and Environmental Hygiene for the following free supplemental resources: a list of keywords in the occupational histories that were used to link study subjects to the metalworking fluids (MWFs) modules; recommendations from the literature on selection of MWFs based on type of machining operation, the metal being machined and decade; popular additives to MWFs; the number and proportion of controls who

  9. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
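
    A minimal at-site sketch of the distribution fit underlying such estimates: method-of-moments log-Pearson Type III quantiles via the frequency factor. This is a simplified stand-in for the expected moments algorithm and the regional-skew weighting used in the study, and the peak data are invented:

```python
import numpy as np
from scipy import stats

def aep_discharge(annual_peaks, aep):
    """At-site log-Pearson Type III quantile by method of moments."""
    x = np.log10(annual_peaks)
    skew = stats.skew(x, bias=False)
    k = stats.pearson3.ppf(1.0 - aep, skew)     # standardized frequency factor
    return 10 ** (x.mean() + k * x.std(ddof=1))

peaks = np.array([3200, 5100, 1800, 8700, 4400, 2900, 6100,
                  3800, 12500, 2400, 5600, 7300, 4100, 3300, 9800])
print(round(aep_discharge(peaks, 0.01)))  # 1-percent AEP ("100-year") discharge
```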

  10. Probability Theory

    NASA Astrophysics Data System (ADS)

    Jaynes, E. T.; Bretthorst, G. Larry

    2003-04-01

    Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.

  11. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
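
    For orientation, the quantity being estimated is the probability a binary arithmetic coder assigns to each symbol before coding it. A generic exponential-forgetting estimator (not CABAC's state machine, and not the authors' context-tree weighting mechanism) illustrates the idea:

```python
import math

def adapt(p1, bit, rate=0.05):
    """Exponential-forgetting estimate of P(bit = 1)."""
    return p1 + rate * (bit - p1)

p1, cost = 0.5, 0.0
bits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 50    # a source with P(1) = 0.8
for bit in bits:
    cost += -math.log2(p1 if bit else 1.0 - p1)  # ideal arithmetic-code length
    p1 = adapt(p1, bit)
print(round(p1, 2), round(cost / len(bits), 3))
# the estimate settles near 0.8 and the cost approaches the source's
# 0.72-bit entropy; a better estimator closes this gap faster
```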

  12. Potential confounds in estimating trial-to-trial correlations between neuronal response and behavior using choice probabilities

    PubMed Central

    Maunsell, John H. R.

    2012-01-01

    Correlations between trial-to-trial fluctuations in the responses of individual sensory neurons and perceptual reports, commonly quantified with choice probability (CP), have been widely used as an important tool for assessing the contributions of neurons to behavior. These correlations are usually weak and often require a large number of trials for a reliable estimate. Therefore, working with measures such as CP warrants care in data analysis as well as rigorous controls during data collection. Here we identify potential confounds that can arise in data analysis and lead to biased estimates of CP, and suggest methods to avoid the bias. In particular, we show that the common practice of combining neuronal responses across different stimulus conditions with z-score normalization can result in an underestimation of CP when the ratio of the numbers of trials for the two behavioral response categories differs across the stimulus conditions. We also discuss the effects of using variable time intervals for quantifying neuronal response on CP measurements. Finally, we demonstrate that serious artifacts can arise in reaction time tasks that use varying measurement intervals if the mean neuronal response and mean behavioral performance vary over time within trials. To emphasize the importance of addressing these concerns in neurophysiological data, we present a set of data collected from V1 cells in macaque monkeys while the animals performed a detection task. PMID:22993262
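
    Choice probability itself is usually computed as the area under the ROC comparing the response distributions for the two behavioral choices, which equals the normalized Mann-Whitney U statistic. A minimal sketch with simulated spike counts:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def choice_probability(resp_pref, resp_null):
    """CP = ROC area for responses on preferred-choice vs other-choice trials."""
    u = mannwhitneyu(resp_pref, resp_null).statistic
    return u / (len(resp_pref) * len(resp_null))

rng = np.random.default_rng(7)
pref = rng.poisson(21, size=120)   # slightly higher counts before "preferred" choices
null = rng.poisson(19, size=80)
print(round(choice_probability(pref, null), 3))  # ~0.55-0.6, a typically weak CP
```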

  13. CFD modelling of most probable bubble nucleation rate from binary mixture with estimation of components' mole fraction in critical cluster

    NASA Astrophysics Data System (ADS)

    Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd

    2016-05-01

    Employing separate mathematical models for the bubble nucleation rates of water vapour and of dissolved air molecules is essential, because the physics by which each forms bubble nuclei differs. The available methods for calculating the bubble nucleation rate in a binary mixture, such as density functional theory, are complicated to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration has been neglected in most studies predicting bubble nucleation rates. In the current work, the most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate depends strongly on the components' mole fractions in a critical cluster; hence, the dissolved gas concentration effect was included in the current work. The possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The components' mole fractions in the critical cluster for the water vapour and dissolved air mixture were estimated by coupling the enhanced classical nucleation theory with the CFD approach. In addition, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via a population balance model.

  14. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  15. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-08-04

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization

  16. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal
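
    A sketch of the conventional three-point beta fit mentioned above (the classic PERT variant, with mean (a + 4m + b)/6); this is not the new in-house method the abstract refers to, and the temperature values are invented:

```python
from scipy import stats

def pert_beta(a, m, b):
    """Beta distribution from minimum a, most likely m, maximum b
    under the classic PERT assumption mean = (a + 4m + b) / 6."""
    alpha = 1.0 + 4.0 * (m - a) / (b - a)
    beta = 1.0 + 4.0 * (b - m) / (b - a)
    return stats.beta(alpha, beta, loc=a, scale=b - a)

# Turbine inlet temperature known only as min / most likely / max (K):
d = pert_beta(1400.0, 1500.0, 1560.0)
print(round(d.mean(), 1), round(d.std(), 1))  # implied mean and spread
```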

  17. Analysis of altered gait cycle duration in amyotrophic lateral sclerosis based on nonparametric probability density function estimation.

    PubMed

    Wu, Yunfeng; Shi, Lei

    2011-04-01

    Human locomotion is regulated by the central nervous system (CNS). The neurophysiological changes in the CNS due to amyotrophic lateral sclerosis (ALS) may cause altered gait cycle duration (stride interval) or other alterations of gait rhythm. This article used a statistical method to analyze the altered stride interval in patients with ALS. We first estimated the probability density functions (PDFs) of stride interval from the outlier-processed gait rhythm time series, by using the nonparametric Parzen-window approach. Based on the PDFs estimated, the mean of the left-foot stride interval and the modified Kullback-Leibler divergence (MKLD) can be computed to serve as dominant features. In the classification experiments, the least squares support vector machine (LS-SVM) with Gaussian kernels was applied to distinguish the stride patterns in ALS patients. According to the results obtained with the stride interval time series recorded from 16 healthy control subjects and 13 patients with ALS, the key findings of the present study are summarized as follows. (1) It is observed that the mean of stride interval computed based on the PDF for the left foot is correlated with that for the right foot in patients with ALS. (2) The MKLD parameter of the gait in ALS is significantly different from that in healthy controls. (3) The diagnostic performance of the nonlinear LS-SVM, evaluated by the leave-one-out cross-validation method, is superior to that obtained by the linear discriminant analysis. The LS-SVM can effectively separate the stride patterns between the groups of healthy controls and ALS patients with an overall accuracy of 82.8% and an area of 0.869 under the receiver operating characteristic curve. PMID:21130016
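
    The two computational ingredients named above can be sketched compactly: a Gaussian Parzen-window estimator for the stride-interval PDFs, and a Kullback-Leibler-type divergence between two estimated PDFs. The authors' exact "modified" KL form and bandwidth are not specified here, so the symmetrized divergence and h below are assumptions, and the stride data are synthetic.

      import numpy as np

      def parzen_pdf(samples, grid, h=0.02):
          """Gaussian-kernel Parzen-window density estimate on a grid."""
          samples = np.asarray(samples)[:, None]
          k = np.exp(-0.5 * ((grid[None, :] - samples) / h) ** 2)
          return k.mean(axis=0) / (h * np.sqrt(2 * np.pi))

      def sym_kld(p, q, dx):
          """Symmetrized KL divergence between two gridded densities."""
          p, q = p + 1e-12, q + 1e-12
          return 0.5 * dx * np.sum(p * np.log(p / q) + q * np.log(q / p))

      grid = np.linspace(0.8, 1.8, 500)          # stride interval (s)
      ctrl = np.random.normal(1.05, 0.03, 300)   # synthetic control strides
      als = np.random.normal(1.20, 0.08, 300)    # synthetic ALS-like strides
      p, q = parzen_pdf(ctrl, grid), parzen_pdf(als, grid)
      print(sym_kld(p, q, grid[1] - grid[0]))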

  18. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    PubMed

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    We here propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of anti-apoptotic protein Bcl-2 known to frequently occur in human cancer was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeam or broadbeam of energetic heavy ions, as well as the WI-38 normal human fibroblasts irradiated with X-ray microbeam. The model assembly reproduced very well the experimentally determined surviving fraction over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth being incorporated into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given critical roles of the frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities.

  20. Estimating debris-flow probability using fan stratigraphy, historic records, and drainage-basin morphology, Interstate 70 highway corridor, central Colorado, U.S.A

    USGS Publications Warehouse

    Coe, J.A.; Godt, J.W.; Parise, M.; Moscariello, A.

    2003-01-01

    We have used stratigraphic and historic records of debris flows to estimate mean recurrence intervals of past debris-flow events on 19 fans along the Interstate 70 highway corridor in the Front Range of Colorado. Estimated mean recurrence intervals were used in the Poisson probability model to estimate the probability of future debris-flow events on the fans. Mean recurrence intervals range from 7 to about 2900 years. Annual probabilities range from less than 0.1% to about 13%. A regression analysis of mean recurrence interval data and drainage-basin morphometry yields a regression model that may be suitable to estimate mean recurrence intervals on fans with no stratigraphic or historic records. Additional work is needed to verify this model. © 2003 Millpress.
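
    The Poisson model referred to above gives the probability of at least one debris flow in t years from a mean recurrence interval RI as 1 - exp(-t/RI). The sketch below recovers the endpoints of the reported annual-probability range from the reported recurrence intervals.

      import math

      def poisson_prob(recurrence_interval_yr, t_yr=1.0):
          """Probability of at least one event in t_yr years."""
          return 1.0 - math.exp(-t_yr / recurrence_interval_yr)

      print(poisson_prob(7.0))     # ~0.133 -> about 13% annual probability
      print(poisson_prob(2900.0))  # ~0.00034 -> well under 0.1%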

  1. Developing a Methodology for Eliciting Subjective Probability Estimates During Expert Evaluations of Safety Interventions: Application for Bayesian Belief Networks

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.

    2005-01-01

    The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potentially hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested. One such method is to elicit expert judgments to develop Bayesian Belief Networks (BBNs) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBNs. The work involved joint efforts between Professor James Luxhoj of Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBNs was funded by NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The present project was funded separately but supported the existing Rutgers program.

  2. Converged three-dimensional quantum mechanical reaction probabilities for the F + H2 reaction on a potential energy surface with realistic entrance and exit channels and comparisons to results for three other surfaces

    NASA Technical Reports Server (NTRS)

    Lynch, Gillian C.; Halvick, Philippe; Zhao, Meishan; Truhlar, Donald G.; Yu, Chin-Hui; Kouri, Donald J.; Schwenke, David W.

    1991-01-01

    Accurate three-dimensional quantum mechanical reaction probabilities are presented for the reaction F + H2 yields HF + H on the new global potential energy surface 5SEC for total angular momentum J = 0 over a range of translational energies from 0.15 to 4.6 kcal/mol. It is found that the v-prime = 3 HF vibrational product state has a threshold as low as that for v-prime = 2.

  3. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach < probability obtained with the gradient stochastic approach ≤ probability predicted by Davis and Stoll < probability predicted by Martin et al. The differences are explained by the positive bias of the Martin equation and the lower average resolution observed for the isocratic simulations compared to the gradient simulations with the same peak capacity. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., P(HPLC) = 31.2% versus P(SE-LC) = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported, with <1.5% error for saturation factors <0.20. Additional applications for the stochastic approach include isothermal and programmed-temperature gas chromatography.
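
    A minimal Monte Carlo version of the stochastic approach is easy to state for the constant-peak-width ("gradient") case: drop m components uniformly at random into the separation window and count a trial as successful when every adjacent pair lies at least one peak-capacity unit (1/nc) apart. This success criterion and normalization are simplifying assumptions, not the paper's exact simulation.

      import numpy as np

      def p_success(m, nc, trials=20000, rng=np.random.default_rng(0)):
          """Probability that m random components are all mutually resolved."""
          hits = 0
          for _ in range(trials):
              t = np.sort(rng.uniform(0.0, 1.0, m))
              if np.all(np.diff(t) >= 1.0 / nc):
                  hits += 1
          return hits / trials

      print(p_success(12, 100))   # hypothetical: 12 components, peak capacity 100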

  4. Estimating present day extreme water level exceedance probabilities around the coastline of Australia: tides, extra-tropical storm surges and mean sea level

    NASA Astrophysics Data System (ADS)

    Haigh, Ivan D.; Wijeratne, E. M. S.; MacPherson, Leigh R.; Pattiaratchi, Charitha B.; Mason, Matthew S.; Crompton, Ryan P.; George, Steve

    2014-01-01

    The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. Therefore it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures the risk of catastrophic structural failures due to under-design or expensive wastes due to over-design are minimised. This paper estimates for the first time present day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth averaged hydrodynamic model has been configured for the Australian continental shelf region and has been forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each numeric coastal grid point, extreme value distributions have been fitted to the derived time series of annual maxima and the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia; a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing used only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here and more accurately include tropical cyclone-induced surges in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical applications.

  5. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
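
    The core computation here is an integral of the error ellipse's bivariate Gaussian density over a disk centered on the point of interest. The sketch below does this numerically for an ellipse whose axes are assumed aligned with x/y; the ellipse parameters, radius, and alignment are illustrative, not values from the paper.

      import numpy as np

      def prob_within_radius(mu, sigma, point, r, n=400):
          """Integrate an axis-aligned bivariate normal over a disk of radius r."""
          gx, gy = np.meshgrid(np.linspace(point[0] - r, point[0] + r, n),
                               np.linspace(point[1] - r, point[1] + r, n))
          inside = (gx - point[0]) ** 2 + (gy - point[1]) ** 2 <= r ** 2
          pdf = (np.exp(-0.5 * (((gx - mu[0]) / sigma[0]) ** 2 +
                                ((gy - mu[1]) / sigma[1]) ** 2))
                 / (2 * np.pi * sigma[0] * sigma[1]))
          cell = (2.0 * r / (n - 1)) ** 2
          return float(np.sum(pdf[inside]) * cell)

      # Hypothetical case: stroke most likely 300 m east of the facility,
      # 200 m standard errors, probability of being within 500 m:
      print(prob_within_radius(mu=(300.0, 0.0), sigma=(200.0, 200.0),
                               point=(0.0, 0.0), r=500.0))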

  6. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud to ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even with the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  7. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.

  8. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies.

    PubMed

    Austin, Peter C; Stuart, Elizabeth A

    2015-12-10

    The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data. PMID:26238958
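
    One of the quantitative balance diagnostics described above, the weighted standardized difference, is compact enough to sketch. The simulated data and toy logistic propensity below are illustrative, not from the paper.

      import numpy as np

      def weighted_mean_var(x, w):
          m = np.average(x, weights=w)
          return m, np.average((x - m) ** 2, weights=w)

      def weighted_std_diff(x, treated, w):
          """Weighted standardized difference of covariate x between groups."""
          m1, v1 = weighted_mean_var(x[treated], w[treated])
          m0, v0 = weighted_mean_var(x[~treated], w[~treated])
          return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)

      rng = np.random.default_rng(42)
      age = rng.normal(60, 10, 2000)
      p = 1.0 / (1.0 + np.exp(-(age - 60) / 10.0))     # toy true propensity
      treated = rng.uniform(size=2000) < p
      w = np.where(treated, 1.0 / p, 1.0 / (1.0 - p))  # IPTW weights
      print(weighted_std_diff(age, treated, np.ones(2000)))  # raw imbalance
      print(weighted_std_diff(age, treated, w))              # near zero if balanced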

  9. Hate Crimes and Stigma-Related Experiences among Sexual Minority Adults in the United States: Prevalence Estimates from a National Probability Sample

    ERIC Educational Resources Information Center

    Herek, Gregory M.

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or…

  10. A comparison of conventional capture versus PIT reader techniques for estimating survival and capture probabilities of big brown bats (Eptesicus fuscus)

    USGS Publications Warehouse

    Ellison, L.E.; O'Shea, T.J.; Neubaum, D.J.; Neubaum, M.A.; Pearce, R.D.; Bowen, R.A.

    2007-01-01

    We compared conventional capture (primarily mist nets and harp traps) and passive integrated transponder (PIT) tagging techniques for estimating capture and survival probabilities of big brown bats (Eptesicus fuscus) roosting in buildings in Fort Collins, Colorado. A total of 987 female adult and juvenile bats were captured and marked by subdermal injection of PIT tags during the summers of 2001-2005 at five maternity colonies in buildings. Openings to roosts were equipped with PIT hoop-style readers, and exit and entry of bats were passively monitored on a daily basis throughout the summers of 2002-2005. PIT readers 'recaptured' adult and juvenile females more often than conventional capture events at each roost. Estimates of annual capture probabilities for all five colonies were on average twice as high when estimated from PIT reader data (P̂ = 0.93-1.00) as when derived from conventional techniques (P̂ = 0.26-0.66), and as a consequence annual survival estimates were more precisely estimated when using PIT reader encounters. Short-term, daily capture estimates were also higher using PIT readers than conventional captures. We discuss the advantages and limitations of using PIT tags and passive encounters with hoop readers vs. conventional capture techniques for estimating these vital parameters in big brown bats. © Museum and Institute of Zoology PAS.

  11. Estimation of the age-specific per-contact probability of Ebola virus transmission in Liberia using agent-based simulations

    NASA Astrophysics Data System (ADS)

    Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios

    2016-06-01

    Based on multiscale agent-based computations, we estimated the per-contact probability of transmission by age of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. For the approximation of the epidemic dynamics, we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.

  12. What makes a message real? The effects of perceived realism of alcohol- and drug-related messages on personal probability estimation.

    PubMed

    Cho, Hyunyi; Shen, Lijiang; Wilson, Kari M

    2013-03-01

    Perceived lack of realism in alcohol advertising messages promising positive outcomes and in antialcohol and antidrug messages portraying negative outcomes of alcohol consumption has been a cause for public health concern. This study examined the effects of perceived realism dimensions on personal probability estimation through identification and message minimization. Data collected from college students in the U.S. Midwest in 2010 (N = 315) were analyzed with multilevel structural equation modeling. Plausibility and narrative consistency mitigated message minimization, but they did not influence identification. Factuality and perceptual quality influenced both message minimization and identification, but their effects were smaller than those of typicality. Typicality was the strongest predictor of probability estimation. Implications of the results and suggestions for future research are provided.

  13. Should Coulomb stress change calculations be used to forecast aftershocks and to influence earthquake probability estimates? (Invited)

    NASA Astrophysics Data System (ADS)

    Parsons, T.

    2009-12-01

    After a large earthquake, our concern immediately moves to the likelihood that another large shock could be triggered, threatening an already weakened building stock. A key question is whether it is best to map out Coulomb stress change calculations shortly after mainshocks to potentially highlight the most likely aftershock locations, or whether it is more prudent to wait until the best information is available. It has been shown repeatedly that spatial aftershock patterns can be matched with Coulomb stress change calculations a year or more after mainshocks. However, with the onset of rapid source slip model determinations, the method has produced encouraging results, like the M=8.7 earthquake that was forecast using stress change calculations from the 2004 great Sumatra earthquake by McCloskey et al. [2005]. Here, I look back at two additional prospective calculations published shortly after the 2005 M=7.6 Kashmir and 2008 M=8.0 Wenchuan earthquakes. With the benefit of 1.5-4 years of additional seismicity, it is possible to assess the performance of rapid Coulomb stress change calculations. In the second part of the talk, within the context of the ongoing Working Group on California Earthquake Probabilities (WGCEP) assessments, uncertainties associated with time-dependent probability calculations are convolved with uncertainties inherent to Coulomb stress change calculations to assess the strength of signal necessary for a physics-based calculation to merit consideration into a formal earthquake forecast. Conclusions are as follows: (1) subsequent aftershock occurrence shows that prospective static stress change calculations both for Kashmir and Wenchuan examples failed to adequately predict the spatial post-mainshock earthquake distributions. (2) For a San Andreas fault example with relatively well-understood recurrence, a static stress change on the order of 30 to 40 times the annual stressing rate would be required to cause a significant (90%) perturbation to the

  14. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  15. Estimation of the detection probability for Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) with a passive acoustic method.

    PubMed

    Akamatsu, T; Wang, D; Wang, K; Li, S; Dong, S; Zhao, X; Barlow, J; Stewart, B S; Richlen, M

    2008-06-01

    Yangtze finless porpoises were surveyed by using simultaneous visual and acoustic methods from 6 November to 13 December 2006. Two research vessels towed stereo acoustic data loggers, which were used to store the intensity and sound source direction of the high-frequency sonar signals produced by finless porpoises at detection ranges up to 300 m on each side of the vessel. Simple stereo beam forming allowed the separation of distinct biosonar sound sources, which enabled us to count the number of vocalizing porpoises. Acoustically, 204 porpoises were detected from one vessel and 199 from the other vessel in the same section of the Yangtze River. Visually, 163 and 162 porpoises were detected from the two vessels within 300 m of the vessel track. The calculated detection probability using the acoustic method was approximately twice that for visual detection for each vessel. The difference in detection probabilities between the two methods was caused by the large number of single individuals that were missed by visual observers. However, the sizes of large groups were underestimated by the acoustic methods. Acoustic and visual observations complemented each other in the accurate detection of porpoises. The use of simple, relatively inexpensive acoustic monitoring systems should enhance population surveys of free-ranging, echolocating odontocetes.

  16. Efficiency of using correlation function for estimation of probability of substance detection on the base of THz spectral dynamics

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.

    2012-10-01

    One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detection and identification of explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more exactly, the spectral norm) for this purpose. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the averaged value of the THz signal over the analysis interval, which removes the constant component from this part of the signal and thereby increases the contrast of the assessment. For obtaining the spectral-line dynamics, we compare the application of the Fourier-Gabor transform with an unbounded (for example, Gaussian) window that slides along the signal against the application of the Fourier transform in a short time interval (FTST), in which the Fourier transform is applied to successive parts of the signal. These methods are close to each other; nevertheless, they differ in the series of frequencies they use. It is important for practice that the optimal window shape depends on the method chosen for obtaining the spectral dynamics. The probability of detection is enhanced if we can find a train of pulses with different frequencies that follow one another sequentially. We show that it is possible to obtain clean spectral-line dynamics even when the spectrum of the substance's response to the THz pulse is distorted.

  17. Estimated probabilities, volumes, and inundation area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall totals and intensities for the 5- and 25-year-recurrence, 1-hour-duration rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for the 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for the 25-year-recurrence, 1-hour-duration rainfall.

  18. Positive random variables with a discrete probability mass at the origin: Parameter estimation for left-censored samples with application to air quality monitoring data

    SciTech Connect

    Gogolak, C.V.

    1986-11-01

    The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored ⁸⁵Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities.
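
    A sketch of the likelihood setup described above: a point mass p0 at zero mixed with a lognormal for positive concentrations, observed with left-censoring at a detection limit L. Each censored observation contributes p0 + (1 - p0)·F(L) to the likelihood, and each detected value contributes the lognormal density. The data are simulated and the optimizer choice is incidental.

      import numpy as np
      from scipy import optimize, stats

      def neg_loglik(theta, detected, n_censored, L):
          p0, mu, sigma = theta
          if not (0 < p0 < 1 and sigma > 0):
              return np.inf
          cens = p0 + (1 - p0) * stats.norm.cdf((np.log(L) - mu) / sigma)
          ll = n_censored * np.log(cens)
          # lognormal log-density: logphi((ln x - mu)/sigma) - ln(sigma * x)
          ll += np.sum(np.log(1 - p0)
                       + stats.norm.logpdf((np.log(detected) - mu) / sigma)
                       - np.log(sigma * detected))
          return -ll

      rng = np.random.default_rng(0)
      x = np.where(rng.uniform(size=500) < 0.3, 0.0, rng.lognormal(1.0, 0.8, 500))
      L = 2.0                                   # hypothetical detection limit
      res = optimize.minimize(neg_loglik, x0=(0.5, 0.0, 1.0),
                              args=(x[x >= L], int(np.sum(x < L)), L),
                              method="Nelder-Mead")
      print(res.x)   # estimates of (p0, mu, sigma)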

  19. Estimation of phase signal change in neuronal current MRI for evoked response of tactile detection with realistic somatosensory laminar network model.

    PubMed

    BagheriMofidi, Seyed Mehdi; Pouladian, Majid; Jameie, Seyed Behnamedin; Abbaspour Tehrani-Fard, Ali

    2016-09-01

    Magnetic fields generated by neuronal activity could alter magnetic resonance imaging (MRI) signals, but the detectability of such signals is under debate. Previous research suggested that the magnitude signal change is below currently detectable levels, but the phase signal change (PSC) may be measurable with current MRI systems. Optimal imaging parameters such as echo time, voxel size, and external field direction could increase the probability of detecting this small signal change. We simulated a voxel of a cortical column to determine the effect of such parameters on the PSC signal. We extended a laminar network model of the somatosensory cortex to find the neuronal current in each segment of the pyramidal neurons (PNs). Then 60,000 PNs of the simulated network were positioned randomly in a voxel, and the Biot-Savart law was applied to calculate the neuronal magnetic field and the additional phase. The procedure was repeated for eleven neuronal arrangements in the voxel, and the variation of the PSC signal with echo time and voxel size was assessed. The simulated results show that the PSC signal increases with echo time, especially 100/80 ms after the stimulus for the gradient-echo/spin-echo sequence. It can be up to 0.1 mrad for echo time = 175 ms and voxel size = 1.48 × 1.48 × 2.18 mm³. For echo times less than 25 ms after the stimulus, only the effects of physiological noise on the PSC signal were observed. The absolute value of the signal increased as voxel size decreased, but its components varied in a complex manner. An external field orthogonal to the local cortical surface maximizes the signal. The expected PSC signal for tactile detection in the somatosensory cortex increases with echo time and shows no oscillation.
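
    The scale of the quoted 0.1 mrad signal can be checked with the elementary phase relation phi = gamma * dB * TE for a constant field offset dB over echo time TE (gamma is the proton gyromagnetic ratio). This back-of-envelope check, with an assumed offset of a few picotesla, stands in for the paper's far more detailed laminar simulation.

      GAMMA = 2.675e8            # proton gyromagnetic ratio, rad s^-1 T^-1

      def phase_mrad(dB_tesla, te_s):
          """Phase (mrad) accrued over echo time te_s in a constant offset."""
          return GAMMA * dB_tesla * te_s * 1e3

      # Field offset needed for ~0.1 mrad at TE = 175 ms:
      print(0.1e-3 / (GAMMA * 0.175))   # ~2e-12 T, i.e. a few picotesla
      print(phase_mrad(2.1e-12, 0.175)) # ~0.098 mrad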

  1. Symmetry, probability, and recognition in face space.

    PubMed

    Sirovich, Lawrence; Meytlis, Marsha

    2009-04-28

    The essential midline symmetry of human faces is shown to play a key role in facial coding and recognition. This also has deep and important connections with recent explorations of the organization of primate cortex, as well as human psychophysical experiments. Evidence is presented that the dimension of face recognition space for human faces is dramatically lower than previous estimates. One result of the present development is the construction of a probability distribution in face space that produces an interesting and realistic range of (synthetic) faces. Another is a recognition algorithm that by reasonable criteria is nearly 100% accurate.

  3. Estimation of the Probability of Radiation Failures and Single Particle Upsets of Integrated Microcircuits onboard the Fobos-Grunt Spacecraft

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. V.; Popov, V. D.; Khamidullina, N. M.

    2005-05-01

    When designing the radio-electronic equipment for long-term operation in a space environment, one of the most important problems is a correct estimation of the radiation stability of its electric and radio components (ERC) against radiation-stimulated dose failures and single-particle effects (upsets). These problems are solved in this paper for the integrated microcircuits (IMC) of various types that are to be installed onboard the Fobos-Grunt spacecraft designed at the Federal State Unitary Enterprise “Lavochkin Research and Production Association.” The launching of this spacecraft is planned for 2009.

  4. Aftershocks Prediction In Italy: Estimation of Time-magnitude Distribution Model Parameters and Computation of Probabilities of Occurrence.

    NASA Astrophysics Data System (ADS)

    Lolli, B.; Gasperini, P.

    We analyzed the available instrumental catalogs of Italian earthquakes from 1960 to 1996 to compute the parameters of the time-magnitude distribution model proposed by Reasenberg and Jones (1989, 1994) and currently used to make aftershock predictions in California. We found that empirical corrections ranging from 0.3 (before 1976) to 0.5 magnitude units (between 1976 and 1980) are necessary to make the dataset homogeneous over the entire period. The estimated model parameters are quite stable with respect to mainshock magnitude and sequence detection algorithm, while their spatial variations suggest that regional estimates might predict the behavior of future sequences better than those computed from the whole Italian dataset. In order to improve the goodness of fit for sequences including multiple mainshocks (like the one that occurred in Central Italy from September 1997 to May 1998), we developed a quasi-epidemic model (QETAS) consisting of the superposition of a small number of Omori processes originated by strong aftershocks. We found that the inclusion in the QETAS model of the shocks with magnitude larger than the mainshock magnitude minus one (which are usually located and sized in near real-time by the observatories) significantly improves the algorithm's ability to predict sequence behavior.
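
    For reference, the Reasenberg and Jones (1989) model mentioned above gives the aftershock rate lambda(t, M >= m) = 10**(a + b*(Mm - m)) * (t + c)**(-p) for a mainshock of magnitude Mm, and a Poisson assumption converts the integrated rate into a probability of occurrence. The parameter values in the sketch are generic illustrations, not the Italian estimates.

      import math

      def rj_rate(t, m, Mm, a=-1.67, b=0.91, c=0.05, p=1.08):
          """Aftershock rate (events/day) of magnitude >= m at t days."""
          return 10 ** (a + b * (Mm - m)) * (t + c) ** (-p)

      def rj_prob(t1, t2, m, Mm, steps=10000):
          """Probability of >= 1 aftershock of magnitude >= m in [t1, t2]."""
          dt = (t2 - t1) / steps
          n = sum(rj_rate(t1 + (i + 0.5) * dt, m, Mm) * dt for i in range(steps))
          return 1.0 - math.exp(-n)

      # e.g. probability of an M>=5 aftershock in days 0-7 after an M=6.5 mainshock:
      print(rj_prob(0.0, 7.0, 5.0, 6.5))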

  5. A Review of Mycotoxins in Food and Feed Products in Portugal and Estimation of Probable Daily Intakes.

    PubMed

    Abrunhosa, Luís; Morales, Héctor; Soares, Célia; Calado, Thalita; Vila-Chã, Ana Sofia; Pereira, Martinha; Venâncio, Armando

    2016-01-01

    Mycotoxins are toxic secondary metabolites produced by filamentous fungi that occur naturally in agricultural commodities worldwide. Aflatoxins, ochratoxin A, patulin, fumonisins, zearalenone, trichothecenes, and ergot alkaloids are presently the most important for food and feed safety. These compounds are produced by several species that belong to the Aspergillus, Penicillium, Fusarium, and Claviceps genera and can be carcinogenic, mutagenic, teratogenic, cytotoxic, neurotoxic, nephrotoxic, estrogenic, and immunosuppressant. Human and animal exposure to mycotoxins is generally assessed by taking into account data on the occurrence of mycotoxins in food and feed as well as data on the consumption patterns of the concerned population. This evaluation is crucial to support measures to reduce consumer exposure to mycotoxins. This work reviews the occurrence and levels of mycotoxins in Portuguese food and feed to provide a global overview of this issue in Portugal. With the information collected, the exposure of the Portuguese population to those mycotoxins is assessed, and the estimated dietary intakes are presented.
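
    Exposure assessments of this kind rest on simple intake arithmetic: estimated daily intake = occurrence level × daily consumption / body weight. The sketch below shows the calculation with invented numbers, not values from the Portuguese surveys.

      def probable_daily_intake(conc_ng_per_g, intake_g_per_day, bw_kg=70.0):
          """Probable daily intake in ng per kg body weight per day."""
          return conc_ng_per_g * intake_g_per_day / bw_kg

      # e.g. 0.5 ng/g ochratoxin A in bread, 100 g bread/day, 70 kg adult:
      print(probable_daily_intake(0.5, 100.0))  # ~0.71 ng/kg bw/day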

  6. Induced Probabilities.

    ERIC Educational Resources Information Center

    Neel, John H.

    Induced probabilities have been largely ignored by educational researchers. Simply stated, if a new or random variable is defined in terms of a first random variable, then induced probability is the probability or density of the new random variable that can be found by summation or integration over the appropriate domains of the original random…

  7. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has "hung up." That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up." The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-ups case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e., not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  9. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Luna-Glob Russian mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of the shaded area was measured in the images acquired with the LROC NAC camera with a resolution of ~0.5 m. Due to the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images; because of this, the method described has been suggested. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and these values were converted to slope-occurrence values (in this case, at the 3.5-m baseline) by calibration against the surface characteristics of the Lunokhod-1 study area. For that area, a digital terrain model of ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the results of the measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at the baseline of 3.5 m is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity exists for a region of interest but high-resolution images taken at different Sun elevations are available.
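
    The idea can be illustrated with a toy version: at sun elevation e, slopes steeper than roughly e (facing away from the Sun) cast shadows, so shaded-area fractions measured at several elevations trace out a cumulative slope distribution. The sketch below interpolates invented measurements and omits the Lunokhod-1 calibration step the authors apply.

      import numpy as np

      sun_elev = np.array([4.0, 10.0, 15.0, 19.0])     # degrees (invented)
      shaded = np.array([0.50, 0.096, 0.057, 0.039])   # shaded area fraction (invented)

      def frac_steeper_than(theta_deg):
          """Approximate fraction of terrain steeper than theta_deg."""
          return float(np.interp(theta_deg, sun_elev, shaded))

      print(1.0 - frac_steeper_than(10.0))   # ~0.90: slopes gentler than 10 deg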

  10. Random sampler M-estimator algorithm with sequential probability ratio test for robust function approximation via feed-forward neural networks.

    PubMed

    El-Melegy, Moumen T

    2013-07-01

    This paper addresses the problem of fitting a functional model to data corrupted with outliers using a multilayered feed-forward neural network. Although it is of high importance in practical applications, this problem has not received careful attention from the neural network research community. One recent approach to solving this problem is to use a neural network training algorithm based on the random sample consensus (RANSAC) framework. This paper proposes a new algorithm that offers two enhancements over the original RANSAC algorithm. The first one improves the algorithm accuracy and robustness by employing an M-estimator cost function to decide on the best estimated model from the randomly selected samples. The other one improves the time performance of the algorithm by utilizing a statistical pretest based on Wald's sequential probability ratio test. The proposed algorithm is successfully evaluated on synthetic and real data, contaminated with varying degrees of outliers, and compared with existing neural network training algorithms.
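
    A compressed sketch of the two enhancements named above, applied to a line-fitting problem for brevity (the paper trains a feed-forward network): candidate models from random minimal samples are scored with a robust M-estimator cost rather than RANSAC's inlier count. The Huber cost and the omission of the SPRT pretest are simplifications.

      import numpy as np

      def huber_cost(r, k=1.345):
          """Huber M-estimator cost of a residual vector."""
          a = np.abs(r)
          return np.where(a <= k, 0.5 * r ** 2, k * a - 0.5 * k ** 2).sum()

      def ransac_m_estimator(x, y, iters=200, rng=np.random.default_rng(0)):
          best, best_cost = None, np.inf
          for _ in range(iters):
              i = rng.choice(len(x), size=2, replace=False)   # minimal sample
              A = np.column_stack([x[i], np.ones(2)])
              try:
                  coef = np.linalg.solve(A, y[i])             # slope, intercept
              except np.linalg.LinAlgError:
                  continue
              cost = huber_cost(y - (coef[0] * x + coef[1]))  # robust model score
              if cost < best_cost:
                  best, best_cost = coef, cost
          return best

      x = np.linspace(0, 10, 100)
      y = 2.0 * x + 1.0 + np.random.default_rng(1).normal(0, 0.2, 100)
      y[::10] += 20.0                     # inject gross outliers
      print(ransac_m_estimator(x, y))     # close to slope 2, intercept 1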

  11. Estimating the probability distribution of the incubation period for rabies using data from the 1948-1954 rabies epidemic in Tokyo.

    PubMed

    Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K

    2016-01-01

    Data from 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma, and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to the different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days) and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution, with a mean of 27.30 (95% CI: 23.46-31.55) days and a standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs.
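
    The fitting step can be sketched directly: a maximum-likelihood lognormal fit plus the AIC used for model comparison. The 98-case data set is not reproduced here; the values below are simulated to match the reported mean (~27 d) and standard deviation (~20 d).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # lognormal with mean ~27.3 d, sd ~20.2 d -> sigma ~0.66, mu ~3.09
      inc = rng.lognormal(mean=3.09, sigma=0.66, size=98)

      shape, loc, scale = stats.lognorm.fit(inc, floc=0)   # fix location at 0
      ll = np.sum(stats.lognorm.logpdf(inc, shape, loc, scale))
      print("mean:", stats.lognorm.mean(shape, loc, scale))
      print("AIC :", 2 * 2 - 2 * ll)   # two free parameters with floc=0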

  12. Estimation of markov chain transition probabilities and rates from fully and partially observed data: uncertainty propagation, evidence synthesis, and model calibration.

    PubMed

    Welton, Nicky J; Ades, A E

    2005-01-01

    Markov transition models are frequently used to model disease progression. The authors show how the solution to Kolmogorov's forward equations can be exploited to map between transition rates and probabilities from probability data in multistate models. They provide a uniform, Bayesian treatment of estimation and propagation of uncertainty of transition rates and probabilities when 1) observations are available on all transitions and exact time at risk in each state (fully observed data) and 2) observations are on initial state and final state after a fixed interval of time but not on the sequence of transitions (partially observed data). The authors show how underlying transition rates can be recovered from partially observed data using Markov chain Monte Carlo methods in WinBUGS, and they suggest diagnostics to investigate inconsistencies between evidence from different starting states. An illustrative example for a 3-state model is given, which shows how the methods extend to more complex Markov models using the software WBDiff to compute solutions. Finally, the authors illustrate how to statistically combine data from multiple sources, including partially observed data at several follow-up times and also how to calibrate a Markov model to be consistent with data from one specific study. PMID:16282214
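
    The central mapping the authors exploit is that, for a constant transition-rate matrix Q, Kolmogorov's forward equations give the transition-probability matrix over an interval t as P(t) = expm(Qt). The 3-state rate matrix in the sketch is illustrative, not from the paper.

      import numpy as np
      from scipy.linalg import expm

      Q = np.array([[-0.20,  0.15, 0.05],   # well -> (well, ill, dead)
                    [ 0.00, -0.30, 0.30],   # ill  -> (well, ill, dead)
                    [ 0.00,  0.00, 0.00]])  # dead is absorbing; rows sum to 0
      P1 = expm(Q * 1.0)                    # one-cycle transition probabilities
      print(P1)
      print(P1.sum(axis=1))                 # rows sum to 1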

  13. Multi-scale occupancy approach to estimate Toxoplasma gondii prevalence and detection probability in tissues: an application and guide for field sampling.

    PubMed

    Elmore, Stacey A; Huyvaert, Kathryn P; Bailey, Larissa L; Iqbal, Asma; Su, Chunlei; Dixon, Brent R; Alisauskas, Ray T; Gajadhar, Alvin A; Jenkins, Emily J

    2016-08-01

    Increasingly, birds are recognised as important hosts for the ubiquitous parasite Toxoplasma gondii, although little experimental evidence exists to determine which tissues should be tested to maximise the detection probability of T. gondii. Also, Arctic-nesting geese are suspected to be important sources of T. gondii in terrestrial Arctic ecosystems, but the parasite has not previously been reported in the tissues of these geese. Using a domestic goose model, we applied a multi-scale occupancy framework to demonstrate that the probability of detection of T. gondii was highest in the brain (0.689, 95% confidence interval=0.486, 0.839) and the heart (0.809, 95% confidence interval=0.693, 0.888). Inoculated geese had an estimated T. gondii infection probability of 0.849, (95% confidence interval=0.643, 0.946), highlighting uncertainty in the system, even under experimental conditions. Guided by these results, we tested the brains and hearts of wild Ross's Geese (Chen rossii, n=50) and Lesser Snow Geese (Chen caerulescens, n=50) from Karrak Lake, Nunavut, Canada. We detected 51 suspected positive tissue samples from 33 wild geese using real-time PCR with melt-curve analysis. The wild goose prevalence estimates generated by our multi-scale occupancy analysis were higher than the naïve estimates of prevalence, indicating that multiple PCR repetitions on the same organs and testing more than one organ could improve T. gondii detection. Genetic characterisation revealed Type III T. gondii alleles in six wild geese and Sarcocystis spp. in 25 samples. Our study demonstrates that Arctic nesting geese are capable of harbouring T. gondii in their tissues and could transport the parasite from their southern overwintering grounds into the Arctic region. We demonstrate how a multi-scale occupancy framework can be used in a domestic animal model to guide resource-limited sample collection and tissue analysis in wildlife. Secondly, we confirm the value of traditional occupancy in

  15. Using probability-based spatial estimation of the river pollution index to assess urban water recreational quality in the Tamsui River watershed.

    PubMed

    Jang, Cheng-Shin

    2016-01-01

    The Tamsui River watershed, situated in northern Taiwan, provides a variety of water recreational opportunities such as riverbank park activities, fishing, cruising, rowing, sailing, and swimming. However, river water quality strongly affects water recreational quality, and the health of recreationists who are partially or fully exposed to polluted river water may be jeopardized. A river pollution index (RPI) composed of dissolved oxygen, biochemical oxygen demand, suspended solids, and ammonia nitrogen is typically used to gauge river water quality and regulate water body use in Taiwan. The purpose of this study was to probabilistically determine the RPI categories in the Tamsui River watershed and to assess urban water recreational quality on the basis of the estimated RPI categories. First, according to the various RPI categories, one-dimensional indicator kriging (IK) was adopted to estimate the occurrence probabilities of the RPI categories. The maximum occurrence probability among the categories was then employed to determine the most suitable RPI category. Finally, the most serious categories and seasonal variations of RPI were used to evaluate the quality of current water recreational opportunities in the Tamsui River watershed. The results revealed that the midstream and downstream sections of the Tamsui River and its tributaries, with poor river water quality, afford low water recreational quality, and water recreationists should avoid full or partial exposure to these bodies of water. However, the upstream sections of the Tamsui River watershed, with high river water quality, are suitable for all water recreational activities. PMID:26676412
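
    The decision rule described here — krige indicator-coded data to obtain per-category occurrence probabilities, then assign each location the category with the maximum probability — reduces to an argmax once the IK step is done. A small sketch with made-up probabilities and category labels (the kriging step itself is omitted):

      import numpy as np

      # Hypothetical IK output: estimated occurrence probability of each
      # RPI category at 5 monitoring locations (rows sum to 1).
      prob = np.array([[0.70, 0.20, 0.10],
                       [0.15, 0.55, 0.30],
                       [0.05, 0.25, 0.70],
                       [0.40, 0.40, 0.20],
                       [0.10, 0.30, 0.60]])

      categories = np.array(["not/slightly", "moderately", "severely"])
      # Decision rule from the abstract: take the category with the
      # maximum estimated occurrence probability at each location.
      assigned = categories[prob.argmax(axis=1)]
      print(assigned)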

  16. Estimating reach-specific fish movement probabilities in rivers with a Bayesian state-space model: application to sea lamprey passage and capture at dams

    USGS Publications Warehouse

    Holbrook, Christopher M.; Johnson, Nicholas S.; Steibel, Juan P.; Twohey, Michael B.; Binder, Thomas R.; Krueger, Charles C.; Jones, Michael L.

    2014-01-01

    Improved methods are needed to evaluate barriers and traps for control and assessment of invasive sea lamprey (Petromyzon marinus) in the Great Lakes. A Bayesian state-space model provided reach-specific probabilities of movement, including trap capture and dam passage, for 148 acoustic tagged invasive sea lamprey in the lower Cheboygan River, Michigan, a tributary to Lake Huron. Reach-specific movement probabilities were combined to obtain estimates of spatial distribution and abundance needed to evaluate a barrier and trap complex for sea lamprey control and assessment. Of an estimated 21,828–29,300 adult sea lampreys in the river, 0%–2%, or 0–514 untagged lampreys, could have passed upstream of the dam, and 46%–61% were caught in the trap. Although no tagged lampreys passed above the dam (0/148), our sample size was not sufficient to consider the lock and dam a complete barrier to sea lamprey. Results also showed that existing traps are in good locations because 83%–96% of the population was vulnerable to existing traps. However, only 52%–69% of lampreys vulnerable to traps were caught, suggesting that traps can be improved. The approach used in this study was a novel use of Bayesian state-space models that may have broader applications, including evaluation of barriers for other invasive species (e.g., Asian carp (Hypophthalmichthys spp.)) and fish passage structures for other diadromous fishes.

  18. A technique for estimating the probability of radiation-stimulated failures of integrated microcircuits in low-intensity radiation fields: Application to the Spektr-R spacecraft

    NASA Astrophysics Data System (ADS)

    Popov, V. D.; Khamidullina, N. M.

    2006-10-01

    In developing radio-electronic devices (RED) of spacecraft operating in the fields of ionizing radiation in space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation effects is the integrated microcircuit (IMC), especially at large-scale (LSI) and very-large-scale (VLSI) degrees of integration. The main characteristic of an IMC, which is taken into account when deciding whether to use a particular type of IMC in the onboard RED, is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from reliability characteristics, disregarding radiation effects. This paper presents the so-called “reliability” approach to determining the radiation tolerance of IMCs, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to RED onboard the Spektr-R spacecraft to be launched in 2007.

  19. Estimating the Transitional Probabilities of Smoking Stages with Cross-sectional Data and 10-Year Projection for Smoking Behavior in Iranian Adolescents

    PubMed Central

    Khosravi, Ahmad; Mansournia, Mohammad Ali; Mahmoodi, Mahmood; Pouyan, Ali Akbar; Holakouie-Naieni, Kourosh

    2016-01-01

    Background: Cigarette smoking is one of the most important health-related risk factors in terms of morbidity and mortality. In this study, we introduce a new method for deriving the transitional probabilities of smoking stages from a cross-sectional study and simulate long-term smoking behavior in adolescents. Methods: In this 2010 study, a total of 4853 high school students were randomly selected and completed a self-administered questionnaire about cigarette smoking. We used smoothed age- and sex-specific prevalences of smoking stages in a probabilistic discrete event system to estimate the transitional probabilities. A nonhomogeneous discrete-time Markov chain analysis was used to model the progression of smoking over the following 10 years in the same population. The mean age of the students was 15.69 ± 0.73 years (range: 14–19). Results: The smoothed prevalence proportion of current smoking varied between 3.58% and 26.14%. The age-adjusted odds of initiation in boys were 8.9 (95% confidence interval [CI]: 7.9–10.0) times the odds of initiation in girls. Our model predicted that the prevalence proportion of current smokers would increase from 7.55% in 2010 to 20.31% (95% CI: 19.44–21.37) in 2019. Conclusions: The present study showed a moderate but concerning prevalence of current smoking in Iranian adolescents and introduced a novel method for estimating transitional probabilities from a cross-sectional study. The increasing trend of cigarette use among adolescents indicates the necessity of paying more attention to this group. PMID:27625766
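
    To make the projection step concrete: once stage-to-stage transition probabilities are in hand, a 10-year forecast is repeated vector-matrix multiplication. The sketch below uses a single invented transition matrix and stage labels; in the paper's nonhomogeneous chain, a different age- and year-specific matrix would be applied at each step.

      import numpy as np

      # Hypothetical stage order: never, experimenter, current, former.
      # Illustrative one-year transition matrix (each row sums to 1);
      # the paper would supply year-specific matrices instead.
      P = np.array([[0.90, 0.07, 0.03, 0.00],
                    [0.00, 0.70, 0.25, 0.05],
                    [0.00, 0.00, 0.85, 0.15],
                    [0.00, 0.00, 0.10, 0.90]])

      state = np.array([0.80, 0.12, 0.075, 0.005])  # illustrative 2010 prevalence
      for year in range(2010, 2020):
          print(year, state.round(3))
          state = state @ P        # one-step Markov update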

  1. A computer procedure to analyze seismic data to estimate outcome probabilities in oil exploration, with an initial application in the tabasco region of southeastern Mexico

    NASA Astrophysics Data System (ADS)

    Berlanga, Juan M.; Harbaugh, John W.

    the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e. its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available, and which serves as a statistical "training area", with the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.

  2. Updating realistic access.

    PubMed

    Rossner, Mike

    2010-05-01

    Nearly six years ago Ira Mellman, then Editor-in-Chief of the JCB, published an editorial entitled "Providing realistic access" (1). It described the Journal's efforts to reconcile its subscription-based business model with the goal of providing public access to scholarly journal content. Since then, developments in the public-access movement are bringing us closer to the ideal of universal public access. But will there still be a place for selective journals like the JCB when we achieve that objective? PMID:20375430

  3. A realistic lattice example

    SciTech Connect

    Courant, E.D.; Garren, A.A.

    1985-10-01

    A realistic, distributed interaction region (IR) lattice has been designed that includes new components discussed in the June 1985 lattice workshop. Unlike the test lattices, the lattice presented here includes utility straights and the mechanism for crossing the beams in the experimental straights. Moreover, both the phase trombones and the dispersion suppressors contain the same bending as the normal cells. Vertically separated beams and 6 Tesla, 1-in-1 magnets are assumed. Since the cells are 200 meters long, and have 60 degree phase advance, this lattice has been named RLD1, in analogy with the corresponding test lattice, TLD1. The quadrupole gradient is 136 tesla/meter in the cells, and has similar values in other quadrupoles except in those in the IRs, where the maximum gradient is 245 tesla/meter. RLD1 has distributed IRs; however, clustered realistic lattices can easily be assembled from the same components, as was recently done in a version that utilizes the same type of experimental and utility straights as those of RLD1.

  4. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
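
    Regional regression equations of this kind typically take the form log10(Q_AEP) = b0 + b1·log10(drainage area) + b2·(wetland percentage) + b3·(mean annual precipitation). The sketch below evaluates that general form with placeholder coefficients; the published Vermont coefficients differ for each AEP and should be taken from the report or StreamStats, so this is only a shape-of-the-calculation illustration.

      import math

      def flood_discharge(drainage_area_mi2, wetlands_pct, precip_in,
                          b0=2.0, b1=0.85, b2=-0.01, b3=0.02):
          """Evaluate a regional regression of the general form
          log10(Q) = b0 + b1*log10(DA) + b2*W + b3*P.
          All coefficients here are hypothetical placeholders,
          not the published Vermont values."""
          log_q = (b0 + b1 * math.log10(drainage_area_mi2)
                   + b2 * wetlands_pct + b3 * precip_in)
          return 10 ** log_q     # discharge in the equation's units

      print(flood_discharge(50.0, 5.0, 45.0))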

  5. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    -SISCOR Working Group. On the basis of this consensual logic tree, median probability of occurrences of M>=6 events were computed for the region of study. Time-dependent models (Brownian Passage time and Weibull probability distributions) were also explored. The probability of a M>=6.0 event is found to be greater in the western region compared to the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones), cover the median values estimated following the fault-based approach. On the contrary, the fault-based approach in this region is still affected by a high degree of uncertainty, because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.

  6. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have too broad and rounded peaks to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
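
    The same fit-then-test workflow is a few lines in modern tools. A sketch with synthetic inter-flash times standing in for the storm data (scipy's fit method performs maximum-likelihood estimation; kstest gives the Kolmogorov-Smirnov statistic):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      times = rng.lognormal(mean=2.0, sigma=0.8, size=200)  # stand-in data

      # Maximum-likelihood fits for the three candidate families,
      # each checked with the Kolmogorov-Smirnov statistic.
      for dist in (stats.gamma, stats.lognorm, stats.rayleigh):
          params = dist.fit(times)
          ks = stats.kstest(times, dist.cdf, args=params)
          print(dist.name, "KS D =", round(ks.statistic, 3))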

  7. Hate crimes and stigma-related experiences among sexual minority adults in the United States: prevalence estimates from a national probability sample.

    PubMed

    Herek, Gregory M

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed.

  8. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to be highly biased (+90%-110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
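
    Converting a reflectivity value to a rain rate under the power-law relationship is a one-line inversion, R = (Z/a)^(1/b), with Z recovered from dBZ as 10^(dBZ/10). A quick sketch using the WSR-88D default a = 300, b = 1.4 quoted above:

      def rain_rate(dbz, a=300.0, b=1.4):
          """Invert Z = a * R**b for rain rate R (mm/h), given dBZ."""
          z = 10.0 ** (dbz / 10.0)       # reflectivity in mm^6 m^-3
          return (z / a) ** (1.0 / b)

      for dbz in (20, 30, 36.5, 45):
          print(dbz, "dBZ ->", round(rain_rate(dbz), 2), "mm/h")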

  9. Estimated Probability of Post-Wildfire Debris-Flow Occurrence and Estimated Volume of Debris Flows from a Pre-Fire Analysis in the Three Lakes Watershed, Grand County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Bossong, Clifford R.; Litke, David W.; Viger, Roland J.; Rupert, Michael G.; Char, Stephen J.

    2008-01-01

    Debris flows pose substantial threats to life, property, infrastructure, and water resources. Post-wildfire debris flows may be of catastrophic proportions compared to debris flows occurring in unburned areas. During 2006, the U.S. Geological Survey (USGS), in cooperation with the Northern Colorado Water Conservancy District, initiated a pre-wildfire study to determine the potential for post-wildfire debris flows in the Three Lakes watershed, Grand County, Colorado. The objective was to estimate the probability of post-wildfire debris flows and to estimate the approximate volumes of debris flows from 109 subbasins in the Three Lakes watershed in order to provide the Northern Colorado Water Conservancy District with a relative measure of which subbasins might constitute the most serious debris flow hazards. This report describes the results of the study and provides estimated probabilities of debris-flow occurrence and the estimated volumes of debris flow that could be produced in 109 subbasins of the watershed under an assumed moderate- to high-burn severity of all forested areas. The estimates are needed because the Three Lakes watershed includes communities and substantial water-resources and water-supply infrastructure that are important to residents both east and west of the Continental Divide. Using information provided in this report, land and water-supply managers can consider where to concentrate pre-wildfire planning, pre-wildfire preparedness, and pre-wildfire mitigation in advance of wildfires. Also, in the event of a large wildfire, this information will help managers identify the watersheds with the greatest post-wildfire debris-flow hazards.

  10. A Unique Equation to Estimate Flash Points of Selected Pure Liquids Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to predict accurately the flash points of a variety of compounds, whatever their chemical groups (monofunctional and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicon compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning polyhalogenated compounds, for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed, and the fact that the equation provides unbiased predictions, lead to the conclusion that

  12. People's conditional probability judgments follow probability theory (plus noise).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-09-01

    A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities. PMID:27570097
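
    A toy simulation of the "probability theory plus noise" idea, under the simplifying assumption (a sketch, not the authors' exact formulation) that each remembered instance is misread with probability d. Individual estimates are biased toward 0.5, yet a combination such as P(A)+P(B)-P(A and B)-P(A or B), which probability theory says equals 0, remains unbiased because the noise terms cancel — the kind of noise-cancelling identity the abstract describes.

      import numpy as np

      rng = np.random.default_rng(7)
      d, n, trials = 0.1, 100, 10000       # noise rate, memory size, repeats

      a = rng.random(n) < 0.4              # event-A flags in memory
      b = rng.random(n) < 0.5              # event-B flags

      def noisy_mean(flags):
          # Each stored flag is misread (flipped) with probability d.
          flips = rng.random((trials, flags.size)) < d
          return (flags ^ flips).mean(axis=1)

      pa, pb = noisy_mean(a), noisy_mean(b)
      p_and, p_or = noisy_mean(a & b), noisy_mean(a | b)

      print("E[A] =", pa.mean(), "vs p(1-2d)+d =", a.mean() * (1 - 2 * d) + d)
      # Noise-cancelling identity: mean should be ~0 despite the noise.
      print("identity mean:", (pa + pb - p_and - p_or).mean())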

  13. Predicted probabilities' relationship to inclusion probabilities.

    PubMed

    Fang, Di; Chong, Jenny; Wilson, Jeffrey R

    2015-05-01

    It has been shown that under a general multiplicative intercept model for risk, case-control (retrospective) data can be analyzed by maximum likelihood as if they had arisen prospectively, up to an unknown multiplicative constant, which depends on the relative sampling fraction. (1) With suitable auxiliary information, retrospective data can also be used to estimate response probabilities. (2) In other words, predictive probabilities obtained without adjustments from retrospective data will likely be different from those obtained from prospective data. We highlighted this using binary data from Medicare to determine the probability of readmission into the hospital within 30 days of discharge, which is particularly timely because Medicare has begun penalizing hospitals for certain readmissions. (3).
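
    The intercept issue described here has a standard form: under case-control sampling, the logistic intercept absorbs log(f1/f0), the log ratio of the sampling fractions for responders and non-responders, so retrospectively estimated probabilities can be shifted back on the logit scale when those fractions are known. A hedged sketch of that correction (the sampling fractions below are illustrative, not Medicare's):

      import numpy as np

      def adjust_predicted_prob(p_retro, f_case, f_control):
          """Shift the logit of a retrospectively estimated probability by
          -log(f_case/f_control), the intercept offset induced by
          case-control sampling fractions (illustrative correction)."""
          logit = np.log(p_retro / (1 - p_retro)) - np.log(f_case / f_control)
          return 1 / (1 + np.exp(-logit))

      # e.g., all readmissions sampled (f=1.0), 10% of non-readmissions
      print(adjust_predicted_prob(np.array([0.3, 0.5, 0.7]), 1.0, 0.1))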

  14. Co-activation Probability Estimation (CoPE): An approach for modeling functional co-activation architecture based on neuroimaging coordinates.

    PubMed

    Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R; Liu, Yong; Yang, Yong; Eickhoff, Simon B; Jiang, Tianzi

    2015-08-15

    Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a 'core' co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies.

  15. Exploiting an ensemble of regional climate models to provide robust estimates of projected changes in monthly temperature and precipitation probability distribution functions

    NASA Astrophysics Data System (ADS)

    Tapiador, Francisco J.; Sánchez, Enrique; Romera, Raquel

    2009-01-01

    Regional climate models (RCMs) are dynamical downscaling tools aimed at improving the modelling of local physical processes. Ensembles of RCMs are widely used to improve the coarse-grain estimates of global climate models (GCMs), since the use of several RCMs helps to mitigate uncertainties arising from different dynamical cores and numerical schemes. In this paper, we analyse the differences and similarities in the climate change response for an ensemble of heterogeneous RCMs forced by one GCM (HadAM3H) and one emissions scenario (IPCC's SRES-A2 scenario). As a difference with previous approaches using the PRUDENCE database, the statistical description of climate characteristics is made through the spatial and temporal aggregation of the RCM outputs into probability distribution functions (PDFs) of monthly values. This procedure is a complementary approach to conventional seasonal analyses. Our results provide new, stronger evidence on expected marked regional differences in Europe in the A2 scenario in terms of precipitation and temperature changes. While we found an overall increase in the mean temperature and extreme values, we also found mixed regional differences for precipitation.

  16. First estimates of the probability of survival in a small-bodied, high-elevation frog (Boreal Chorus Frog, Pseudacris maculata), or how historical data can be useful

    USGS Publications Warehouse

    Muths, Erin L.; Scherer, R. D.; Amburgey, S. M.; Matthews, T.; Spencer, A. W.; Corn, P.S.

    2016-01-01

    In an era of shrinking budgets yet increasing demands for conservation, the value of existing (i.e., historical) data is elevated. Lengthy time series on common, or previously common, species are particularly valuable and may be available only through the use of historical information. We provide first estimates of the probability of survival and longevity (0.67–0.79 and 5–7 years, respectively) for a subalpine population of a small-bodied, ostensibly common amphibian, the Boreal Chorus Frog (Pseudacris maculata (Agassiz, 1850)), using historical data and contemporary, hypothesis-driven information-theoretic analyses. We also test a priori hypotheses about the effects of color morph (as suggested by early reports) and of drought (as suggested by recent climate predictions) on survival. Using robust mark–recapture models, we find some support for early hypotheses regarding the effect of color on survival, but we find no effect of drought. The congruence between early findings and our analyses highlights the usefulness of historical information in providing raw data for contemporary analyses and context for conservation and management decisions.

  17. Probability Estimates of Solar Particle Event Doses During a Period of Low Sunspot Number for Thinly-Shielded Spacecraft and Short Duration Missions

    NASA Technical Reports Server (NTRS)

    Atwell, William; Tylka, Allan J.; Dietrich, William; Rojdev, Kristina; Matzkind, Courtney

    2016-01-01

    In an earlier paper (Atwell et al., 2015), we investigated solar particle event (SPE) radiation exposures (absorbed dose) to small, thinly-shielded spacecraft during a period when the sunspot number (SSN) was less than 30. These SPEs comprise Ground Level Events (GLEs), sub-GLEs, and sub-sub-GLEs (Tylka and Dietrich, 2009; Tylka and Dietrich, 2008; Atwell et al., 2008). GLEs are extremely energetic solar particle events having proton energies extending into the several-GeV range and producing secondary particles in the atmosphere, mostly neutrons, observed with ground station neutron monitors. Sub-GLE events are less energetic, extending into the several-hundred-MeV range, but do not produce secondary atmospheric particles. Sub-sub-GLEs are even less energetic, with an observable increase in protons at energies greater than 30 MeV but no observable proton flux above 300 MeV. In this paper, we consider those SPEs that occurred during 1973-2010 when the SSN was greater than 30 but less than 50. In addition, we provide probability estimates of absorbed dose based on mission duration with a 95% confidence level (CL). We also discuss the implications of these data and provide some recommendations that may be useful to designers of these smaller spacecraft.

  18. Normal tissue complication probability model parameter estimation for xerostomia in head and neck cancer patients based on scintigraphy and quality of life assessments

    PubMed Central

    2012-01-01

    Background With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with a NPV of 100%, using either the QoL or SEF dataset. Conclusions Our study shows the agreement between the NTCP
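
    For reference, the LKB model used in this record reduces to three equations: gEUD = (Σ v_i·D_i^(1/n))^n over the differential dose-volume histogram, t = (gEUD − TD50)/(m·TD50), and NTCP = Φ(t), the standard normal CDF. The sketch below plugs in the QoL-derived parameters reported in the abstract (TD50 = 44.1 Gy, m = 0.11, and the assumed n = 1) with an invented toy DVH; it illustrates the model form, not the study's dose data.

      import numpy as np
      from math import erf, sqrt

      def lkb_ntcp(doses, volumes, td50, m, n):
          """Lyman-Kutcher-Burman NTCP from a differential DVH:
          gEUD = (sum v_i * D_i**(1/n))**n, t = (gEUD - TD50)/(m*TD50),
          NTCP = Phi(t). Returns (NTCP, gEUD)."""
          v = np.asarray(volumes) / np.sum(volumes)
          geud = np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n
          t = (geud - td50) / (m * td50)
          return 0.5 * (1 + erf(t / sqrt(2))), geud

      # Toy parotid DVH (doses in Gy, fractional volumes); with n=1 the
      # gEUD is simply the mean dose.
      ntcp, geud = lkb_ntcp([10, 30, 50], [0.2, 0.5, 0.3],
                            td50=44.1, m=0.11, n=1)
      print(round(geud, 1), "Gy ->", round(ntcp, 3))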

  19. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models

  20. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  1. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are <1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  2. Abstract Models of Probability

    NASA Astrophysics Data System (ADS)

    Maximov, V. M.

    2001-12-01

    Probability theory presents a mathematical formalization of intuitive ideas of independent events and of probability as a measure of randomness. It is based on axioms 1-5 of A.N. Kolmogorov [1] and their generalizations [2]. Different formalized refinements were proposed for such notions as events, independence, random value, etc. [2,3], whereas the measure of randomness, i.e. numbers from [0,1], remained unchanged. To be precise, we mention some attempts at generalization of probability theory with negative probabilities [4]. From another side, physicists tried to use negative and even complex values of probability to explain some paradoxes in quantum mechanics [5,6,7]. Only recently, the necessity of formalization of quantum mechanics and its foundations [8] led to the construction of p-adic probabilities [9,10,11], which essentially extended our concept of probability and randomness. Therefore, a natural question arises: how to describe algebraic structures whose elements can be used as a measure of randomness. As a consequence, a necessity arises to define the types of randomness corresponding to every such algebraic structure. Possibly, this leads to another concept of randomness with a nature different from the combinatorial-metric conception of Kolmogorov. Apparently, a discrepancy between the real type of randomness in some experimental data and the model of randomness used for data processing leads to paradoxes [12]. An algebraic structure whose elements can be used to estimate randomness will be called a probability set Φ. Naturally, the elements of Φ are the probabilities.

  3. Design and Analysis of Salmonid Tagging Studies in the Columbia Basin, Volume VIII; New Model for Estimating Survival Probabilities and Residualization from a Release-Recapture Study of Fall Chinook Salmon Smolts in the Snake River, 1995 Technical Report.

    SciTech Connect

    Lowther, Alan B.; Skalski, John R.

    1997-09-01

    Standard release-recapture analyses using Cormack-Jolly-Seber (CJS) models to estimate survival probabilities between hydroelectric facilities for Snake River fall chinook salmon (Oncorhynchus tschawytscha) ignore the possibility of individual fish residualizing and completing their migration in the year following tagging.

  4. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
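
    The Monte Carlo component described here amounts to sampling relative positions from the combined covariance and counting samples that fall inside the combined hard-body sphere. A self-contained sketch with invented geometry (the ESC process uses operational state vectors and covariances, not these numbers):

      import numpy as np

      rng = np.random.default_rng(0)

      # Combined relative-position covariance (km^2) and miss vector (km)
      # at closest approach -- illustrative numbers only.
      cov = np.diag([0.04, 0.25, 0.01])
      miss = np.array([0.3, 0.5, 0.0])
      hard_body_radius = 0.020      # 20 m combined radius, in km

      samples = rng.multivariate_normal(miss, cov, size=1_000_000)
      p_collision = (np.linalg.norm(samples, axis=1) < hard_body_radius).mean()
      print(f"Pc ~ {p_collision:.2e}")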

  6. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    2003-01-01

    The T-year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T-year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank-transformed data and therefore that knowledge of the at-site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100-year flood will occur on the average every 4.5 years.
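
    Because rank transformation makes the marginal distribution irrelevant, the RFP calculation can be sketched with a Gaussian copula: simulate correlated annual values, threshold each site at its (1 - 1/T) quantile, and count years in which any site exceeds it. An illustrative sketch for an equicorrelated 20-site network (parameters invented, not the Puget Sound values):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      n_sites, rho, T, years = 20, 0.4, 100, 200_000

      # Equicorrelated Gaussian copula for annual peaks (illustrative).
      cov = np.full((n_sites, n_sites), rho) + (1 - rho) * np.eye(n_sites)
      z = rng.multivariate_normal(np.zeros(n_sites), cov, size=years)

      threshold = stats.norm.ppf(1 - 1 / T)   # T-year level at each site
      rfp = (z > threshold).any(axis=1).mean()
      print("RFP:", rfp, " (about one regional event per",
            round(1 / rfp, 1), "years)")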

  7. Estimating allele dropout probabilities by logistic regression: Assessments using Applied Biosystems 3500xL and 3130xl Genetic Analyzers with various commercially available human identification kits.

    PubMed

    Inokuchi, Shota; Kitayama, Tetsushi; Fujii, Koji; Nakahara, Hiroaki; Nakanishi, Hiroaki; Saito, Kazuyuki; Mizuno, Natsuko; Sekiguchi, Kazumasa

    2016-03-01

    Phenomena called allele dropouts are often observed in crime stain profiles. Allele dropouts occur when one of a pair of heterozygous alleles is underrepresented owing to stochastic influences and falls below the peak detection threshold. Therefore, it is important that such risks be statistically evaluated. In recent years, attempts to interpret allele dropout probabilities by logistic regression using information on peak heights have been reported. However, these previous studies are limited to the use of a single human identification kit and fragment analyzer. In the present study, we calculated allele dropout probabilities by logistic regression using contemporary capillary electrophoresis instruments, the 3500xL Genetic Analyzer and the 3130xl Genetic Analyzer, with various commercially available human identification kits such as the AmpFℓSTR® Identifiler® Plus PCR Amplification Kit. Furthermore, the differences in logistic curves between peak detection thresholds using the analytical threshold (AT) and the values recommended by the manufacturer were compared. The standard logistic curves for calculating allele dropout probabilities from the peak height of sister alleles were characterized. The present study confirmed that ATs were lower than the values recommended by the manufacturer in human identification kits; therefore, it is possible to reduce allele dropout probabilities and obtain more information by using the AT as the peak detection threshold.
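
    The modeling step is ordinary logistic regression of a dropout indicator on the (log) peak height of the surviving sister allele. A sketch on synthetic data (the coefficients and the 400-unit midpoint below are invented for illustration, not the paper's estimates):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)

      # Stand-in training data: peak height of the surviving sister
      # allele (x) and whether its partner dropped out (y).
      height = rng.uniform(50, 2000, size=500)
      p_drop = 1 / (1 + np.exp(0.01 * (height - 400)))   # synthetic truth
      dropout = rng.random(500) < p_drop

      model = LogisticRegression().fit(np.log10(height)[:, None], dropout)
      # Predicted dropout probability at three peak heights:
      print(model.predict_proba(np.log10([[100], [500], [1500]]))[:, 1])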

  8. Determining Type Ia Supernova Host Galaxy Extinction Probabilities and a Statistical Approach to Estimating the Absorption-to-reddening Ratio RV

    NASA Astrophysics Data System (ADS)

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine

    2016-03-01

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B - V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, RV, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B - V) with RV = 3.1 and investigate the color excess probabilities E(B - V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa-Sap, Sab-Sbp, Sbc-Scp, Scd-Sdm, S0, and irregular galaxy classes as a function of R/R25. We find that the largest expected reddening probabilities are in Sab-Sb and Sbc-Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio RV using color excess probability functions and find values of RV = 2.71 ± 1.58 for 21 SNe Ia observed in Sab-Sbp galaxies, and RV = 1.70 ± 0.38, for 34 SNe Ia observed in Sbc-Scp galaxies.

  11. My lived experiences are more important than your probabilities: The role of individualized risk estimates for decision making about participation in the Study of Tamoxifen and Raloxifene (STAR)

    PubMed Central

    Holmberg, Christine; Waters, Erika A.; Whitehouse, Katie; Daly, Mary; McCaskill-Stevens, Worta

    2015-01-01

    Background Decision making experts emphasize that understanding and using probabilistic information is important for making informed decisions about medical treatments involving complex risk-benefit tradeoffs. Yet empirical research demonstrates that individuals may not use probabilities when making decisions. Objectives To explore decision making and the use of probabilities for decision making from the perspective of women who were risk-eligible to enroll in the Study of Tamoxifen and Raloxifene (STAR). Methods We conducted narrative interviews with 20 women who agreed to participate in STAR and 20 women who declined. The project was based on a narrative approach. Analysis included developing summaries of each narrative and a thematic analysis, with a coding scheme developed inductively and used to code all transcripts and identify emerging themes. Results Interviewees explained and embedded their STAR decisions within experiences encountered throughout their lives. Such lived experiences included but were not limited to breast cancer family history, personal history of breast biopsies, and experiences or assumptions about taking tamoxifen or medicines more generally. Conclusions Women’s explanations of their decisions about participating in a breast cancer chemoprevention trial were more complex than decision strategies that rely solely on a quantitative risk-benefit analysis of probabilities derived from populations. In addition to precise risk information, clinicians and risk communicators should recognize the importance and legitimacy of lived experience in individual decision making. PMID:26183166

  12. Minimizing the probable maximum flood

    SciTech Connect

    Woodbury, M.S.; Pansic, N.; Eberlein, D.T.

    1994-06-01

    This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.

  13. Estimating the probability of elevated nitrate (NO2+NO3-N) concentrations in ground water in the Columbia Basin Ground Water Management Area, Washington

    USGS Publications Warehouse

    Frans, Lonna M.

    2000-01-01

    Logistic regression was used to relate anthropogenic (man-made) and natural factors to the occurrence of elevated concentrations of nitrite plus nitrate as nitrogen in ground water in the Columbia Basin Ground Water Management Area, eastern Washington. Variables that were analyzed included well depth, depth of well casing, ground-water recharge rates, presence of canals, fertilizer application amounts, soils, surficial geology, and land-use types. The variables that best explain the occurrence of nitrate concentrations above 3 milligrams per liter in wells were the amount of fertilizer applied annually within a 2-kilometer radius of a well and the depth of the well casing; the variables that best explain the occurrence of nitrate above 10 milligrams per liter included the amount of fertilizer applied annually within a 3-kilometer radius of a well, the depth of the well casing, and the mean soil hydrologic group, which is a measure of soil infiltration rate. Based on the relations between these variables and elevated nitrate concentrations, models were developed using logistic regression that predict the probability that ground water will exceed a nitrate concentration of either 3 milligrams per liter or 10 milligrams per liter. Maps were produced that illustrate the predicted probability that ground-water nitrate concentrations will exceed 3 milligrams per liter or 10 milligrams per liter for wells cased to 78 feet below land surface (median casing depth) and the predicted depth to which wells would need to be cased in order to have an 80-percent probability of drawing water with a nitrate concentration below either 3 milligrams per liter or 10 milligrams per liter. Maps showing the predicted probability for the occurrence of elevated nitrate concentrations indicate that the irrigated agricultural regions are most at risk. The predicted depths to which wells need to be cased in order to have an 80-percent chance of obtaining low nitrate ground water exceed 600 feet
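
    The report's models are not reproduced here, but the form of such a prediction is easy to sketch (all coefficients below are invented for illustration; the 78-foot median casing depth is the only number taken from the abstract):

      import math

      def p_exceed_10(fertilizer_kg, casing_depth_ft,
                      b0=-1.5, b_fert=0.004, b_depth=-0.006):
          """Logistic probability that nitrate exceeds 10 mg/L (toy model)."""
          z = b0 + b_fert * fertilizer_kg + b_depth * casing_depth_ft
          return 1.0 / (1.0 + math.exp(-z))

      # Probability for a well cased to the 78-foot median depth.
      print(round(p_exceed_10(500.0, 78.0), 2))

      def casing_depth_for(p_target, fertilizer_kg,
                           b0=-1.5, b_fert=0.004, b_depth=-0.006):
          """Invert the logit: casing depth giving P(exceedance) = p_target."""
          z = math.log(p_target / (1.0 - p_target))
          return (z - b0 - b_fert * fertilizer_kg) / b_depth

      # Depth needed for an 80% chance of low-nitrate water, i.e. P <= 0.2.
      print(round(casing_depth_for(0.2, 500.0), 1))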

  14. The Probabilities of Unique Events

    PubMed Central

    Khemlani, Sangeet S.; Lotstein, Max; Johnson-Laird, Phil

    2012-01-01

    Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224
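
    The "split the difference" prediction is easy to state numerically (the probabilities below are invented):

      # An intuitive conjunction estimate near the mean of the two conjuncts
      # violates the probability calculus, which requires P(A and B) <= min.
      p_a, p_b = 0.8, 0.3
      intuitive_conjunction = (p_a + p_b) / 2
      print(intuitive_conjunction, min(p_a, p_b))  # 0.55 > 0.3: a violation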

  15. Imprecise probabilities in engineering analyses

    NASA Astrophysics Data System (ADS)

    Beer, Michael; Ferson, Scott; Kreinovich, Vladik

    2013-05-01

    Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which involve both probabilistic and non-probabilistic information. A common feature of the various concepts of imprecise probabilities is the consideration of an entire set of probabilistic models in one analysis. The theoretical differences between the concepts mainly concern the mathematical description of the set of probabilistic models and the connection to the probabilistic models involved. This paper provides an overview on developments which involve imprecise probabilities for the solution of engineering problems. Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another. This paper was especially prepared for this special issue and reflects, in various ways, the thinking and presentation preferences of the authors, who are also the guest editors for this special issue.
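
    As a concrete toy instance of one concept surveyed above, a probability box is just a pair of bounding CDFs over a set of candidate models (a minimal sketch; the normal family and the interval [1, 3] for the mean are assumptions):

      import numpy as np
      from scipy import stats

      x = np.linspace(-5.0, 10.0, 300)
      # Candidate models: normals whose mean is only known to lie in [1, 3].
      cdfs = [stats.norm(mu, 1.5).cdf(x) for mu in np.linspace(1.0, 3.0, 21)]
      lower_cdf = np.min(cdfs, axis=0)  # lower envelope of the model set
      upper_cdf = np.max(cdfs, axis=0)  # upper envelope of the model set

      # Interval bounds on P(X <= 0) under the whole model set (the p-box).
      i = np.searchsorted(x, 0.0)
      print(f"P(X <= 0) in [{lower_cdf[i]:.3f}, {upper_cdf[i]:.3f}]")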

  16. Simulation of realistic retinoscopic measurement

    NASA Astrophysics Data System (ADS)

    Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.

    2007-03-01

    Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data of ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results show the clinical observations including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to other and more extensively abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.

  17. Electromagnetic Scattering from Realistic Targets

    NASA Technical Reports Server (NTRS)

    Lee, Shung-Wu; Jin, Jian-Ming

    1997-01-01

    The general goal of the project is to develop computational tools for calculating radar signature of realistic targets. A hybrid technique that combines the shooting-and-bouncing-ray (SBR) method and the finite-element method (FEM) for the radiation characterization of microstrip patch antennas in a complex geometry was developed. In addition, a hybridization procedure to combine moment method (MoM) solution and the SBR method to treat the scattering of waveguide slot arrays on an aircraft was developed. A list of journal articles and conference papers is included.

  18. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013

    USGS Publications Warehouse

    Eash, David A.

    2015-01-01

    An examination was conducted to understand why the 1987 single-variable regional regression equations (RREs) seem to provide better accuracy and less bias than either of the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1-4 from the 1987 single-variable RREs and for flood regions 1-3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs, particularly for drainage areas less than 2 mi2 and for some drainage areas between 2 and 20 mi2. The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and annual exceedance-probability discharges (AEPDs) for areas of Iowa.

  19. Arbuscular mycorrhizal propagules in soils from a tropical forest and an abandoned cornfield in Quintana Roo, Mexico: visual comparison of most-probable-number estimates.

    PubMed

    Ramos-Zapata, José A; Guadarrama, Patricia; Navarro-Alberto, Jorge; Orellana, Roger

    2011-02-01

    The present study was aimed at comparing the number of arbuscular mycorrhizal fungi (AMF) propagules found in soil from a mature tropical forest with that found in an abandoned cornfield in Noh-Bec, Quintana Roo, Mexico, during three seasons. Agricultural practices can dramatically reduce the availability and viability of AMF propagules, and in this way delay the regeneration of tropical forests in abandoned agricultural areas. In addition, rainfall seasonality, which characterizes deciduous tropical forests, may strongly influence AMF propagule density. To compare AMF propagule numbers between sites and seasons (summer rainy, winter rainy, and dry season), a "most probable number" (MPN) bioassay was conducted under greenhouse conditions employing Sorghum vulgare L. as the host plant. Results showed an average value of 3.5 ± 0.41 propagules in 50 ml of soil for the mature forest, while the abandoned cornfield had 15.4 ± 5.03 propagules in 50 ml of soil. Likelihood analysis showed no statistical differences in the MPN of propagules between seasons within each site, or between sites, except for the summer rainy season, for which soil from the abandoned cornfield had eight times as many propagules as soil from the mature forest site. Propagules of arbuscular mycorrhizal fungi remained viable throughout the sampling seasons at both sites. Abandoned areas resulting from traditional slash-and-burn agriculture practices involving maize did not show a lower number of AMF propagules, which should allow the establishment of mycotrophic plants, thus maintaining the AMF inoculum potential in these soils.
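
    For readers unfamiliar with MPN estimation, a minimal sketch of the underlying likelihood calculation follows (the dilution series and pot counts are invented, not the study's data):

      import numpy as np
      from scipy.optimize import minimize_scalar

      # (soil volume per pot in ml, positive pots, total pots) per dilution.
      series = [(10.0, 5, 5), (1.0, 3, 5), (0.1, 1, 5)]

      def neg_log_lik(density):
          """Poisson model: P(pot positive) = 1 - exp(-density * volume)."""
          ll = 0.0
          for vol, pos, n in series:
              p_pos = 1.0 - np.exp(-density * vol)
              ll += pos * np.log(p_pos) + (n - pos) * (-density * vol)
          return -ll

      # The MPN is the propagule density that maximizes the likelihood.
      res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
      print(f"MPN ~ {res.x:.2f} propagules per ml of soil")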

  1. Development of realistic RDD scenarios and their radiological consequence analyses.

    PubMed

    Shin, Hyeongki; Kim, Juyoul

    2009-01-01

    The terrorist attack on September 11, 2001, brought about deep interest in the radiological dispersal device (RDD) and the malevolent radiological event. In this study, realistic potential scenarios using RDDs were developed. Among the probable radionuclides, (137)Cs and (241)Am were selected to simulate the radiological effects caused by a dirty bomb. Their radiological consequences were assessed in terms of total effective dose, projected cumulative external and internal dose, and ground deposition of radioactivity. PMID:19318261

  3. Probability distributions for magnetotellurics

    SciTech Connect

    Stodt, John A.

    1982-11-01

    Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
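
    The setup lends itself to a quick numerical check (a sketch under assumed values: a made-up transfer function, 25% errors, and a simple circular complex error model):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200_000
      true_num, true_den = 2.0 + 1.0j, 1.0 + 0.0j
      err = 0.25  # relative error in numerator and denominator

      def noisy(z, rel_err, size):
          """Add circular complex Gaussian noise of the given relative size."""
          noise = rng.standard_normal(size) + 1j * rng.standard_normal(size)
          return z + rel_err * abs(z) * noise / np.sqrt(2.0)

      z = noisy(true_num, err, n) / noisy(true_den, err, n)

      # Compare the empirical spread of the phase and of the log squared
      # magnitude, the two quantities the record reports as closest to normal.
      print("phase mean/std:", np.angle(z).mean(), np.angle(z).std())
      log_mag2 = np.log(np.abs(z) ** 2)
      print("log|Z|^2 mean/std:", log_mag2.mean(), log_mag2.std())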

  4. Using hierarchical Bayesian multi-species mixture models to estimate tandem hoop-net based habitat associations and detection probabilities of fishes in reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths, whereas other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline length and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.

  5. Realistic Solar Surface Convection Simulations

    NASA Technical Reports Server (NTRS)

    Stein, Robert F.; Nordlund, Ake

    2000-01-01

    We perform essentially parameter free simulations with realistic physics of convection near the solar surface. We summarize the physics that is included and compare the simulation results with observations. Excellent agreement is obtained for the depth of the convection zone, the p-mode frequencies, the p-mode excitation rate, the distribution of the emergent continuum intensity, and the profiles of weak photospheric lines. We describe how solar convection is nonlocal. It is driven from a thin surface thermal boundary layer where radiative cooling produces low entropy gas which forms the cores of the downdrafts in which most of the buoyancy work occurs. We show that turbulence and vorticity are mostly confined to the intergranular lanes and underlying downdrafts. Finally, we illustrate our current work on magneto-convection.

  6. Holographic probabilities in eternal inflation.

    PubMed

    Bousso, Raphael

    2006-11-10

    In the global description of eternal inflation, probabilities for vacua are notoriously ambiguous. The local point of view is preferred by holography and naturally picks out a simple probability measure. It is insensitive to large expansion factors or lifetimes and so resolves a recently noted paradox. Any cosmological measure must be complemented with the probability for observers to emerge in a given vacuum. In lieu of anthropic criteria, I propose to estimate this by the entropy that can be produced in a local patch. This allows for prior-free predictions.

  7. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
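
    One classic member of this family (not necessarily among the article's three problems) is the derangement problem, easy to check numerically:

      import math, random

      def p_no_fixed_point(n, trials=100_000):
          """Probability a random permutation of n items has no fixed point."""
          hits = 0
          for _ in range(trials):
              p = list(range(n))
              random.shuffle(p)
              hits += all(p[i] != i for i in range(n))
          return hits / trials

      print(p_no_fixed_point(10), "vs 1/e =", round(1.0 / math.e, 5))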

  8. Realistic camera noise modeling with application to improved HDR synthesis

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Aelterman, Jan; Pižurica, Aleksandra; Philips, Wilfried

    2012-12-01

    Due to the ongoing miniaturization of digital camera sensors and the steady increase in the number of megapixels, individual sensor elements of the camera become more sensitive to noise, which can deteriorate the final image quality. To work around this problem, sophisticated processing algorithms in the devices can help to maximally exploit knowledge of the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping, and post-processing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR image. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE (or SNR) sense for the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be computed directly from the noise model parameters, which gives rise to different symmetric and asymmetric shapes when electronic noise or photon noise is dominant. We also explain how to deal with the case that some of the noise model parameters are unknown and how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
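
    A schematic of the weighted-average fusion being improved on (the weight here is a naive saturation-aware placeholder, not the paper's noise-model-derived weighting function):

      import numpy as np

      def fuse_hdr(ldr_stack, exposure_times, saturation=0.98):
          """Estimate irradiance from linearized LDR frames valued in [0, 1]."""
          num = np.zeros_like(ldr_stack[0])
          den = np.zeros_like(ldr_stack[0])
          for img, t in zip(ldr_stack, exposure_times):
              w = np.where(img < saturation, img, 0.0)  # placeholder weight
              num += w * img / t  # each frame votes for irradiance = img / t
              den += w
          return num / np.maximum(den, 1e-12)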

  9. Towards a realistic echographic simulator.

    PubMed

    d'Aulignac, D; Laugier, C; Troccaz, J; Vieira, S

    2006-02-01

    Echography is a useful tool to diagnose a thrombosis; however, since it is difficult to learn to perform this procedure, the objective of this work is to create a simulation that allows students to practice in a virtual environment. Firstly, a physical model of the thigh was constructed based on experimental data obtained using a force sensor mounted on a robotic arm. We present a spring-damper model consisting of both linear and non-linear elements. The parameters of each of these elements are then fitted to the experimental data using an optimization technique. By employing implicit integration to solve the dynamics of the system we obtain a stable physical simulation at over 100 Hz. Secondly, a haptic interface was added to interact with the simulation. Using a PHANToM force-feedback device, the user may touch and deform the thigh in real time. In order to allow a realistic sensation of the contact we employ a local modeling technique that approximates the forces at a much higher frequency using a multi-threaded architecture. Finally, we present the basis for fast echographic image generation depending on the position and orientation of the virtual probe as well as the force applied to it.
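
    The stability claim rests on implicit integration; a one-dimensional version of such an update is sketched below (a toy element, not the simulator's full network; parameters are arbitrary):

      def implicit_euler_step(x, v, k, c, m, dt, x_rest=0.0):
          """One implicit-Euler step for m*v' = -k*(x - x_rest) - c*v, x' = v.

          Substituting x_new = x + dt*v_new into the velocity update gives a
          single linear equation for dv, solved in closed form below.
          """
          a = m + dt * c + dt * dt * k
          b = -dt * (k * (x - x_rest) + c * v) - dt * dt * k * v
          dv = b / a
          v_new = v + dv
          return x + dt * v_new, v_new

      # Example: a stiff element stepped at 100 Hz stays bounded.
      x, v = 0.05, 0.0
      for _ in range(100):
          x, v = implicit_euler_step(x, v, k=5000.0, c=20.0, m=0.1, dt=0.01)
      print(x, v)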

  10. Norwegian lottery winners: Cautious realists.

    PubMed

    Eckblad, G F; von der Lippe, A L

    1994-12-01

    The study investigated 261 lottery winners of prizes of NKR 1 million (US $150,000) or more in the years 1987-91 in a postal survey. The modal Norwegian winners were middle-aged married men of modest education, living in small communities. Emotional reactions to winning were few, aside from moderate happiness and relief. Winners emphasized caution, emotional control and unconspicuous spending, e.g. paying debts and sharing with children. There was only a slight increase in economic spending. A wish for anonymity was frequent, together with fear of envy from others. Betting was modest both before and after winning. Experiences with winning were predominantly positive. Life quality was stable or had improved. An age trend was observed, accounting for more variance than any other variable. The older winners seemed to represent a puritan subculture of caution, modesty and emotional restraint. A slightly more impatient pattern of spending was characteristic of younger winners. The results support Kaplan's 1987 and others' findings that lottery winners are not gamblers, but self-controlled realists and that tenacious, negative cultural expectations to the contrary are myths, but perhaps also deterrents of uncontrolled behavior.

  11. Causal effect models for realistic individualized treatment and intention to treat rules.

    PubMed

    van der Laan, Mark J; Petersen, Maya L

    2007-01-01

    Marginal structural models (MSM) are an important class of models in causal inference. Given a longitudinal data structure observed on a sample of n independent and identically distributed experimental units, MSM model the counterfactual outcome distribution corresponding with a static treatment intervention, conditional on user-supplied baseline covariates. Identification of a static treatment regimen-specific outcome distribution based on observational data requires, beyond the standard sequential randomization assumption, the assumption that each experimental unit has positive probability of following the static treatment regimen. The latter assumption is called the experimental treatment assignment (ETA) assumption, and is parameter-specific. In many studies the ETA is violated because some of the static treatment interventions to be compared cannot be followed by all experimental units, due either to baseline characteristics or to the occurrence of certain events over time. For example, the development of adverse effects or contraindications can force a subject to stop an assigned treatment regimen. In this article we propose causal effect models for a user-supplied set of realistic individualized treatment rules. Realistic individualized treatment rules are defined as treatment rules which always map into the set of possible treatment options. Thus, causal effect models for realistic treatment rules do not rely on the ETA assumption and are fully identifiable from the data. Further, these models can be chosen to generalize marginal structural models for static treatment interventions. The estimating function methodology of Robins and Rotnitzky (1992) (analogous to its application in Murphy et al. (2001) for a single treatment rule) provides us with the corresponding locally efficient double robust inverse probability of treatment weighted estimator. In addition, we define causal effect models for "intention-to-treat" regimens. The proposed intention
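
    A schematic single-time-point analogue of such an estimator (a sketch only: the article's estimator is longitudinal and doubly robust; the column names, covariates, and truncation level here are hypothetical):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def iptw_rule_mean(df, rule):
          """Estimate E[Y under rule] by inverse-probability-of-treatment weighting.

          df: pandas DataFrame with covariates w1, w2, binary treatment a,
          and outcome y; rule maps a covariate row to a treatment in {0, 1}.
          """
          g = LogisticRegression().fit(df[["w1", "w2"]], df["a"])
          p_a1 = g.predict_proba(df[["w1", "w2"]])[:, 1]
          a_rule = df.apply(rule, axis=1)
          follows = (df["a"] == a_rule).astype(float)
          p_follow = np.where(a_rule == 1, p_a1, 1.0 - p_a1)
          w = follows / np.clip(p_follow, 0.01, None)  # truncate for positivity
          return float(np.sum(w * df["y"]) / np.sum(w))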

  12. Near-Unity Reaction Probability in Olefin Hydrogenation Promoted by Heterogeneous Metal Catalysts.

    PubMed

    Ebrahimi, Maryam; Simonovis, Juan Pablo; Zaera, Francisco

    2014-06-19

    The kinetics of the hydrogenation of ethylene on platinum surfaces was studied by using high-flux effusive molecular beams and reflection-absorption infrared spectroscopy (RAIRS). It was determined that steady-state ethylene conversion with probabilities close to unity could be achieved by using beams with ethylene fluxes equivalent to pressures in the mTorr range and high (≥100) H2:C2H4 ratios. The RAIRS data suggest that the high reaction probability is possible because such conditions lead to the removal of most of the ethylidyne layer known to form during catalysis. The observations from this study are contrasted with those under vacuum, where catalytic behavior is not sustainable, and with catalysis under more realistic atmospheric pressures, where reaction probabilities are estimated to be much lower (≤1 × 10(-5)).

  13. Failure probability of PWR reactor coolant loop piping. [Double-ended guillotine break

    SciTech Connect

    Lo, T.; Woo, H.H.; Holman, G.S.; Chou, C.K.

    1984-02-01

    This paper describes the results of assessments performed on the PWR coolant loop piping of Westinghouse and Combustion Engineering plants. For direct double-ended guillotine break (DEGB), consideration was given to crack existence probability, initial crack size distribution, hydrostatic proof test, preservice inspection, leak detection probability, crack growth characteristics, and failure criteria based on the net section stress failure and tearing modulus stability concept. For indirect DEGB, fragilities of major component supports were estimated. The system level fragility was then calculated based on the Boolean expression involving these fragilities. Indirect DEGB due to seismic effects was calculated by convolving the system level fragility and the seismic hazard curve. The results indicate that the probability of occurrence of both direct and indirect DEGB is extremely small; thus, postulation of DEGB in design should be eliminated and replaced by more realistic criteria.
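
    Numerically, the indirect-DEGB convolution has this shape (a toy calculation; the fragility parameters and hazard curve are invented):

      import numpy as np
      from scipy import stats

      a = np.linspace(0.01, 3.0, 300)  # peak ground acceleration (g)
      # Lognormal fragility: P(support failure | acceleration a).
      fragility = stats.norm.cdf(np.log(a / 1.2) / 0.4)
      # Annual frequency of exceeding acceleration a (seismic hazard curve).
      hazard = 1e-3 * (a / 0.1) ** -2.5
      rate_density = -np.gradient(hazard, a)  # annual rate per unit of a

      annual_p_failure = np.trapz(fragility * rate_density, a)
      print(f"annual probability of indirect DEGB ~ {annual_p_failure:.2e}")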

  14. On Probability Domains III

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2015-12-01

    Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.

  15. On the Probability of Random Genetic Mutations for Various Types of Tumor Growth

    PubMed Central

    2013-01-01

    In this work, we consider the problem of estimating the probability for a specific random genetic mutation to be present in a tumor of a given size. Previous mathematical models have been based on stochastic methods where the tumor was assumed to be homogeneous and, on average, growing exponentially. In contrast, we are able to obtain analytical results for cases where the exponential growth of cancer has been replaced by other, arguably more realistic types of growth of a heterogeneous tumor cell population. Our main result is that the probability that a given random mutation will be present by the time a tumor reaches a certain size is independent of the type of curve assumed for the average growth of the tumor, at least for a general class of growth curves. The same is true for the related estimate of the expected number of mutants present in a tumor of a given size, if mutants are indeed present. PMID:22311065
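
    The independence claim has a simple intuition: reaching size N takes N - 1 cell divisions no matter how those divisions are spread over time. A toy simulation (ignoring cell death; the mutation rate and size are invented):

      import math, random

      def p_mutant_at_size(n_final, mu, trials=1000):
          """P(some mutant arose) by the time a tumor of n_final cells has grown."""
          hits = 0
          for _ in range(trials):
              for _division in range(n_final - 1):
                  if random.random() < mu:  # mutation at this division
                      hits += 1
                      break
          return hits / trials

      print(p_mutant_at_size(2_000, 5e-4))   # simulated
      print(1.0 - math.exp(-5e-4 * 1_999))   # matching 1 - exp(-mu*(N-1))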

  16. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, the illumination stays consistent between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme ensures gentle handling of illumination and yet yields realistic texture. Because the blending affects only non-informative areas, the main facial features retain their unique appearance in the two face halves. Evaluation results reveal realistic rendering in novel poses robust to challenging illumination conditions and small registration errors.
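
    The blending pass reduces to a per-texel convex combination (a sketch; the array layouts and visibility map are assumed inputs):

      import numpy as np

      def blend_texture(base_texture, mirrored_texture, visibility):
          """Keep the real/virtual texture where visible; fade toward the
          mirrored texture as visibility drops (visibility in [0, 1])."""
          w = np.clip(visibility, 0.0, 1.0)[..., None]  # broadcast over RGB
          return w * base_texture + (1.0 - w) * mirrored_texture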

  17. Realistic Detectability of Close Interstellar Comets

    NASA Astrophysics Data System (ADS)

    Cook, Nathaniel V.; Ragozzine, Darin; Granvik, Mikael; Stephens, Denise C.

    2016-07-01

    During the planet formation process, billions of comets are created and ejected into interstellar space. The detection and characterization of such interstellar comets (ICs) (also known as extra-solar planetesimals or extra-solar comets) would give us in situ information about the efficiency and properties of planet formation throughout the galaxy. However, no ICs have ever been detected, despite the fact that their hyperbolic orbits would make them readily identifiable as unrelated to the solar system. Moro-Martín et al. have made a detailed and reasonable estimate of the properties of the IC population. We extend their estimates of detectability with a numerical model that allows us to consider “close” ICs, e.g., those that come within the orbit of Jupiter. We include several constraints on a “detectable” object that allow for realistic estimates of the frequency of detections expected from the Large Synoptic Survey Telescope (LSST) and other surveys. The influence of several of the assumed model parameters on the frequency of detections is explored in detail. Based on the expectation from Moro-Martín et al., we expect that LSST will detect 0.001–10 ICs during its nominal 10 year lifetime, with most of the uncertainty from the unknown number density of small (nuclei of ~0.1–1 km) ICs. Both asteroid and comet cases are considered, where the latter includes various empirical prescriptions of brightening. Using simulated LSST-like astrometric data, we study the problem of orbit determination for these bodies, finding that LSST could identify their orbits as hyperbolic and determine an ephemeris sufficiently accurate for follow-up in about 4–7 days. We give the hyperbolic orbital parameters of the most detectable ICs. Taking the results into consideration, we give recommendations to future searches for ICs.

  18. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
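
    The heterogeneity being modeled is simply stated (parameters invented): per-unit correct-classification probabilities are normal on the logit scale.

      import numpy as np

      rng = np.random.default_rng(7)
      mu, sigma = 1.5, 0.8  # logit-scale mean and standard deviation
      logits = rng.normal(mu, sigma, size=10)
      p_correct = 1.0 / (1.0 + np.exp(-logits))  # varies by sampling unit
      print(np.round(p_correct, 2))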

  19. Evaluating species richness: biased ecological inference results from spatial heterogeneity in species detection probabilities

    USGS Publications Warehouse

    McNew, Lance B.; Handel, Colleen M.

    2015-01-01

    Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our results
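
    For reference, one of the single-visit estimators evaluated above takes only a few lines of code (the counts are invented):

      def chao1(counts):
          """Chao1 lower-bound richness estimate from per-species counts."""
          s_obs = sum(c > 0 for c in counts)
          f1 = sum(c == 1 for c in counts)  # singletons
          f2 = sum(c == 2 for c in counts)  # doubletons
          if f2 == 0:
              return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
          return s_obs + f1 * f1 / (2.0 * f2)

      print(chao1([5, 3, 1, 1, 2, 1, 0, 2]))  # 7 observed -> 9.25 estimated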

  20. Evaluating species richness: Biased ecological inference results from spatial heterogeneity in detection probabilities.

    PubMed

    McNew, Lance B; Handel, Colleen M

    2015-09-01

    Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multispecies occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real data set of bird observations in northwestern Alaska, USA, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our

  2. Postpositivist Realist Theory: Identity and Representation Revisited

    ERIC Educational Resources Information Center

    Gilpin, Lorraine S.

    2006-01-01

    In postpositivist realist theory, people like Paula Moya (2000) and Satya Mohanty (2000) make a space that at once reflects and informs my location as a Third-World woman of color and a Black-immigrant educator in the United States. In postpositivist realist theory, understanding emerges from one's past and present experiences and interactions as…

  3. Binary neutron stars with realistic spin

    NASA Astrophysics Data System (ADS)

    Tichy, Wolfgang; Bernuzzi, Sebastiano; Dietrich, Tim; Bruegmann, Bernd

    2014-03-01

    Astrophysical neutron stars are expected to be spinning. We present the first fully nonlinear general relativistic dynamical evolutions of the last three orbits for constraint-satisfying initial data of spinning neutron star binaries, with astrophysically realistic spins aligned and anti-aligned to the orbital angular momentum. The dynamics of the systems are analyzed in terms of gauge-invariant binding energy vs. orbital angular momentum curves. By comparing to a binary black hole configuration we can estimate the different tidal and spin contributions to the binding energy for the first time. First results on the gravitational waveforms are presented. The phase evolution of the gravitational waves during the orbital motion is significantly affected by spin-orbit interactions, leading to delayed or early mergers. Furthermore, a frequency shift in the main emission mode of the hypermassive neutron star is observed. Our results suggest that detailed modeling of merger waveforms requires the inclusion of spin, even for the moderate magnitudes observed in binary neutron star systems. This work was supported by NSF grants PHY-1204334, PHY-1305387 and DFG grant SFB/Transregio 7.

  4. Realistic costs of carbon capture

    SciTech Connect

    Al Juaied, Mohammed (Belfer Center for Science and International Affairs); Whitmore, Adam

    2009-07-01

    There is a growing interest in carbon capture and storage (CCS) as a means of reducing carbon dioxide (CO2) emissions. However there are substantial uncertainties about the costs of CCS. Costs for pre-combustion capture with compression (i.e. excluding costs of transport and storage and any revenue from EOR associated with storage) are examined in this discussion paper for First-of-a-Kind (FOAK) plant and for more mature technologies, or Nth-of-a-Kind plant (NOAK). For FOAK plant using solid fuels the levelised cost of electricity on a 2008 basis is approximately 10 cents/kWh higher with capture than for conventional plants (with a range of 8-12 cents/kWh). Costs of abatement are found typically to be approximately US$150/tCO2 avoided (with a range of US$120-180/tCO2 avoided). For NOAK plants the additional cost of electricity with capture is approximately 2-5 cents/kWh, with costs of the range of US$35-70/tCO2 avoided. Costs of abatement with carbon capture for other fuels and technologies are also estimated for NOAK plants. The costs of abatement are calculated with reference to conventional SCPC plant for both emissions and costs of electricity. Estimates for both FOAK and NOAK are mainly based on cost data from 2008, which was at the end of a period of sustained escalation in the costs of power generation plant and other large capital projects. There are now indications of costs falling from these levels. This may reduce the costs of abatement and costs presented here may be 'peak of the market' estimates. If general cost levels return, for example, to those prevailing in 2005 to 2006 (by which time significant cost escalation had already occurred from previous levels), then costs of capture and compression for FOAK plants are expected to be US$110/tCO2 avoided (with a range of US$90-135/tCO2 avoided). For NOAK plants costs are expected to be US$25-50/tCO2. Based on these considerations a likely representative range of costs of abatement from CCS excluding
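
    The two cost metrics quoted above are linked by simple arithmetic (the CO2 intensity figure below is an assumption for illustration):

      # Added electricity cost over avoided emissions gives abatement cost.
      added_cost_cents_per_kwh = 10.0  # FOAK capture premium from the record
      co2_avoided_t_per_mwh = 0.65     # assumed avoidance vs. reference SCPC
      usd_per_mwh = added_cost_cents_per_kwh * 10.0
      print(round(usd_per_mwh / co2_avoided_t_per_mwh))  # ~ $154/tCO2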

  5. Probability on a Budget.

    ERIC Educational Resources Information Center

    Ewbank, William A.; Ginther, John L.

    2002-01-01

    Describes how to use common dice numbered 1-6 for simple mathematical situations including probability. Presents a lesson using regular dice and specially marked dice to explore some of the concepts of probability. (KHR)

  6. Is quantum probability rational?

    PubMed

    Houston, Alasdair I; Wiesner, Karoline

    2013-06-01

    We concentrate on two aspects of the article by Pothos & Busemeyer (P&B): the relationship between classical and quantum probability and quantum probability as a basis for rational decisions. We argue that the mathematical relationship between classical and quantum probability is not quite what the authors claim. Furthermore, it might be premature to regard quantum probability as the best practical rational scheme for decision making.

  7. Racing To Understand Probability.

    ERIC Educational Resources Information Center

    Van Zoest, Laura R.; Walker, Rebecca K.

    1997-01-01

    Describes a series of lessons designed to supplement textbook instruction of probability by addressing the ideas of "equally likely,""not equally likely," and "fairness," as well as to introduce the difference between theoretical and experimental probability. Presents four lessons using The Wind Racer games to study probability. (ASK)

  8. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  9. Searching with probabilities

    SciTech Connect

    Palay, A.J.

    1985-01-01

    This book examines how probability distributions can be used as a knowledge representation technique. It presents a mechanism that can be used to guide a selective search algorithm to solve a variety of tactical chess problems. Topics covered include probabilities and searching, the B* algorithm, chess probabilities in practice, examples, results, and future work.

  10. Landslide Probability Assessment by the Derived Distributions Technique

    NASA Astrophysics Data System (ADS)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring along human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological, and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slips", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in the pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdfs for the mean intensity and duration of the storms. The Philip infiltration model
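
    A minimal numerical version of the derived-distributions idea (the failure model and all parameters are invented stand-ins for the paper's physically based model):

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000
      intensity = rng.exponential(10.0, n)  # storm mean intensity (mm/h)
      duration = rng.exponential(6.0, n)    # storm duration (h)

      def factor_of_safety(i, d, fos_dry=1.6, alpha=4e-4):
          """Toy deterministic model: FOS drops with infiltrated depth i*d."""
          return fos_dry - alpha * i * d

      # Propagate the rainfall distribution through the failure model to get
      # the FOS pdf, then read off the exceedance (failure) probability.
      fos = factor_of_safety(intensity, duration)
      print(f"P(FOS < 1 | storm) ~ {np.mean(fos < 1.0):.4f}")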

  11. In All Probability, Probability is not All

    ERIC Educational Resources Information Center

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.
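
    A two-line example of the distinction (numbers invented): two bets with identical win probability can carry very different expectations.

      p_win = 1e-7  # same chance of winning in both cases
      print(p_win * 5_000_000 - 1.0)  # $1 ticket, $5M prize: EV = -$0.50
      print(p_win * 2_000_000 - 1.0)  # $1 ticket, $2M prize: EV = -$0.80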

  12. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
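
    The bootstrap step common to these simulations is sketched below (not the calibration algorithm the record describes; the shapes and diet vector are assumed):

      import numpy as np

      rng = np.random.default_rng(5)

      def pseudo_predator(prey_sigs_by_type, diet, n_boot=30):
          """Mix bootstrap means of prey signatures into a pseudo-predator.

          prey_sigs_by_type: list of (n_i x k) arrays of fatty acid signatures;
          diet: mixing proportions summing to 1; n_boot: bootstrap sample size.
          """
          means = []
          for sigs in prey_sigs_by_type:
              idx = rng.integers(0, len(sigs), n_boot)  # resample with replacement
              means.append(sigs[idx].mean(axis=0))
          sig = sum(w * m for w, m in zip(diet, means))
          return sig / sig.sum()  # renormalize to a proper signature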

  13. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    SUMMARY Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  14. Probability of satellite collision

    NASA Technical Reports Server (NTRS)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.
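
    At its simplest, such a calculation reduces to a flux model (all numbers invented; the actual report parameterizes over orbits, altitude, and inclination):

      import math

      flux = 1e-6    # impacting objects per m^2 per year (assumed)
      area = 500.0   # collision cross-section of the station (m^2, assumed)
      years = 10.0   # mission duration
      p_collision = 1.0 - math.exp(-flux * area * years)
      print(f"P(collision over mission) ~ {p_collision:.3g}")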

  15. Factors influencing reporting and harvest probabilities in North American geese

    USGS Publications Warehouse

    Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F.; White, Gary C.; Caswell, D.F.

    2009-01-01

    We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI = 0.69-0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.
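
    The correction the authors describe rests on the identity that a direct recovery requires both harvest and reporting; a hedged one-line illustration, where only the 0.73 mean reporting probability comes from the abstract and the recovery rate is hypothetical:

```python
# recovery = harvest x reporting, so harvest = recovery / reporting.
recovery_prob = 0.05        # hypothetical direct recovery rate of standard bands
reporting_prob = 0.73       # mean reporting probability estimated in the study
harvest_prob = recovery_prob / reporting_prob
print(f"harvest probability = {harvest_prob:.3f}")   # ~0.068
```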

  16. Teenagers' Perceived and Actual Probabilities of Pregnancy.

    ERIC Educational Resources Information Center

    Namerow, Pearila Brickner; And Others

    1987-01-01

    Explored adolescent females' (N=425) actual and perceived probabilities of pregnancy. Subjects estimated their likelihood of becoming pregnant the last time they had intercourse, and indicated the dates of last intercourse and last menstrual period. Found that the distributions of perceived probability of pregnancy were nearly identical for both…

  17. The uncertainty in earthquake conditional probabilities

    USGS Publications Warehouse

    Savage, J.C.

    1992-01-01

    The Working Group on California Earthquake Probabilities (WGCEP) questioned the relevance of uncertainty intervals assigned to earthquake conditional probabilities on the basis that the uncertainty in the probability estimate seemed to be greater the smaller the intrinsic breadth of the recurrence-interval distribution. It is shown here that this paradox depends upon a faulty measure of uncertainty in the conditional probability and that with a proper measure of uncertainty no paradox exists. The assertion that the WGCEP probability assessment in 1988 correctly forecast the 1989 Loma Prieta earthquake is also challenged by showing that posterior probability of rupture inferred after the occurrence of the earthquake from the prior WGCEP probability distribution reverts to a nearly informationless distribution. -Author

  18. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  19. Integrated statistical modelling of spatial landslide probability

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e., the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km2 study area in southern Taiwan, using an inventory of 1399 landslides triggered by Typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
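
    Step (v) reduces to a simple per-pixel combination rule; a sketch with hypothetical rasters (the release, impact, and zonal probabilities would come from steps (ii)-(iv)):

```python
import numpy as np

def integrated_probability(p_release, p_impact, p_zonal_release):
    """Per-pixel integrated spatial landslide probability: the maximum of the
    release probability and the product of impact and zonal release probabilities."""
    return np.maximum(p_release, p_impact * p_zonal_release)

# Hypothetical 2x2 per-pixel rasters:
p_rel = np.array([[0.02, 0.10], [0.00, 0.05]])
p_imp = np.array([[0.60, 0.30], [0.90, 0.10]])
p_zon = np.array([[0.20, 0.20], [0.40, 0.40]])
print(integrated_probability(p_rel, p_imp, p_zon))
```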

  20. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, lacking only an oral cavity, was created and proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained. PMID:22558834

  1. Probability with Roulette

    ERIC Educational Resources Information Center

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.
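
    The expected-value computation the article illustrates can be done in a few lines; a sketch for a straight-up bet on an American 38-pocket wheel (the article's specific bets may differ):

```python
# Expected value of a $1 straight-up bet on an American (38-pocket) wheel.
p_win = 1 / 38
payout = 35                                    # winnings on a straight-up hit
ev = p_win * payout - (1 - p_win) * 1          # win the payout, else lose the $1 stake
print(f"expected value per $1 bet: {ev:.4f}")  # about -0.0526
```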

  2. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  3. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.

    1993-01-01

    Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  4. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
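
    Under assumptions like those named above, the collision probability reduces to integrating the relative-position density over the combined hard-body circle in the encounter plane; a numerical sketch assuming an uncorrelated (diagonal-covariance) Gaussian, with all values hypothetical:

```python
import numpy as np

def collision_probability(miss, sigma_x, sigma_y, radius, n=400):
    """Integrate a bivariate normal (centered on the nominal miss vector)
    over a disk of the combined hard-body radius."""
    x = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(x, x)
    inside = X**2 + Y**2 <= radius**2
    pdf = (np.exp(-0.5 * (((X - miss[0]) / sigma_x) ** 2
                          + ((Y - miss[1]) / sigma_y) ** 2))
           / (2 * np.pi * sigma_x * sigma_y))
    dA = (x[1] - x[0]) ** 2
    return float(np.sum(pdf[inside]) * dA)

# Hypothetical 50 m nominal miss, 30 m x 20 m uncertainty, 10 m hard-body radius:
print(collision_probability(miss=(50.0, 0.0), sigma_x=30.0, sigma_y=20.0, radius=10.0))
```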

  5. Keeping It Real: How Realistic Does Realistic Fiction for Children Need to Be?

    ERIC Educational Resources Information Center

    O'Connor, Barbara

    2010-01-01

    O'Connor, an author of realistic fiction for children, shares her attempts to strike a balance between carefree, uncensored, authentic, realistic writing and age-appropriate writing. Of course, complicating that balancing act is the fact that what seems age-appropriate to her might not seem so to everyone. O'Connor suggests that while it may be…

  6. Realistic Hot Water Draw Specification for Rating Solar Water Heaters: Preprint

    SciTech Connect

    Burch, J.

    2012-06-01

    In the United States, annual performance ratings for solar water heaters are simulated, using TMY weather and specified water draw. A more-realistic ratings draw is proposed that eliminates most bias by improving mains inlet temperature and by specifying realistic hot water use. This paper outlines the current and the proposed draws and estimates typical ratings changes from draw specification changes for typical systems in four cities.

  7. Model of lifetimes of the outer radiation belt electrons in a realistic magnetic field using realistic chorus wave parameters

    NASA Astrophysics Data System (ADS)

    Orlova, Ksenia; Shprits, Yuri

    2014-02-01

    The outer radiation belt electrons in the inner magnetosphere show high variability during geomagnetically disturbed conditions. Quasi-linear diffusion theory provides both a framework for global prediction of particle loss at different energies and an understanding of the dynamics of different particle populations. It has been recently shown that the pitch angle scattering of electrons due to wave-particle interaction with chorus waves modeled in a realistic magnetic field may be significantly different from that estimated in a dipole model. In this work, we present the lifetimes of 1 keV-2 MeV electrons computed in the Tsyganenko 89 magnetic field model for the night, dawn, prenoon, and postnoon magnetic local time (MLT) sectors for different levels of geomagnetic activity and distances. The lifetimes in the realistic field are also compared to those computed in the dipole model. We develop realistic lower-band and upper-band chorus wave models for each MLT sector using the recent statistical studies of wave amplitude, wave normal angle, and wave spectral density distributions as functions of magnetic latitude, distance, and Kp index. The increase of plasma trough density with increasing latitude is also included. The electron lifetimes obtained in the Tsyganenko 89 field are parameterized and can be used in 2-D/3-D/4-D convection and particle-tracing codes.

  8. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  9. Faculty Development for Educators: A Realist Evaluation

    ERIC Educational Resources Information Center

    Sorinola, Olanrewaju O.; Thistlethwaite, Jill; Davies, David; Peile, Ed

    2015-01-01

    The effectiveness of faculty development (FD) activities for educators in UK medical schools remains underexplored. This study used a realist approach to evaluate FD and to test the hypothesis that motivation, engagement and perception are key mechanisms of effective FD activities. The authors observed and interviewed 33 course participants at one…

  10. Improving Intuition Skills with Realistic Mathematics Education

    ERIC Educational Resources Information Center

    Hirza, Bonita; Kusumah, Yaya S.; Darhim; Zulkardi

    2014-01-01

    The intention of the present study was to see the improvement of students' intuitive skills. This improvement was seen by comparing the Realistic Mathematics Education (RME)-based instruction with the conventional mathematics instruction. The subject of this study was 164 fifth graders of elementary school in Palembang. The design of this study…

  11. Making a Literature Methods Course "Realistic."

    ERIC Educational Resources Information Center

    Lewis, William J.

    Recognizing that it can be a challenge to make an undergraduate literature methods course realistic, a methods instructor at a Michigan university has developed three major and several minor activities that have proven effective in preparing pre-student teachers for the "real world" of teaching and, at the same time, have been challenging and…

  12. Satellite Maps Deliver More Realistic Gaming

    NASA Technical Reports Server (NTRS)

    2013-01-01

    When Redwood City, California-based Electronic Arts (EA) decided to make SSX, its latest snowboarding video game, it faced challenges in creating realistic-looking mountains. The solution was NASA's ASTER Global Digital Elevation Map, made available by the Jet Propulsion Laboratory, which EA used to create 28 real-life mountains from 9 different ranges for its award-winning game.

  13. Wiskobas and Freudenthal: Realistic Mathematics Education.

    ERIC Educational Resources Information Center

    Treffers, A.

    1993-01-01

    Freudenthal was the founder of realistic mathematics education, in which reality serves as a source of applications and learning. Takes a newspaper article about reproducing a Van Gogh painting using plants in a field to exemplify a rich context problem containing elements of all areas of elementary school mathematics. (MDH)

  14. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.
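
    A sketch of the kind of experiment the article advocates, comparing an empirical estimate with the theoretical value (the dice setting is illustrative):

```python
import random

rolls = [random.randint(1, 6) for _ in range(10_000)]
print("empirical P(rolling a six):", rolls.count(6) / len(rolls))
print("theoretical P(rolling a six):", 1 / 6)
```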

  15. Using Remote Sensing to Understand the Joint Probability of Extreme Rainfall and Flood Response

    NASA Astrophysics Data System (ADS)

    Wright, D. B.; Mantilla, R.; Peters-Lidard, C. D.

    2015-12-01

    Floods are the products of complex interactions between the highly variable spacetime structure of extreme rainfall and land surface and drainage network features at various scales. Precise description of these interactions has proven elusive, mainly due to the lack of sufficient spacetime rainfall information and relatively short and sparse observational records. Rainfall remote-sensing data archives are now reaching sufficient length to examine these interactions in greater detail. Long-standing precipitation-based flood hazard estimation practices such as design storms and Probable Maximum Precipitation rely on simplified assumptions to describe the interactions between extreme rainfall and flood response. In this study, the validity of these assumptions is explored using RainyDay, a probabilistic stochastic storm transposition framework developed at NASA's Goddard Space Flight Center for generating large numbers of rainfall "scenarios" using rainfall remote sensing data, each with realistic probability, intensity, and spacetime structure. RainyDay is coupled with NCEP Stage IV multisensor precipitation data and the Iowa Flood Center Model, an uncalibrated multiscale distributed hydrologic modeling platform. We study the relationship between simulated rainfall and peak discharge probability and intensity for a wide range of exceedance probabilities and for a number of nested subwatersheds of Turkey River in Northeastern Iowa, ranging in drainage area from 10 km2 to 4300 km2. The results demonstrate some interesting implications for the relationship between Probable Maximum Precipitation and the Probable Maximum Flood at a range of basin scales and highlight possible deficiencies in the standard approaches to compute these quantities. Satellite-based precipitation estimates with global coverage allow the extension of such understanding to data-poor regions.

  16. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background: Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and shortcomings of these methods remain an area of debate. Methodology/Principal Findings: We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance: The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
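
    For orientation, the independent double-observer logic is closely related to the Lincoln-Petersen estimator; a sketch with hypothetical counts (the paper's DO protocol and simulation design are richer than this):

```python
# Hypothetical independent double-observer counts:
x1, x2, x_both = 120, 105, 84    # birds seen by observer A, by observer B, by both

p_a = x_both / x2                # A's detection probability (among birds B found)
p_b = x_both / x1                # B's detection probability (among birds A found)
n_hat = x1 * x2 / x_both         # Lincoln-Petersen population size estimate
print(p_a, p_b, round(n_hat))    # 0.8 0.7 150
```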

  17. Tool for Generating Realistic Residential Hot Water Event Schedules: Preprint

    SciTech Connect

    Hendron, B.; Burch, J.; Barker, G.

    2010-08-01

    The installed energy savings for advanced residential hot water systems can depend greatly on detailed occupant use patterns. Quantifying these patterns is essential for analyzing measures such as tankless water heaters, solar hot water systems with demand-side heat exchangers, distribution system improvements, and recirculation loops. This paper describes the development of an advanced spreadsheet tool that can generate a series of year-long hot water event schedules consistent with realistic probability distributions of start time, duration and flow rate variability, clustering, fixture assignment, vacation periods, and seasonality. This paper also presents the application of the hot water event schedules in the context of an integral-collector-storage solar water heating system in a moderate climate.
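
    A toy sketch of event-schedule generation in the spirit described above: each event draws a start time, duration, and flow rate from fitted distributions. The one-shower-per-day scenario and all distribution parameters are hypothetical, not the tool's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_shower_events(n_days):
    """Yield (day, start_hour, duration_min, flow_gpm) for one shower per day."""
    for day in range(n_days):
        start = rng.normal(7.0, 1.0) % 24            # morning-peaked start time (h)
        duration = max(rng.normal(8.0, 3.0), 1.0)    # duration (min)
        flow = max(rng.normal(2.0, 0.4), 0.5)        # flow rate (gal/min)
        yield day, round(start, 2), round(duration, 1), round(flow, 2)

for event in draw_shower_events(3):
    print(event)
```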

  18. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  19. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article, part of a discussion on Monte Carlo methods, outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
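
    The article works in Visual Basic; the same idea in Python, using the identity that the integral of f over [a, b] equals (b - a) times the expectation of f(U) for U uniform on [a, b]:

```python
import random

def mc_integral(f, a, b, n=100_000):
    """Estimate the integral of f on [a, b] as (b - a) * E[f(U)], U ~ Uniform(a, b)."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integral(lambda x: x * x, 0.0, 1.0))   # converges to 1/3
```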

  20. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
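
    The article ties these ideas together in Excel; a numpy rendering of the same chain (a binomial as a sum of Bernoullis, its mean np and variance np(1-p), and the CLT normal approximation), with illustrative parameters:

```python
import numpy as np

n, p, trials = 30, 0.4, 50_000
rng = np.random.default_rng(0)
sums = rng.binomial(n, p, size=trials)         # each draw is a sum of n Bernoulli(p)
print(sums.mean(), n * p)                      # sample mean vs. np
print(sums.var(), n * p * (1 - p))             # sample variance vs. np(1-p)

# By the central limit theorem the standardized sum is approximately N(0, 1):
z = (sums - n * p) / np.sqrt(n * p * (1 - p))
print(np.mean(np.abs(z) < 1.96))               # close to 0.95
```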

  1. On Probability Domains

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex $S_n$ of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For $n=1$ we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For $n=2$ we get IF-events, i.e., pairs $(\mu, \nu)$ of fuzzy sets $\mu, \nu \in [0,1]^X$ such that $\mu(x) + \nu(x) \le 1$ for all $x \in X$, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where $(\mu_1, \nu_1) \le (\mu_2, \nu_2)$ whenever $\mu_1 \le \mu_2$ and $\nu_2 \le \nu_1$) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by $I = [0,1]$ (objects of ID are subobjects of powers $I^X$), has nice properties, and basic probabilistic notions and constructions are categorical; for example, states are morphisms. We introduce the category $S_nD$ cogenerated by $S_n = \{(x_1, x_2, \ldots, x_n) \in I^n : \sum_{i=1}^{n} x_i \le 1\}$, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within $S_nD$.

  2. Understanding Y haplotype matching probability.

    PubMed

    Brenner, Charles H

    2014-01-01

    The Y haplotype population-genetic terrain is better explored from a fresh perspective rather than by analogy with the more familiar autosomal ideas. For haplotype matching probabilities, as opposed to autosomal matching probabilities, explicit attention to modelling - such as how evolution got us where we are - is much more important, while consideration of population frequency is much less so. This paper explores, extends, and explains some of the concepts of "Fundamental problem of forensic mathematics - the evidential strength of a rare haplotype match". That earlier paper presented and validated a "kappa method" formula for the evidential strength when a suspect matches a previously unseen haplotype (such as a Y-haplotype) at the crime scene. Mathematical implications of the kappa method are intuitive and reasonable. Suspicions to the contrary rest on elementary errors. Critical to deriving the kappa method or any sensible evidential calculation is understanding that thinking about haplotype population frequency is a red herring; the pivotal question is one of matching probability. But confusion between the two is unfortunately institutionalized in much of the forensic world. Examples make clear why (matching) probability is not (population) frequency and why uncertainty intervals on matching probabilities are merely confused thinking. Forensic matching calculations should be based on a model, on stipulated premises. The model inevitably only approximates reality, and any error in the results comes only from error in the model, the inexactness of the approximation. Sampling variation does not measure that inexactness and hence is not helpful in explaining evidence and is in fact an impediment. Alternative haplotype matching probability approaches that various authors have considered are reviewed. Some are based on no model and cannot be taken seriously. For the others, some evaluation of the models is discussed. Recent evidence supports the adequacy of

  3. Considerations for realistic ECCS evaluation methodology for LWRs

    SciTech Connect

    Rohatgi, U.S.; Saha, P.; Chexal, V.K.

    1985-01-01

    This paper identifies the various phenomena which govern the course of large and small break LOCAs in LWRs and affect key parameters such as Peak Clad Temperature (PCT) and the timing of the end of blowdown, the beginning of reflood, the PCT, and complete quench. A review of the best-estimate models and correlations for these phenomena in the current literature has been presented. Finally, a set of models is recommended which may be incorporated in a present best-estimate code such as TRAC or RELAP5 in order to develop a realistic ECCS evaluation methodology for future LWRs; these models have also been compared with the requirements of the current ECCS evaluation methodology as outlined in Appendix K of 10CFR50. 58 refs.

  4. Spectral tunability of realistic plasmonic nanoantennas

    SciTech Connect

    Portela, Alejandro; Matsui, Hiroaki; Tabata, Hitoshi; Yano, Takaaki; Hayashi, Tomohiro; Hara, Masahiko; Santschi, Christian; Martin, Olivier J. F.

    2014-09-01

    Single nanoantenna spectroscopy was carried out on realistic dipole nanoantennas with various arm lengths and gap sizes fabricated by electron-beam lithography. A significant difference in resonance wavelength between realistic and ideal nanoantennas was found by comparing their spectral response. Consequently, the spectral tunability (96 nm) of the structures was significantly lower than that of simulated ideal nanoantennas. These observations, attributed to the nanofabrication process, are related to imperfections in the geometry, added metal adhesion layer, and shape modifications, which are analyzed in this work. Our results provide important information for the design of dipole nanoantennas clarifying the role of the structural modifications on the resonance spectra, as supported by calculations.

  5. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  6. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  7. Cathodic disbonding of pipeline coatings under realistic conditions

    NASA Astrophysics Data System (ADS)

    Trautman, Brenda Lee

    1998-09-01

    accelerate cathodic disbonding tests and is more representative of conditions in soils along pipelines. Wet/dry cycling showed no measurable effect on the extent of disbonding. Temperature, however, was determined to be a significant factor. The effect of initial electrolyte composition was not certain when comparing NaCl and different soil extract solutions. Tests under realistic conditions generally exhibited larger scatter than standard tests, probably due to the added complexity caused by calcareous deposit formations and concurrent alteration of the electrolyte with the use of soil extract solutions.

  8. The spline probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam

    2012-06-01

    The Probability Hypothesis Density Filter (PHD) is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles, but is restricted to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-Spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models, and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.
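
    A one-dimensional illustration of the central idea, replacing weighted samples with a continuous spline intensity whose integral recovers the expected number of targets. The scenario, bins, and knot placement are hypothetical, and this is only the density-representation step, not the full S-PHD filter recursion:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(1)
# Weighted particles approximating a PHD with two well-separated targets:
particles = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 0.7, 5000)])
weights = np.full(particles.size, 2.0 / particles.size)   # total weight = expected count

# Bin the weighted particles into an intensity (density per unit state):
edges = np.linspace(-6, 7, 80)
hist, _ = np.histogram(particles, bins=edges, weights=weights)
centers = 0.5 * (edges[:-1] + edges[1:])
intensity = hist / np.diff(edges)

# Fit a cubic least-squares B-spline over the region of significant mass:
k = 3
t = np.r_[[centers[0]] * (k + 1), np.linspace(-5, 6, 12), [centers[-1]] * (k + 1)]
spline = make_lsq_spline(centers, intensity, t, k=k)
print(spline.integrate(centers[0], centers[-1]))   # approximately 2 expected targets
```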

  9. Fractal probability laws.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2008-06-01

    We explore six classes of fractal probability laws defined on the positive half-line: Weibull, Fréchet, Lévy, hyper Pareto, hyper beta, and hyper shot noise. Each of these classes admits a unique statistical power-law structure, and is uniquely associated with a certain operation of renormalization. All six classes turn out to be one-dimensional projections of underlying Poisson processes which, in turn, are the unique fixed points of Poissonian renormalizations. The first three classes correspond to linear Poissonian renormalizations and are intimately related to extreme value theory (Weibull, Fréchet) and to the central limit theorem (Lévy). The other three classes correspond to nonlinear Poissonian renormalizations. Pareto's law--commonly perceived as the "universal fractal probability distribution"--is merely a special case of the hyper Pareto class.

  10. Waste Package Misload Probability

    SciTech Connect

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to estimate the probability of occurrence for fuel assembly (FA) misloads (i.e., FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence will be calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
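
    The core arithmetic is a count-based rate with a confidence bound; a sketch with hypothetical counts (not the Framatome ANP data), using an exact Clopper-Pearson upper limit:

```python
from scipy.stats import beta

events, movements = 4, 2_000_000   # hypothetical misload events and FA movements
p_hat = events / movements
# Clopper-Pearson 95% upper confidence bound on the misload probability:
p_upper = beta.ppf(0.975, events + 1, movements - events)
print(f"point estimate {p_hat:.2e}, 95% upper bound {p_upper:.2e}")
```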

  11. Probability for Weather and Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather forecasting resembles interpolation in state space, whereas climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of

  12. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these "site occupancy" models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
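
    For reference, the simplest member of this model family (constant occupancy psi and constant detection p, i.e., no heterogeneity) has the zero-inflated binomial likelihood that the article generalizes; a sketch with hypothetical detection histories:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def neg_log_lik(params, y, J):
    """L_i = psi * Binom(y_i; J, p) + (1 - psi) * 1{y_i = 0}."""
    psi, p = 1 / (1 + np.exp(-params))      # logit scale -> probabilities
    lik = psi * binom.pmf(y, J, p) + (1 - psi) * (y == 0)
    return -np.sum(np.log(lik))

# Hypothetical data: detections out of J = 5 visits at each of ten sites.
y = np.array([0, 0, 1, 3, 0, 2, 0, 0, 1, 0])
res = minimize(neg_log_lik, x0=np.zeros(2), args=(y, 5))
print(1 / (1 + np.exp(-res.x)))             # (psi_hat, p_hat)
```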

  13. Probability of relativistic electron trapping by parallel and oblique whistler-mode waves in Earth's radiation belts

    SciTech Connect

    Artemyev, A. V. Vasiliev, A. A.; Neishtadt, A. I.; Mourenas, D.; Krasnoselskikh, V.

    2015-11-15

    We investigate electron trapping by high-amplitude whistler-mode waves propagating at small as well as large angles relative to geomagnetic field lines. The inhomogeneity of the background magnetic field can result in an effective acceleration of trapped particles. Here, we derive useful analytical expressions for the probability of electron trapping by both parallel and oblique waves, paving the way for a full analytical description of trapping effects on the particle distribution. Numerical integrations of particle trajectories allow us to demonstrate the accuracy of the derived analytical estimates. For realistic wave amplitudes, the levels of probabilities of trapping are generally comparable for oblique and parallel waves, but they turn out to be most efficient over complementary energy ranges. Trapping acceleration of <100 keV electrons is mainly provided by oblique waves, while parallel waves are responsible for the trapping acceleration of >100 keV electrons.

  14. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive, and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination leading to realistic fractal-like rubble/fragments formation. The aforementioned parameters are used as variables of probabilistic models of cracks/shards formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potentials in military and civil M&S applications such as training and mission planning.

  15. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of $2\times(27+\overline{27})+351'+\overline{351'}$ representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group down to that of the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  16. Quantum states prepared by realistic entanglement swapping

    SciTech Connect

    Scherer, Artur; Howard, Regina B.; Sanders, Barry C.; Tittel, Wolfgang

    2009-12-15

    Entanglement swapping between photon pairs is a fundamental building block in schemes using quantum relays or quantum repeaters to overcome the range limits of long-distance quantum key distribution. We develop a closed-form solution for the actual quantum states prepared by realistic entanglement swapping, which takes into account experimental deficiencies due to inefficient detectors, detector dark counts, and multiphoton-pair contributions of parametric down-conversion sources. We investigate how the entanglement present in the final state of the remaining modes is affected by the real-world imperfections. To test the predictions of our theory, comparison with previously published experimental entanglement swapping is provided.

  17. Adiabatic Hyperspherical Analysis of Realistic Nuclear Potentials

    NASA Astrophysics Data System (ADS)

    Daily, K. M.; Kievsky, Alejandro; Greene, Chris H.

    2015-12-01

    Using the hyperspherical adiabatic method with the realistic nuclear potentials Argonne V14, Argonne V18, and Argonne V18 with the Urbana IX three-body potential, we calculate the adiabatic potentials and the triton bound state energies. We find that a discrete variable representation with the slow variable discretization method along the hyperradial degree of freedom results in energies consistent with the literature. However, using a Laguerre basis results in missing energy, even when extrapolated to an infinite number of basis functions and channels. We do not include the isospin T = 3/2 contribution in our analysis.

  18. Oil spill contamination probability in the southeastern Levantine basin.

    PubMed

    Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam

    2015-02-15

    Recent gas discoveries in the eastern Mediterranean Sea have led to multiple operations of substantial economic interest, which carry a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed.

  19. The realist interpretation of the atmosphere

    NASA Astrophysics Data System (ADS)

    Anduaga, Aitor

    The discovery of a clearly stratified structure of layers in the upper atmosphere has been--and still is--invoked too often as the great paradigm of atmospheric sciences in the 20th century. Behind this vision lies an emphasis--or better, an overstatement--on the reality of the concept of layer. One of the few historians of physics who have not ignored this phenomenon of reification, C. Stewart Gillmor, attributed it to somewhat ambiguous cultural (or perhaps, more generally, contextual) factors, though he never specified their nature. In this essay, I aim to demonstrate that, in the interwar years, most radiophysicists and some atomic physicists, for reasons principally related to extrinsic influences and to a lesser extent to internal developments of their own science, fervidly embraced a realist interpretation of the ionosphere. We will focus on the historical circumstances in which a specific social and commercial environment came to exert a strong influence on upper atmospheric physicists, and in which realism as a product validating the "truth" of certain practices and beliefs arose. This realist commitment I attribute to the mutual reinforcement of atmospheric physics and commercial and imperial interests in long-distance communications.

  1. Epidemiology and causation: a realist view.

    PubMed Central

    Renton, A

    1994-01-01

    In this paper the controversy over how to decide whether associations between factors and diseases are causal is placed within a description of the public health and scientific relevance of epidemiology. It is argued that the rise in popularity of the Popperian view of science, together with a perception of the aims of epidemiology as being to identify appropriate public health interventions, have focussed this debate on unresolved questions of inferential logic, leaving largely unanalysed the notions of causation and of disease at the ontological level. A realist ontology of causation of disease and pathogenesis is constructed within the framework of "scientific materialism", and is shown to provide a coherent basis from which to decide causes and to deal with problems of confounding and interaction in epidemiological research. It is argued that a realist analysis identifies a richer role for epidemiology as an integral part of an ontologically unified medical science. It is this unified medical science as a whole rather than epidemiological observation or experiment which decides causes and, in turn, provides a key element to the foundations of rational public health decision making. PMID:8138775

  2. Realistic Ground Motion Scenarios: Methodological Approach

    SciTech Connect

    Nunziata, C.; Peresan, A.; Romanelli, F.; Vaccari, F.; Zuccolo, E.; Panza, G. F.

    2008-07-08

    The definition of realistic seismic input can be obtained from the computation of a wide set of time histories, corresponding to possible seismotectonic scenarios. The propagation of the waves in the bedrock from the source to the local laterally varying structure is computed with the modal summation technique, while in the laterally heterogeneous structure the finite difference method is used. The definition of shear wave velocities within the soil cover is obtained from the non-linear inversion of the dispersion curve of group velocities of Rayleigh waves, artificially or naturally generated. Information about the possible focal mechanisms of the sources can be obtained from historical seismicity, based on earthquake catalogues and inversion of isoseismal maps. In addition, morphostructural zonation and pattern recognition of seismogenic nodes is useful to identify areas prone to strong earthquakes, based on the combined analysis of topographic, tectonic, geological maps and satellite photos. We show that the quantitative knowledge of regional geological structures and the computation of realistic ground motion can be a powerful tool for a preventive definition of the seismic hazard in Italy. Then, the formulation of reliable building codes, based on the evaluation of the main potential earthquakes, will have a great impact on the effective reduction of the seismic vulnerability of Italian urban areas, validating or improving the national building code.

  3. Realistic Radio Communications in Pilot Simulator Training

    NASA Technical Reports Server (NTRS)

    Burki-Cohen, Judith; Kendra, Andrew J.; Kanki, Barbara G.; Lee, Alfred T.

    2000-01-01

    Simulators used for total training and evaluation of airline pilots must satisfy stringent criteria in order to assure their adequacy for training and checking maneuvers. Air traffic control and company radio communications simulation, however, may still be left to role-play by the already taxed instructor/evaluators in spite of their central importance in every aspect of the flight environment. The underlying premise of this research is that providing a realistic radio communications environment would increase safety by enhancing pilot training and evaluation. This report summarizes the first-year efforts of assessing the requirement and feasibility of simulating radio communications automatically. A review of the training and crew resource/task management literature showed both practical and theoretical support for the need for realistic radio communications simulation. A survey of 29 instructor/evaluators from 14 airlines revealed that radio communications are mainly role-played by the instructor/evaluators. This increases instructor/evaluators' own workload while unrealistically lowering pilot communications load compared to actual operations, with a concomitant loss in training/evaluation effectiveness. A technology review searching for an automated means of providing radio communications to and from aircraft with minimal human effort showed that while promising, the technology is still immature. Further research and the need for establishing a proof-of-concept are also discussed.

  4. Emptiness Formation Probability

    NASA Astrophysics Data System (ADS)

    Crawford, Nicholas; Ng, Stephen; Starr, Shannon

    2016-08-01

    We present rigorous upper and lower bounds on the emptiness formation probability for the ground state of a spin-1/2 Heisenberg XXZ quantum spin system. For a d-dimensional system we find a rate of decay of the order $\exp(-c L^{d+1})$, where L is the sidelength of the box in which we ask for the emptiness formation event to occur. In the $d=1$ case this confirms previous predictions made in the integrable systems community, though our bounds do not achieve the precision predicted by Bethe ansatz calculations. On the other hand, our bounds in the case $d \ge 2$ are new. The main tools we use are reflection positivity and a rigorous path integral expansion, which is a variation on those previously introduced by Toth, Aizenman-Nachtergaele and Ueltschi.

  5. Climate Sensitivity to Realistic Solar Heating of Snow and Ice

    NASA Astrophysics Data System (ADS)

    Flanner, M.; Zender, C. S.

    2004-12-01

    Snow and ice-covered surfaces are highly reflective and play an integral role in the planetary radiation budget. However, GCMs typically prescribe snow reflection and absorption based on minimal knowledge of snow physical characteristics. We performed climate sensitivity simulations with the NCAR CCSM including a new physically-based multi-layer snow radiative transfer model. The model predicts the effects of vertically resolved heating, absorbing aerosol, and snowpack transparency on snowpack evolution and climate. These processes significantly reduce the model's near-infrared albedo bias over deep snowpacks. While the current CCSM implementation prescribes all solar radiative absorption to occur in the top 2 cm of snow, we estimate that about 65% occurs beneath this level. Accounting for the vertical distribution of snowpack heating and more realistic reflectance significantly alters snowpack depth, surface albedo, and surface air temperature over Northern Hemisphere regions. Implications for the strength of the ice-albedo feedback will be discussed.

  6. Inclusion probability with dropout: an operational formula.

    PubMed

    Milot, E; Courteau, J; Crispino, F; Mailly, F

    2015-05-01

    In forensic genetics, a mixture of two or more contributors to a DNA profile is often interpreted using the inclusion probabilities theory. In this paper, we present a general formula for estimating the probability of inclusion (PI, also known as the RMNE probability) from a subset of visible alleles when dropouts are possible. This one-locus formula can easily be extended to multiple loci using the cumulative probability of inclusion. We show that an exact formulation requires fixing the number of contributors, and hence slightly modifies the classic interpretation of the PI. We discuss the implications of our results for the enduring debate over the use of PI versus likelihood ratio approaches within the context of low-template amplifications.
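
    The dropout-adjusted formula itself is not reproduced in this summary, so the sketch below shows only the classic no-dropout case for orientation: at a single locus, the probability that a random person is "not excluded" is the squared sum of the visible-allele frequencies, and per-locus values combine multiplicatively across independent loci. Allele frequencies and loci are invented for illustration.

        def pi_one_locus(visible_allele_freqs):
            """P(random person carries only alleles visible in the mixture)."""
            s = sum(visible_allele_freqs)
            return s * s  # both alleles of a random genotype must be visible

        def pi_multi_locus(loci):
            """Cumulative probability of inclusion across independent loci."""
            p = 1.0
            for freqs in loci:
                p *= pi_one_locus(freqs)
            return p

        # Hypothetical three-locus mixture profile (illustrative frequencies).
        loci = [[0.12, 0.08, 0.20], [0.05, 0.11], [0.30, 0.07, 0.02]]
        print(pi_multi_locus(loci))  # ~6.2e-4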

  7. Survival probability in patients with liver trauma.

    PubMed

    Buci, Skender; Kukeli, Agim

    2016-08-01

    Purpose - The purpose of this paper is to assess the survival probability among patients with liver trauma injury using the anatomical and physiological scores of conditions, characteristics and treatment modes. Design/methodology/approach - A logistic model is used to estimate 173 patients' survival probability. Data are taken from patient records. Only emergency room patients admitted to University Hospital of Trauma (former Military Hospital) in Tirana are included. Data are recorded anonymously, preserving the patients' privacy. Findings - When correctly predicted, the logistic models show that survival probability varies from 70.5 percent up to 95.4 percent. The degree of trauma injury, trauma with liver and other organs, total days the patient was hospitalized, and treatment method (conservative vs intervention) are statistically important in explaining survival probability. Practical implications - The study gives patients, their relatives and physicians ample and sound information they can use to predict survival chances, the best treatment and resource management. Originality/value - This study, which has not been done previously, explores survival probability, success probability for conservative and non-conservative treatment, and success probability for single vs multiple injuries from liver trauma. PMID:27477933

  8. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve at the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
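
    In outline, the multilevel estimator writes the failure probability P = E[I_L] as a telescoping sum E[I_0] + sum over l of E[I_l - I_{l-1}], where I_l is the failure indicator evaluated on a level-l discretization; most samples are drawn at cheap coarse levels and only a few corrections at fine ones. The sketch below is schematic: the flow solver is replaced by a placeholder whose discretization bias shrinks with level, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(7)

        def indicator(x, level):
            """Placeholder level-l approximation of the failure indicator:
            a real code would run the two-phase flow solver at mesh level l;
            here the discretization bias simply shrinks as O(2**-level)."""
            qoi = x + 0.3 / 2**level
            return float(qoi < 0.0)  # 1 if the QoI is below the critical value

        samples_per_level = [40_000, 10_000, 2_500, 625]  # fewer on costly levels

        # Telescoping sum: E[I_0] + sum over l of E[I_l - I_{l-1}].
        p_fail = 0.0
        for level, n in enumerate(samples_per_level):
            draws = rng.normal(0.0, 1.0, n)  # uncertain-permeability proxy
            if level == 0:
                p_fail += np.mean([indicator(x, 0) for x in draws])
            else:
                p_fail += np.mean([indicator(x, level) - indicator(x, level - 1)
                                   for x in draws])

        print(p_fail)  # estimate of the failure probability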

  9. Simulation of realistic synthetic reflection sequences

    SciTech Connect

    Walden, A.T.

    1993-04-01

    It is useful to be able to calculate synthetic primary reflection sequences from which to generate synthetic seismic sections for testing new processing algorithms. However, these synthetic reflection sequences should closely match the real properties found in recent studies. Using the ARMA(1,1) model resulting from such studies to describe the correlation (or spectral) structure of the sequences, and by matching moments up to fourth order (since the sequences are non-Gaussian in practice), realistic sequences can be generated. A simple scheme is provided which also eliminates the need to throw away large numbers of simulated values at start-up. The procedure is illustrated on three real sequences and is seen to reproduce all the important features.

  10. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    Booming electronic books (e-books), as an extension of the paper book, are popular with readers. Recently, much effort has been put into realistic page-turning simulation for e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between the time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page onto cylinders with different radii and positions. Compared to previous approaches, our method imitates various effects efficiently and obtains a more natural animation of the turning page.

  11. Helioseismology of a Realistic Magnetoconvective Sunspot Simulation

    NASA Technical Reports Server (NTRS)

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  12. Two Realistic Beagle Models for Dose Assessment.

    PubMed

    Stabin, Michael G; Kost, Susan D; Segars, William P; Guilmette, Raymond A

    2015-09-01

    Previously, the authors developed a series of eight realistic digital mouse and rat whole body phantoms based on NURBS technology to facilitate internal and external dose calculations in various species of rodents. In this paper, two body phantoms of adult beagles are described based on voxel images converted to NURBS models. Specific absorbed fractions for activity in 24 organs are presented in these models. CT images were acquired of an adult male and female beagle. The images were segmented, and the organs and structures were modeled using NURBS surfaces and polygon meshes. Each model was voxelized at a resolution of 0.75 × 0.75 × 2 mm. The voxel versions were implemented in GEANT4 radiation transport codes to calculate specific absorbed fractions (SAFs) using internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in both models. The SAFs for photons and electrons were compatible with results observed by others. Absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of these small-sized organs, and measurable cross irradiation was observed for many organ pairs for high-energy electrons (as would be emitted by nuclides like 32P, 90Y, or 188Re). The SAFs were used with standardized decay data to develop dose factors (DFs) for radiation dose calculations using the RADAR Method. These two new realistic models of male and female beagle dogs will be useful in radiation dosimetry calculations for external or internal simulated sources. PMID:26222214

  13. Demonstrating a Realistic IP Mission Prototype

    NASA Technical Reports Server (NTRS)

    Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith

    2003-01-01

    Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.

  14. People versus probability

    SciTech Connect

    Tonn, B.; Goeltz, R.

    1988-08-01

    This research explores how subjects combine estimates of uncertainty in probabilistic and evidential contexts. Two major computer programs written in Common Lisp asked subjects questions about the likelihoods and conjunctions of independent events. The results suggest that in the probabilistic context the best model to describe individual decision making is not the product rule but a minimum rule, and models varied as the magnitude of the likelihoods varied. In the evidential context, subjects appeared to use a maximum rule, although some evidence supports the use of the certainty factor rule. Subjects had difficulty in combining contradictory evidence. 26 refs., 8 tabs.

  15. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a distribution of collision probability values for the conjunction event.
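
    A toy version of the core computation, with all state data invented: sample encounter-plane miss vectors from the relative-state uncertainty, count the fraction falling inside the combined hard-body radius, and repeat with the covariances expected at future tracking updates to "forecast" how the computed probability may evolve as the event approaches.

        import numpy as np

        rng = np.random.default_rng(1)
        HBR = 20.0  # combined hard-body radius, m (illustrative)

        def mc_collision_probability(mean_miss, cov, n=200_000):
            """Fraction of sampled encounter-plane miss vectors that fall
            inside the hard-body circle."""
            pts = rng.multivariate_normal(mean_miss, cov, size=n)
            return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < HBR)

        mean_miss = [120.0, 40.0]           # m, illustrative
        # Tracking updates shrink the uncertainty as the event approaches.
        for scale in (4.0, 2.0, 1.0, 0.5):
            cov = np.diag([200.0**2, 80.0**2]) * scale
            print(scale, mc_collision_probability(mean_miss, cov))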

  16. On the probability of exceeding allowable leak rates through degraded steam generator tubes

    SciTech Connect

    Cizelj, L.; Sorsek, I.; Riesch-Oppermann, H.

    1997-02-01

    This paper discusses possible ways of predicting the behavior of the total leak rate through damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of the alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: estimate the probability that the sum of all individual leak rates through degraded tubes exceeds the predefined acceptable value. The probabilistic approach aims at a reliable and computationally tractable estimate of the failure probability. A closed-form solution is given for the special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of the approximate methods with closed-form results; SORM in particular shows acceptable agreement. The second numerical example considers a realistic case of the NPP in Krsko, Slovenia.
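
    The special case mentioned above is easy to evaluate directly: the sum of n independent exponential leak rates with mean mu is Gamma(n, mu), so the probability of exceeding the allowable total leak rate is the Gamma survival function. The numbers below are illustrative only.

        from scipy.stats import gamma

        n_tubes = 50        # degraded tubes left in service (illustrative)
        mean_leak = 0.2     # mean individual leak rate, gpm (illustrative)
        allowable = 15.0    # acceptable total leak rate, gpm (illustrative)

        # Sum of n i.i.d. Exp(mean mu) leak rates ~ Gamma(shape=n, scale=mu).
        p_exceed = gamma.sf(allowable, a=n_tubes, scale=mean_leak)
        print(p_exceed)     # P(total leak rate > allowable)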

  17. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
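
    A toy illustration of the idea, with an invented failure condition: when analysis shows the failure set lies inside a bounding set B whose probability is known analytically, sampling can be confined to B and the result multiplied by P(B), so no samples are wasted where failure is impossible.

        import numpy as np

        rng = np.random.default_rng(0)

        def failed(x, y):
            """Hypothetical failure condition on two uniform(0,1) parameters."""
            return x + 2.0 * y > 2.7

        # Failure requires x > 0.7 (since 2y <= 2), so B = {x > 0.7}, P(B) = 0.3.
        p_B = 0.3
        n = 100_000
        x = rng.uniform(0.7, 1.0, n)   # sample conditionally on B
        y = rng.uniform(0.0, 1.0, n)
        p_fail = p_B * np.mean(failed(x, y))   # P(F) = P(B) * P(F | B)

        print(p_fail)   # exact value is 0.3**2 / 4 = 0.0225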

  18. MSPI False Indication Probability Simulations

    SciTech Connect

    Dana Kelly; Kurt Vedros; Robert Youngblood

    2011-03-01

    This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be a false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and "false" would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = 1 × 10^-6/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10^-6/yr. If the parameters were at their baseline values and ΔCDF > 10^-6/yr, this is counted as a false positive. Conversely, if one or more of the parameters are set to values corresponding to ΔCDF > 10^-6/yr but that time history's ΔCDF < 10^-6/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values.
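
    The final counting step is a simple binomial proportion, so a standard error can be attached to the estimated false indication probability; the ΔCDF values below are a placeholder stand-in for the simulated time histories.

        import numpy as np

        rng = np.random.default_rng(11)
        threshold = 1e-6       # ΔCDF color threshold, per year
        n_histories = 100_000

        # Placeholder: ΔCDF computed for each time history simulated with all
        # parameters at baseline, so the true indication is green.
        delta_cdf = rng.lognormal(mean=np.log(2e-7), sigma=1.0, size=n_histories)

        false_pos = np.count_nonzero(delta_cdf > threshold)  # white or above
        p_false = false_pos / n_histories
        stderr = np.sqrt(p_false * (1.0 - p_false) / n_histories)
        print(p_false, stderr)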

  1. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  2. Experiment design for measuring the probability of detection in remote sensing: how many objects and how many passes

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie M.; Morton, Kenneth D.

    2014-05-01

    Buried threat detection system (e.g., GPR, FLIR, EMI) performance can be summarized through two related statistics: the probability of detection (PD), and the false alarm rate (FAR). These statistics impact system rate of forward advance, clearance probability, and the overall usefulness of the system. Understanding system PD and FAR for each target type of interest is fundamental to making informed decisions regarding system procurement and deployment. Since PD and FAR cannot be measured directly, proper experimental design is required to ensure that estimates of PD and FAR are accurate. Given an unlimited number of target emplacements, estimating PD is straightforward. However, in realistic scenarios with constrained budgets, limited experimental collection time and space, and a limited number of targets, estimating PD becomes significantly more complicated. For example, it may be less expensive to collect data over the same exact target emplacement multiple times than to collect once over multiple unique target emplacements. Clearly there is a difference between the quantity and value of the information obtained from these two experiments (one collection over multiple objects, and multiple collections over one particular object). This work will clarify and quantify the amount of information gained from multiple data collections over one target compared to collecting over multiple unique target burials. Results provide a closed-form solution for estimating the relative value of collecting multiple times over one object versus emplacing a new object, and show how to optimize experimental design to achieve stated goals while minimizing cost.
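
    The paper's closed-form result is not reproduced here, but the standard clustered-sampling design effect conveys the same intuition: with k unique emplacements, m passes each, and inter-pass correlation rho at the same emplacement, the variance of the PD estimate is inflated by the factor 1 + (m - 1) * rho, so repeated passes over one object add little information when rho is high. Numbers are illustrative.

        def pd_estimate_variance(pd, k, m, rho):
            """Variance of the PD estimate: k unique emplacements, m passes
            each, correlation rho between passes over the same emplacement
            (classic design-effect approximation)."""
            n = k * m
            return pd * (1.0 - pd) / n * (1.0 + (m - 1) * rho)

        # Same total number of passes (24), allocated two different ways.
        print(pd_estimate_variance(0.9, k=24, m=1, rho=0.8))  # 24 unique targets
        print(pd_estimate_variance(0.9, k=8, m=3, rho=0.8))   # 8 targets, 3 passes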

  3. The Probability of Causal Conditionals

    ERIC Educational Resources Information Center

    Over, David E.; Hadjichristidis, Constantinos; Evans, Jonathan St. B. T.; Handley, Simon J.; Sloman, Steven A.

    2007-01-01

    Conditionals in natural language are central to reasoning and decision making. A theoretical proposal called the Ramsey test implies the conditional probability hypothesis: that the subjective probability of a natural language conditional, P(if p then q), is the conditional subjective probability, P(q [such that] p). We report three experiments on…

  4. Inflation in Realistic D-Brane Models

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cline, J. M.; Stoica, H.; Quevedo, F.

    2004-09-01

    We find successful models of D-brane/anti-brane inflation within a string context. We work within the GKP-KKLT class of type IIB string vacua for which many moduli are stabilized through fluxes, as recently modified to include 'realistic' orbifold sectors containing standard-model type particles. We allow all moduli to roll when searching for inflationary solutions and find that inflation is not generic inasmuch as special choices must be made for the parameters describing the vacuum. But given these choices inflation can occur for a reasonably wide range of initial conditions for the brane and antibrane. We find that D-terms associated with the orbifold blowing-up modes play an important role in the inflationary dynamics. Since the models contain a standard-model-like sector after inflation, they open up the possibility of addressing reheating issues. We calculate predictions for the CMB temperature fluctuations and find that these can be consistent with observations, but are generically not deep within the scale-invariant regime and so can allow appreciable values for dn_s/d ln k as well as predicting a potentially observable gravity-wave signal. It is also possible to generate some admixture of isocurvature fluctuations.

  5. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of firing patterns: silent, low spiking, moderate spiking, and intense spiking activity. We observed that most of the cells in our network switch to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, Victor-Purpura metric, and interspike interval distribution are sensitive to different firing regimes, whereas mutual information appears insensitive to these functional changes.

  6. Fractured shale reservoirs: Towards a realistic model

    SciTech Connect

    Hamilton-Smith, T.

    1996-09-01

    Fractured shale reservoirs are fundamentally unconventional, which is to say that their behavior is qualitatively different from reservoirs characterized by intergranular pore space. Attempts to analyze fractured shale reservoirs with conventional models are essentially misleading, and reliance on such models can have only negative results for fractured shale oil and gas exploration and development. A realistic model of fractured shale reservoirs begins with the history of the shale as a hydrocarbon source rock. Minimum levels of both kerogen concentration and thermal maturity are required for effective hydrocarbon generation. Hydrocarbon generation results in overpressuring of the shale. At some critical level of overpressure, the shale fractures in the ambient stress field. This primary natural fracture system is fundamental to the future behavior of the fractured shale gas reservoir. The fractures facilitate primary migration of oil and gas out of the shale and into the basin. In this process, all connate water is expelled, leaving the fractured shale oil-wet and saturated with oil and gas. What fluids are eventually produced from the fractured shale depends on the subsequent structural and geochemical history. As long as the shale remains hot, oil production may be obtained (e.g., Bakken Shale, Green River Shale). If the shale has been significantly cooled, mainly gas will be produced (e.g., Antrim Shale, Ohio Shale, New Albany Shale). Where secondary natural fracture systems are developed and connect the shale to aquifers or to surface recharge, the fractured shale will also produce water (e.g., Antrim Shale, Indiana New Albany Shale).

  7. Towards a realistic description of hadron resonances

    NASA Astrophysics Data System (ADS)

    Schmidt, R. A.; Canton, L.; Schweiger, W.; Plessas, W.

    2016-08-01

    We report on our attempts to treat excited hadron states as true quantum resonances. Hitherto the spectroscopy of mesons, usually considered as quark-antiquark systems, and of baryons, usually considered as three-quark systems, has been treated through excitation spectra of bound states (namely, confined few-quark systems), corresponding to poles of the quantum-mechanical resolvent at real negative values in the complex energy plane. As a result the wave functions, i.e. the residua of the resolvent, have not exhibited the behaviour required for hadron resonances with their multiple decay modes. This has led to disturbing shortcomings in the description of hadronic resonance phenomena. We have aimed at a more realistic description of hadron resonances within relativistic constituent-quark models, taking into account explicitly meson-decay channels. The corresponding coupled-channels theory is based on a relativistically invariant mass operator capable of producing hadron ground states with real energies and hadron resonances with complex energies, the latter corresponding to poles in the lower half-plane of the unphysical sheet of the complex energy plane. So far we have demonstrated the feasibility of the coupled-channels approach to hadron resonances with model calculations that indeed produce the desired properties. The corresponding spectral properties will be discussed in this contribution. More refined studies are under way towards constructing a coupled-channels relativistic constituent-quark model for meson and baryon resonances.

  8. Determination of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Dietrich, Daniel L.; Ruff, Gary A.; Urban, David

    2013-01-01

    This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), and carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and a sink for oxygen, with rates prescribed (as inputs to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to treat more realistically the pressure relief system(s) of the spacecraft, include more combustion products (e.g. HF) in the analysis, and predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribing them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect, eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.

  9. Realistic Monte Carlo Simulation of PEN Apparatus

    NASA Astrophysics Data System (ADS)

    Glaser, Charles; PEN Collaboration

    2015-04-01

    The PEN collaboration undertook to measure the π+ → e+ ν_e(γ) branching ratio with a relative uncertainty of 5 × 10^-4 or less at the Paul Scherrer Institute. This observable is highly susceptible to small non-(V-A) contributions, i.e., non-Standard-Model physics. The detector system included a beam counter, a mini TPC for beam tracking, an active degrader and stopping target, MWPCs and a plastic scintillator hodoscope for particle tracking and identification, and a spherical CsI EM calorimeter. GEANT4 Monte Carlo simulation is integral to the analysis, as it is used to generate fully realistic events for all pion and muon decay channels. The simulated events are constructed so as to match the pion beam profiles, divergence, and momentum distribution. Ensuring the placement of individual detector components at the sub-millimeter level, and proper construction of active target waveforms and associated noise, enables us to more fully understand temporal and geometrical acceptances as well as energy, time, and positional resolutions and calibrations in the detector system. This ultimately leads to reliable discrimination of background events, thereby improving cut-based or multivariate branching ratio extraction. Work supported by NSF Grants PHY-0970013, 1307328, and others.

  10. Differentiability of correlations in realistic quantum mechanics

    SciTech Connect

    Cabrera, Alejandro; Faria, Edson de; Pujals, Enrique; Tresser, Charles

    2015-09-15

    We prove a version of Bell’s theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell’s theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.

  11. Physics and Probability

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Milonni, P. W.

    2004-12-01

    Preface; 1. Recollection of an independent thinker Joel A. Snow; 2. A look back: early applications of maximum entropy estimation to quantum statistical mechanics D. J. Scalapino; 3. The Jaynes-Cummings revival B. W. Shore and P. L. Knight; 4. The Jaynes-Cummings model and the one-atom maser H. Walther; 5. The Jaynes-Cummings model is alive and well P. Meystre; 6. Self-consistent radiation reaction in quantum optics - Jaynes' influence and a new example in cavity QED J. H. Eberly; 7. Enhancing the index of refraction in a nonabsorbing medium: phaseonium versus a mixture of two-level atoms M. O. Scully, T. W. Hänsch, M. Fleischhauer, C. H. Keitel and Shi-Yao Zhu; 8. Ed Jaynes' steak dinner problem II Michael D. Crisp; 9. Source theory of vacuum field effects Peter W. Milonni; 10. The natural line shape Edwin A. Power; 11. An operational approach to Schrödinger's cat L. Mandel; 12. The classical limit of an atom C. R. Stroud, Jr.; 13. Mutual radiation reaction in spontaneous emission Richard J. Cook; 14. A model of neutron star dynamics F. W. Cummings; 15. The kinematic origin of complex wave function David Hestenes; 16. On radar target identification C. Ray Smith; 17. On the difference in means G. Larry Bretthorst; 18. Bayesian analysis, model selection and prediction Arnold Zellner and Chung-ki Min; 19. Bayesian numerical analysis John Skilling; 20. Quantum statistical inference R. N. Silver; 21. Application of the maximum entropy principle to nonlinear systems far from equilibrium H. Haken; 22. Nonequilibrium statistical mechanics Baldwin Robertson; 23. A backward look to the future E. T. Jaynes; Appendix. Vita and bibliography of Edwin T. Jaynes; Index.

  12. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not entail a lower score in probability performance. The study also revealed that motivated students benefited from the probability workshop, showing a positive improvement in probability performance compared to before the workshop. In addition, there was a significant difference in students' performance between genders, with better achievement among female students than among male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  13. Probability-summation model of multiple laser-exposure effects.

    PubMed

    Menendez, A R; Cheney, F E; Zuclich, J A; Crump, P

    1993-11-01

    A probability-summation model is introduced to provide quantitative criteria for discriminating independent from interactive effects of multiple laser exposures on biological tissue. Data that differ statistically from predictions of the probability-summation model indicate the action of sensitizing (synergistic/positive) or desensitizing (hardening/negative) biophysical interactions. Interactions are indicated when response probabilities vary with changes in the spatial or temporal separation of exposures. In the absence of interactions, probability-summation parsimoniously accounts for "cumulative" effects. Data analyzed using the probability-summation model show instances of both sensitization and desensitization of retinal tissue by laser exposures. Other results are shown to be consistent with probability-summation. The relevance of the probability-summation model to previous laser-bioeffects studies, models, and safety standards is discussed and an appeal is made for improved empirical estimates of response probabilities for single exposures.
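
    The independence baseline is the familiar complement-product rule: the probability that at least one of n exposures elicits a response is one minus the product of the single-exposure non-response probabilities. Observed probabilities above this prediction would indicate sensitization; below it, desensitization. A minimal sketch:

        def probability_summation(single_exposure_probs):
            """P(response to at least one exposure), assuming independence."""
            p_none = 1.0
            for p in single_exposure_probs:
                p_none *= (1.0 - p)
            return 1.0 - p_none

        # Three identical exposures, each with response probability 0.10.
        print(probability_summation([0.10, 0.10, 0.10]))  # 1 - 0.9**3 = 0.271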

  14. Gravitational radiation from realistic cosmic string loops

    NASA Astrophysics Data System (ADS)

    Casper, Paul; Allen, Bruce

    1995-10-01

    We examine the rates at which energy and momentum are radiated into gravitational waves by a large set of realistic cosmic string loops. The string loops are generated by numerically evolving parent loops with different initial conditions forward in time until they self-intersect, fragmenting into two child loops. The fragmentation of the child loops is followed recursively until only non-self-intersecting loops remain. The properties of the final non-self-intersecting loops are found to be independent of the initial conditions of the parent loops. We have calculated the radiated energy and momentum for a total of 11,625 stable child loops. We find that the majority of the final loops do not radiate significant amounts of spatial momentum. The velocity gained due to the rocket effect is typically small compared to the center-of-mass velocity of the fragmented loops. The distribution of gravitational radiation rates in the center-of-mass frame of the loops, γ0 ≡ (Gμ^2)^(-1) ΔE/Δτ, is strongly peaked in the range γ0 = 45-55; however, there are no loops found with γ0 < 40. Because the radiated spatial momentum is small, the distribution of gravitational radiation rates appears roughly the same in any reference frame. We conjecture that in the center-of-mass frame there is a lower bound γ0,min > 0 for the radiation rate from cosmic string loops. In a second conjecture, we identify a candidate for the loop with the minimal radiation rate and suggest that γ0,min ≈ 39.003.

  15. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  16. Comparison of validity of food group intake by food frequency questionnaire between pre- and post- adjustment estimates derived from 2-day 24-hour recalls in combination with the probability of consumption.

    PubMed

    Kim, Dong Woo; Oh, Se-Young; Kwon, Sung-Ok; Kim, Jeongseon

    2012-01-01

    Validation of a food frequency questionnaire (FFQ) utilising a short-term measurement method is challenging when the reference method does not accurately reflect the usual food intake. In addition, food group intake is more critical to assess when episodically consumed food groups, which are not eaten on a daily basis, are compared. To overcome these challenges, several statistical approaches have been developed to determine usual food intake distributions. The Multiple Source Method (MSM) can calculate the usual food intake by combining the frequency questions of an FFQ with the short-term food intake amount data. In this study, we applied the MSM to estimate the usual food group intake and evaluate the validity of an FFQ with a group of 333 Korean children (aged 3-6 y) who completed two 24-hour recalls (24HR) and one FFQ in 2010. After adjusting the data using the MSM procedure, the true rate of non-consumption for all food groups was less than 1% except for the beans group. The median Spearman correlation coefficients against the FFQ of the mean of 2-d 24HRs data and the MSM-adjusted data were 0.20 (range: 0.11 to 0.40) and 0.35 (range: 0.14 to 0.60), respectively. The weighted kappa values against the FFQ ranged from 0.08 to 0.25 for the mean of 2-d 24HRs data and from 0.10 to 0.41 for the MSM-adjusted data. For most food groups, the MSM-adjusted data showed relatively stronger correlations against the FFQ than the raw 2-d 24HRs data, from 0.03 (beverages) to 0.34 (mushrooms). The results of this study indicated that the application of the MSM, which provided a better estimate of the usual intake, could be worth considering in FFQ validation studies among Korean children.

  17. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well-established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes' theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  18. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to try to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally-useful display and interpretation of these data for a particular conjunction is given.
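
    A toy version of carrying input uncertainty through to a distribution of Pc values, with all numbers invented: resample the uncertain inputs (here just a covariance scale factor and the hard-body radius), recompute a Monte Carlo Pc for each draw, and report percentiles instead of a single point estimate.

        import numpy as np

        rng = np.random.default_rng(5)

        def pc_point(mean_miss, cov, hbr, n=50_000):
            """Point-estimate Pc from sampled encounter-plane miss vectors."""
            pts = rng.multivariate_normal(mean_miss, cov, size=n)
            return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < hbr)

        mean_miss = [150.0, 60.0]                  # m, illustrative
        base_cov = np.diag([120.0**2, 90.0**2])    # m^2, illustrative

        pcs = []
        for _ in range(200):
            scale = rng.lognormal(0.0, 0.3)        # covariance realism factor
            hbr = rng.uniform(15.0, 25.0)          # object-size uncertainty, m
            pcs.append(pc_point(mean_miss, base_cov * scale, hbr))

        print(np.percentile(pcs, [5, 50, 95]))     # Pc spread, not a point value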

  19. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
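
    The paper's non-local H1 MPLE estimator is beyond a short sketch, but the "valid region" idea can be illustrated with a much cruder device: mask an ordinary kernel density estimate with a boolean validity grid derived from the spatial data (e.g., residential parcels) and renormalize, so that no probability mass is assigned where events cannot occur. The events and mask below are invented.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)

        # Hypothetical event locations; residences exist only where gx < 0.5.
        events = rng.uniform(0.0, 0.5, size=(2, 200))
        gx, gy = np.mgrid[0:1:100j, 0:1:100j]
        valid = gx < 0.5

        kde = gaussian_kde(events)
        density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
        density[~valid] = 0.0        # zero out impossible locations
        density /= density.sum()     # renormalize to a probability mass grid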

  20. The probability of tropical cyclone landfalls in Western North Pacific

    NASA Astrophysics Data System (ADS)

    Bonazzi, A.; Bellone, E.; Khare, S.

    2012-04-01

    The Western North Pacific (WNP) is the most active basin in terms of tropical cyclone and typhoon occurrences. The densely populated countries that form the western boundary of the WNP basin -- e.g. China, Japan and the Philippines -- are exposed to extreme wind gusts, storm surge and fresh water flooding triggered by tropical cyclone (TC) events. Event-based catastrophe models (hereafter cat models) are extensively used by the insurance industry to manage their exposure against low-frequency/high-consequence events such as natural catastrophes. Cat models provide their users with a realistic set of stochastic events that expands the scope of a historical catalogue. Confidence in a cat model's ability to extrapolate peril and loss statistics beyond the period covered by observational data requires good agreement between stochastic and historical peril characteristics at shorter return periods. In the WNP, risk management practitioners are faced with highly uncertain data on which to base their decisions. Although four national agencies maintain best track catalogues, data are generally based on satellite imagery with very limited central pressure (CP) and maximum velocity (VMAX) measurements -- regular flight reconnaissance missions stopped in 1987. As a result, differences of up to 20 knots are found in estimates of VMAX from different agencies, as documented in experiment IOP-10 during Typhoon Megi in 2010. In this work we present a comprehensive analysis of CP and VMAX probability distributions at landfall across the WNP basin along a set of 150 gates (100 km coast segments), based on best track catalogues from the Japan Meteorological Agency, the Joint Typhoon Warning Center, the China Meteorological Agency and the Hong Kong Meteorological Agency. Landfall distributions are then used to calibrate a random-walk statistical track model. A long simulation of 100,000 years of statistical TC tracks will ultimately constitute the central building block of a basin-wide stochastic catalogue of synthetic TC events.

  1. Tsunami probability in the Caribbean Region

    USGS Publications Warehouse

    Parsons, T.; Geist, E.L.

    2008-01-01

    We calculated tsunami runup probability (in excess of 0.5 m) at coastal sites throughout the Caribbean region. We applied a Poissonian probability model because of the variety of uncorrelated tsunami sources in the region. Coastlines were discretized into 20 km by 20 km cells, and the mean tsunami runup rate was determined for each cell. The remarkable ~500-year empirical record compiled by O'Loughlin and Lander (2003) was used to calculate an empirical tsunami probability map, the first of three constructed for this study. However, it is unclear whether the 500-year record is complete, so we conducted a seismic moment-balance exercise using a finite-element model of the Caribbean-North American plate boundaries and the earthquake catalog, and found that moment could be balanced if the seismic coupling coefficient is c = 0.32. Modeled moment release was therefore used to generate synthetic earthquake sequences to calculate 50 tsunami runup scenarios for 500-year periods. We made a second probability map from numerically calculated runup rates in each cell. Differences between the first two probability maps based on empirical and numerically modeled rates suggest that each captured different aspects of tsunami generation; the empirical model may be deficient in primary plate-boundary events, whereas numerical model rates lack backarc fault and landslide sources. We thus prepared a third probability map using Bayesian likelihood functions derived from the empirical and numerical rate models and their attendant uncertainty to weight a range of rates at each 20 km by 20 km coastal cell. Our best-estimate map gives a range of 30-year runup probability from 0-30% regionally. © Birkhäuser 2008.
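
    Under the Poissonian model, the probability of at least one qualifying runup in an exposure window of T years at a cell with mean rate lambda is 1 - exp(-lambda * T). The rates below are illustrative, not values from the study.

        import math

        def poisson_exceedance(rate_per_year, window_years=30.0):
            """P(at least one tsunami runup > 0.5 m in the window)."""
            return 1.0 - math.exp(-rate_per_year * window_years)

        # Cells with mean runup rates of 1/100, 1/500, 1/5000 per year.
        for lam in (1e-2, 2e-3, 2e-4):
            print(lam, poisson_exceedance(lam))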

  2. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  3. Family Relationships in Realistic Young Adult Fiction, 1987 to 1991.

    ERIC Educational Resources Information Center

    Sampson, Cathie

    The purpose of this study was to determine how parents and family relationships are characterized in realistic young adult fiction. A random sample of 20 realistic young adult novels was selected from the American Library Association's Best Lists for the years 1987-1991. A content analysis of the novels focused on the following: (1) whether…

  4. Evolution of migration rate in a spatially realistic metapopulation model.

    PubMed

    Heino, M; Hanski, I

    2001-05-01

    We use an individual-based, spatially realistic metapopulation model to study the evolution of migration rate. We first explore the consequences of habitat change in hypothetical patch networks on a regular lattice. If the primary consequence of habitat change is an increase in local extinction risk as a result of decreased local population sizes, migration rate increases. A nonmonotonic response, with migration rate decreasing at high extinction rate, was obtained only by assuming very frequent catastrophes. If the quality of the matrix habitat deteriorates, leading to increased mortality during migration, the evolutionary response is more complex. As long as habitat patch occupancy does not decrease markedly with increased migration mortality, reduced migration rate evolves. However, once mortality becomes so high that empty patches remain uncolonized for a long time, evolution tends to increase migration rate, which may lead to an "evolutionary rescue" in a fragmented landscape. Kin competition has a quantitative effect on the evolution of migration rate in our model, but these patterns in the evolution of migration rate appear to be primarily caused by spatiotemporal variation in fitness and mortality during migration. We apply the model to real habitat patch networks occupied by two checkerspot butterfly (Melitaea) species, for which sufficient data are available to estimate rigorously most of the model parameters. The model-predicted migration rate is not significantly different from the empirically observed one. Regional variation in patch areas and connectivities leads to regional variation in the optimal migration rate, predictions that can be tested empirically. PMID:18707258

  5. Review of Literature for Model Assisted Probability of Detection

    SciTech Connect

    Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.

    2014-09-30

    This is a draft technical letter report for an NRC client documenting a literature review of model-assisted probability of detection (MAPOD) for potential application to nuclear power plant components, aimed at improving field NDE performance estimation.

  6. Applications of the Dirichlet distribution to forensic match probabilities.

    PubMed

    Lange, K

    1995-01-01

    The Dirichlet distribution provides a convenient conjugate prior for Bayesian analyses involving multinomial proportions. In particular, allele frequency estimation can be carried out with a Dirichlet prior. If data from several distinct populations are available, then the parameters characterizing the Dirichlet prior can be estimated by maximum likelihood and then used for allele frequency estimation in each of the separate populations. This empirical Bayes procedure tends to moderate extreme multinomial estimates based on sample proportions. The Dirichlet distribution can also be employed to model the contributions from different ancestral populations in computing forensic match probabilities. If the ancestral populations are in genetic equilibrium, then the product rule for computing match probabilities is valid conditional on the ancestral contributions to a typical person of the reference population. This fact facilitates computation of match probabilities and tight upper bounds to match probabilities.
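
    A minimal sketch of the empirical Bayes moderation described above: with a Dirichlet(α) prior fitted across populations, the posterior mean allele frequency is (xᵢ + αᵢ)/(n + Σα), which pulls extreme sample proportions toward the prior mean. All numbers below are hypothetical:

```python
import numpy as np

def dirichlet_posterior_mean(counts, alpha):
    """Posterior mean allele frequencies under a Dirichlet(alpha) prior
    with multinomial counts x: (x_i + a_i) / (n + sum(a))."""
    counts = np.asarray(counts, float)
    alpha = np.asarray(alpha, float)
    return (counts + alpha) / (counts.sum() + alpha.sum())

alpha = np.array([2.0, 5.0, 3.0])   # hypothetical prior pseudo-counts (3 alleles)
counts = np.array([0, 18, 2])       # small sample from one population

print("raw proportions:     ", counts / counts.sum())
print("empirical Bayes mean:", dirichlet_posterior_mean(counts, alpha))
# The zero-count allele gets a nonzero estimate and the extreme 0.9
# proportion is pulled toward the prior -- the moderation noted above.
```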

  7. Probability Interpretation of Quantum Mechanics.

    ERIC Educational Resources Information Center

    Newton, Roger G.

    1980-01-01

    This paper draws attention to the frequency meaning of the probability concept and its implications for quantum mechanics. It emphasizes that the very meaning of probability implies the ensemble interpretation of both pure and mixed states. As a result some of the "paradoxical" aspects of quantum mechanics lose their counterintuitive character.…

  8. The Probabilities of Conditionals Revisited

    ERIC Educational Resources Information Center

    Douven, Igor; Verbrugge, Sara

    2013-01-01

    According to what is now commonly referred to as "the Equation" in the literature on indicative conditionals, the probability of any indicative conditional equals the probability of its consequent given its antecedent. Philosophers widely agree in their assessment that the triviality arguments of…

  9. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure analysis. (a) General. All flight safety analyses for a launch, regardless of hazard or phase of flight... probability estimate must use accurate data, scientific principles, and a method that is statistically...

  10. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure analysis. (a) General. All flight safety analyses for a launch, regardless of hazard or phase of flight... probability estimate must use accurate data, scientific principles, and a method that is statistically...

  11. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure analysis. (a) General. All flight safety analyses for a launch, regardless of hazard or phase of flight... probability estimate must use accurate data, scientific principles, and a method that is statistically...

  12. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure analysis. (a) General. All flight safety analyses for a launch, regardless of hazard or phase of flight... probability estimate must use accurate data, scientific principles, and a method that is statistically...

  13. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DEPARTMENT OF TRANSPORTATION LICENSING LAUNCH SAFETY Flight Safety Analysis § 417.224 Probability of failure analysis. (a) General. All flight safety analyses for a launch, regardless of hazard or phase of flight... probability estimate must use accurate data, scientific principles, and a method that is statistically...

  14. On the Provenance of Judgments of Conditional Probability

    ERIC Educational Resources Information Center

    Zhao, Jiaying; Shah, Anuj; Osherson, Daniel

    2009-01-01

    In standard treatments of probability, Pr(A|B) is defined as the ratio of Pr(A∩B) to Pr(B), provided that Pr(B) > 0. This account of conditional probability suggests a psychological question, namely, whether estimates of Pr(A|B) arise in the mind via implicit calculation of…

  15. On the shape of the probability weighting function.

    PubMed

    Gonzalez, R; Wu, G

    1999-02-01

    Empirical studies have shown that decision makers do not usually treat probabilities linearly. Instead, people tend to overweight small probabilities and underweight large probabilities. One way to model such distortions in decision making under risk is through a probability weighting function. We present a nonparametric estimation procedure for assessing the probability weighting function and value function at the level of the individual subject. The evidence in the domain of gains supports a two-parameter weighting function, where each parameter is given a psychological interpretation: one parameter measures how the decision maker discriminates probabilities, and the other parameter measures how attractive the decision maker views gambling. These findings are consistent with a growing body of empirical and theoretical work attempting to establish a psychological rationale for the probability weighting function. PMID:10090801
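
    A common two-parameter form in this literature is the linear-in-log-odds family, in which one parameter controls curvature (probability discriminability) and the other controls elevation (attractiveness of gambling). A sketch with hypothetical parameter values:

```python
import numpy as np

def weight(p, gamma=0.6, delta=0.8):
    """Two-parameter ('linear in log odds') probability weighting function:
    w(p) = d*p^g / (d*p^g + (1-p)^g).
    gamma ~ curvature (discriminability), delta ~ elevation (attractiveness)."""
    p = np.asarray(p, float)
    num = delta * p**gamma
    return num / (num + (1.0 - p)**gamma)

ps = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(np.round(weight(ps), 3))
# Small probabilities are overweighted (w(p) > p) and large ones
# underweighted (w(p) < p), producing the inverse-S shape.
```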

  16. Reliable biological communication with realistic constraints.

    PubMed

    de Polavieja, Gonzalo G

    2004-12-01

    Communication in biological systems must deal with noise and metabolic or temporal constraints. We incorporate these constraints into information theory to obtain the distributions of signal usage corresponding to a maximal rate of information transfer given any noise structure and any constraints. Generalized versions of the Boltzmann, Gaussian, or Poisson distributions are obtained for linear, quadratic and temporal constraints, respectively. These distributions are shown to imply that biological transformations must dedicate a larger output range to the more probable inputs and less to the outputs with higher noise and higher participation in the constraint. To show the general theory of reliable communication at work, we apply these results to biochemical and neuronal signaling. Noncooperative enzyme kinetics is shown to be suited for transfer of a high signal quality when the input distribution has a maximum at low concentrations, while cooperative kinetics is suited for near-Gaussian input statistics. Neuronal codes based on spike rates, spike times or bursts have to balance signal quality and cost-efficiency, and at the network level imply sparseness and decorrelation within the limits of noise, cost, and processing operations. PMID:15697405

  17. Quantum computing with realistically noisy devices.

    PubMed

    Knill, E

    2005-03-01

    In theory, quantum computers offer a means of solving problems that would be intractable on conventional computers. Assuming that a quantum computer could be constructed, it would in practice be required to function with noisy devices called 'gates'. These gates cause decoherence of the fragile quantum states that are central to the computer's operation. The goal of so-called 'fault-tolerant quantum computing' is therefore to compute accurately even when the error probability per gate (EPG) is high. Here we report a simple architecture for fault-tolerant quantum computing, providing evidence that accurate quantum computing is possible for EPGs as high as three per cent. Such EPGs have been experimentally demonstrated, but to avoid excessive resource overheads required by the necessary architecture, lower EPGs are needed. Assuming the availability of quantum resources comparable to the digital resources available in today's computers, we show that non-trivial quantum computations at EPGs of as high as one per cent could be implemented.

  18. RADAR Realistic Animal Model Series for Dose Assessment

    PubMed Central

    Keenan, Mary A.; Stabin, Michael G.; Segars, William P.; Fernald, Michael J.

    2010-01-01

    Rodent species are widely used in the testing and approval of new radiopharmaceuticals, necessitating murine phantom models. As more therapy applications are being tested in animal models, calculating accurate dose estimates for the animals themselves becomes important to explain and control potential radiation toxicity or treatment efficacy. Historically, stylized and mathematically based models have been used for establishing doses to small animals. Recently, a series of anatomically realistic human phantoms was developed using body models based on nonuniform rational B-splines (NURBS). Realistic digital mouse whole-body (MOBY) and rat whole-body (ROBY) phantoms were developed on the basis of the same NURBS technology and were used in this study to facilitate dose calculations in various species of rodents. Methods: Voxel-based versions of scaled MOBY and ROBY models were used with the Vanderbilt multinode computing network (Advanced Computing Center for Research and Education), using geometry and tracking radiation transport codes to calculate specific absorbed fractions (SAFs) with internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in all models. Results: The SAF results were compared with values from similar studies found in reference literature. Also, the SAFs were used with standardized decay data to develop dose factors to be used in radiation dose calculations. Representative plots were made of photon and electron SAFs, evaluating the traditional assumption that all electron energy is absorbed in the source organs. Conclusion: The organ masses in the MOBY and ROBY models are in reasonable agreement with models presented by other investigators, noting that considerable variation can occur between reported masses. Results consistent with those found by other investigators show that absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of

  19. The use and limitation of realistic evaluation as a tool for evidence-based practice: a critical realist perspective.

    PubMed

    Porter, Sam; O'Halloran, Peter

    2012-03-01

    The use and limitation of realistic evaluation as a tool for evidence-based practice: a critical realist perspective In this paper, we assess realistic evaluation's articulation with evidence-based practice (EBP) from the perspective of critical realism. We argue that the adoption by realistic evaluation of a realist causal ontology means that it is better placed to explain complex healthcare interventions than the traditional method used by EBP, the randomized controlled trial (RCT). However, we do not conclude from this that the use of RCTs is without merit, arguing that it is possible to use both methods in combination under the rubric of realist theory. More negatively, we contend that the rejection of critical theory and utopianism by realistic evaluation in favour of the pragmatism of piecemeal social engineering means that it is vulnerable to accusations that it promotes technocratic interpretations of human problems. We conclude that, insofar as realistic evaluation adheres to the ontology of critical realism, it provides a sound contribution to EBP, but insofar as it rejects the critical turn of Bhaskar's realism, it replicates the technocratic tendencies inherent in EBP. PMID:22212367

  1. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  2. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  3. Logic, probability, and human reasoning.

    PubMed

    Johnson-Laird, P N; Khemlani, Sangeet S; Goodwin, Geoffrey P

    2015-04-01

    This review addresses the long-standing puzzle of how logic and probability fit together in human reasoning. Many cognitive scientists argue that conventional logic cannot underlie deductions, because it never requires valid conclusions to be withdrawn - not even if they are false; it treats conditional assertions implausibly; and it yields many vapid, although valid, conclusions. A new paradigm of probability logic allows conclusions to be withdrawn and treats conditionals more plausibly, although it does not address the problem of vapidity. The theory of mental models solves all of these problems. It explains how people reason about probabilities and postulates that the machinery for reasoning is itself probabilistic. Recent investigations accordingly suggest a way to integrate probability and deduction.

  4. Dinosaurs, Dinosaur Eggs, and Probability.

    ERIC Educational Resources Information Center

    Teppo, Anne R.; Hodgson, Ted

    2001-01-01

    Outlines several recommendations for teaching probability in the secondary school. Offers an activity that employs simulation by hand and using a programmable calculator in which geometry, analytical geometry, and discrete mathematics are explored. (KHR)

  5. Effects of realistic tensor force on nuclear structure

    SciTech Connect

    Nakada, H.

    2012-10-20

    First-order tensor-force effects on nuclear structure are investigated in self-consistent mean-field and RPA calculations with the M3Y-type semi-realistic interactions, which contain the realistic tensor force. The tensor force plays a key role in the Z- or N-dependence of the shell structure, and in transitions involving spin degrees of freedom. It is demonstrated that the semi-realistic interactions successfully describe the N-dependence of the shell structure in proton-magic nuclei (e.g. Ca and Sn), and magnetic transitions (e.g. the M1 transition in ²⁰⁸Pb).

  6. Construction of realistic images using R-functions

    SciTech Connect

    Shevchenko, A.N.; Tsukanov, I.G.

    1995-09-01

    Realistic images are plane images of three-dimensional bodies in which volume effects are conveyed by illumination. This is how volume is displayed in photographs and paintings. Photographs achieve a realistic volume effect by choosing a certain arrangement, brightness, and number of light sources. Painters choose for their paintings a color palette based entirely on sensory perception. In this paper, we consider the construction of realistic images on a computer display. The shape of the imaged objects is not always known in advance: it may be generated as a result of complex mathematical computations. The geometrical information is described using R-functions.

  7. Joint probabilities and quantum cognition

    SciTech Connect

    Acacio de Barros, J.

    2012-12-18

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  9. Surprisingly rational: probability theory plus noise explains biases in judgment.

    PubMed

    Costello, Fintan; Watts, Paul

    2014-07-01

    The systematic biases seen in people's probability judgments are typically taken as evidence that people do not use the rules of probability theory when reasoning about probability but instead use heuristics, which sometimes yield reasonable judgments and sometimes yield systematic biases. This view has had a major impact in economics, law, medicine, and other fields; indeed, the idea that people cannot reason with probabilities has become a truism. We present a simple alternative to this view, where people reason about probability according to probability theory but are subject to random variation or noise in the reasoning process. In this account the effect of noise is canceled for some probabilistic expressions. Analyzing data from 2 experiments, we find that, for these expressions, people's probability judgments are strikingly close to those required by probability theory. For other expressions, this account produces systematic deviations in probability estimates. These deviations explain 4 reliable biases in human probabilistic reasoning (conservatism, subadditivity, conjunction, and disjunction fallacies). These results suggest that people's probability judgments embody the rules of probability theory and that biases in those judgments are due to the effects of random noise. PMID:25090427
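
    A minimal simulation of this account under one simple noise assumption: each sample read from memory flips with probability d, giving an expected estimate of (1-2d)p + d. All parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_estimate(p, d=0.2, n_samples=100):
    """Probability-theory-plus-noise sketch: count occurrences of an event
    with true probability p, but each 'read' of a sample flips with
    probability d. The expected estimate is (1 - 2d)*p + d."""
    events = rng.random(n_samples) < p      # true sample of the event
    flips = rng.random(n_samples) < d       # random read errors
    return np.mean(events ^ flips)          # observed (noisy) proportion

for p in (0.1, 0.5, 0.9):
    est = np.mean([noisy_estimate(p) for _ in range(2000)])
    print(f"p={p:.1f}  mean estimate={est:.3f}  theory={(1 - 2*0.2)*p + 0.2:.3f}")
# Estimates regress toward 0.5 (the classic conservatism bias) even though
# the underlying reasoning follows probability theory.
```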

  10. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
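
    The count-adjustment idea in this abstract (abundance as the number sampled divided by capture probability) in a minimal sketch, with a hypothetical fitted logistic model supplying the capture probability:

```python
import math

def capture_probability(length_mm, passes, b0=-2.0, b1=0.01, b2=0.3):
    """Hypothetical logistic model (coefficients are illustrative, not the
    study's estimates): capture probability increases with fish length
    and with the number of electrofishing passes."""
    z = b0 + b1 * length_mm + b2 * passes
    return 1.0 / (1.0 + math.exp(-z))

catch, length, passes = 42, 250, 3
p = capture_probability(length, passes)
print(f"p_capture={p:.2f}, abundance estimate = {catch / p:.0f}")
```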

  11. Application of ICA to realistically simulated 1H-MRS data

    PubMed Central

    Kalyanam, Ravi; Boutte, David; Hutchison, Kent E; Calhoun, Vince D

    2015-01-01

    Introduction: 1H-MRS signals from brain tissues capture information on in vivo brain metabolism and neuronal biomarkers. This study aims to advance the use of independent component analysis (ICA) for spectroscopy data by objectively comparing the performance of ICA and LCModel in analyzing realistic data that mimics many of the known properties of in vivo data. Methods: This work identifies key features of in vivo 1H-MRS signals and presents methods to simulate realistic data, using a basis set of 12 metabolites typically found in the human brain. The realistic simulations provide a much needed ground truth to evaluate performances of various MRS analysis methods. ICA is applied to collectively analyze multiple realistic spectra and independent components identified with our generative model to obtain ICA estimates. These same data are also analyzed using LCModel and the comparisons between the ground-truth and the analysis estimates are presented. The study also investigates the potential impact of modeling inaccuracies by incorporating two sets of model resonances in simulations. Results: The simulated FID (free induction decay) signals incorporating line broadening, noise, and residual water signal closely resemble the in vivo signals. Simulation analyses show that the resolution performances of both LCModel and ICA are not consistent across metabolites and that while ICA resolution can be improved for certain resonances, ICA is as effective as, or better than, LCModel in resolving most model resonances. Conclusion: The results show that ICA can be an effective tool in comparing multiple spectra and complements existing approaches for providing quantified estimates. PMID:26221570

  12. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality. PMID:19839686

  13. The role of ANS acuity and numeracy for the calibration and the coherence of subjective probability judgments

    PubMed Central

    Winman, Anders; Juslin, Peter; Lindskog, Marcus; Nilsson, Håkan; Kerimi, Neda

    2014-01-01

    The purpose of the study was to investigate how numeracy and acuity of the approximate number system (ANS) relate to the calibration and coherence of probability judgments. Based on the literature on number cognition, a first hypothesis was that those with lower numeracy would maintain a less linear use of the probability scale, contributing to overconfidence and nonlinear calibration curves. A second hypothesis was that also poorer acuity of the ANS would be associated with overconfidence and non-linearity. A third hypothesis, in line with dual-systems theory (e.g., Kahneman and Frederick, 2002) was that people higher in numeracy should have better access to the normative probability rules, allowing them to decrease the rate of conjunction fallacies. Data from 213 participants sampled from the Swedish population showed that: (i) in line with the first hypothesis, overconfidence and the linearity of the calibration curves were related to numeracy, where people higher in numeracy were well calibrated with zero overconfidence. (ii) ANS was not associated with overconfidence and non-linearity, disconfirming the second hypothesis. (iii) The rate of conjunction fallacies was slightly, but to a statistically significant degree decreased by numeracy, but still high at all numeracy levels. An unexpected finding was that participants with better ANS acuity gave more realistic estimates of their performance relative to others. PMID:25140163

  15. Probability forecast of the suspended sediment concentration using copula

    NASA Astrophysics Data System (ADS)

    Yu, Kun-xia; Li, Peng; Li, Zhanbin

    2016-04-01

    An approach for probability forecasting of suspended sediment loads is presented in our research. A probability forecast model is established based on the joint probability distribution of water discharge and suspended sediment concentration. Once the joint distribution of water discharge and suspended sediment concentration is constructed, the conditional distribution function of suspended sediment concentration given water discharge is evaluated, and probability forecasting of suspended sediment concentration is implemented in terms of the conditional probability function. This approach is exemplified using annual data sets from ten watersheds in the middle Yellow River, which is characterized by heavy sediment loads. The three-parameter Gamma distribution is employed to fit the marginal distributions of annual water discharge and annual suspended sediment concentration, and the Gumbel copula can well describe the dependence structure between them. Annual suspended sediment concentrations estimated from the conditional distribution function at a forecast probability of 50 percent agree better with the observed values than those from the traditional sediment rating curve method given water discharge values. The overwhelming majority of observed suspended sediment concentration points lie between the forecast probabilities of 5 percent and 95 percent, which can be considered the lower and upper bounds of a 90 percent uncertainty interval for the predicted observation. The results indicate that probability forecasting on the basis of the conditional distribution function is a potential alternative for estimating suspended sediment and other hydrological variables.
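
    For the Gumbel copula, the conditional distribution of V given U = u is the partial derivative ∂C(u,v)/∂u, which can be inverted numerically to obtain forecast quantiles. A sketch on the copula (percentile) scale; the dependence parameter and discharge percentile below are hypothetical, and real concentration units would come from the fitted Gamma marginal PPFs:

```python
import numpy as np
from scipy.optimize import brentq

def gumbel_conditional_cdf(v, u, theta):
    """P(V <= v | U = u) for a Gumbel copula: the partial derivative
    dC(u,v)/du of C(u,v) = exp(-[(-ln u)^t + (-ln v)^t]^(1/t))."""
    s = (-np.log(u))**theta + (-np.log(v))**theta
    return np.exp(-s**(1/theta)) * s**(1/theta - 1) * (-np.log(u))**(theta - 1) / u

def conditional_quantile(q, u, theta):
    """Invert the conditional CDF by root finding (v is on the copula scale;
    transform with the fitted Gamma marginal PPF for concentration units)."""
    return brentq(lambda v: gumbel_conditional_cdf(v, u, theta) - q, 1e-9, 1 - 1e-9)

theta, u = 2.5, 0.8   # hypothetical dependence parameter and discharge percentile
for q in (0.05, 0.50, 0.95):
    print(f"forecast prob {q:.0%}: sediment percentile = "
          f"{conditional_quantile(q, u, theta):.3f}")
```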

  16. Joint probability distributions for projection probabilities of random orthonormal states

    NASA Astrophysics Data System (ADS)

    Alonso, L.; Gorin, T.

    2016-04-01

    The quantum chaos conjecture applied to a finite dimensional quantum system implies that such a system has eigenstates that show similar statistical properties as the column vectors of random orthogonal or unitary matrices. Here, we consider the different probabilities for obtaining a specific outcome in a projective measurement, provided the system is in one of its eigenstates. We then give analytic expressions for the joint probability density for these probabilities, with respect to the ensemble of random matrices. In the case of the unitary group, our results can be applied, also, to the phenomenon of universal conductance fluctuations, where the same mathematical quantities describe partial conductances in a two-terminal mesoscopic scattering problem with a finite number of modes in each terminal.

  17. Realistic nucleon-nucleon interactions and the three-body electrodisintegration of 3H

    NASA Astrophysics Data System (ADS)

    Atti, C. Ciofi; Pace, E.; Salmè, G.

    1980-03-01

    The three-body variational wave functions resulting from two realistic nucleon-nucleon interactions featuring different deuteron D-wave probabilities have been used in the calculation of the three-body electrodisintegration of triton in the quasi-elastic region. The angular distributions of the coincidence cross section 3H(e,e'p)2n are found to depend sensitively upon the D-wave probability in the triton. In the case of the Reid soft core interaction a comparison of the T=1 spectral functions corresponding to the Faddeev and variational wave functions reveals an appreciably larger high momentum content in the latter. NUCLEAR REACTIONS 3H(e,e'p) 2n, calculated spectral function P(k,E2), angular distributions of coincidence process, quasi-elastic peak, energy-weighted sum rule. Three-body variational wave functions; Reid and RHEL-1 nucleon-nucleon interactions.

  18. Estimating Controller Intervention Probabilities for Optimized Profile Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Erzberger, Heinz; Huynh, Phu V.

    2011-01-01

    Simulations of arrival traffic at Dallas/Fort-Worth and Denver airports were conducted to evaluate incorporating scheduling and separation constraints into advisories that define continuous descent approaches. The goal was to reduce the number of controller interventions required to ensure flights maintain minimum separation distances of 5 nmi horizontally and 1000 ft vertically. It was shown that simply incorporating arrival meter fix crossing-time constraints into the advisory generation could eliminate over half of all predicted separation violations and more than 80% of the predicted violations between two arrival flights. Predicted separation violations between arrivals and non-arrivals were 32% of all predicted separation violations at Denver and 41% at Dallas/Fort-Worth. A probabilistic analysis of meter fix crossing-time errors is included which shows that some controller interventions will still be required even when the predicted crossing-times of the advisories are set to add a 1 or 2 nmi buffer above the minimum in-trail separation of 5 nmi. The 2 nmi buffer was shown to increase average flight delays by up to 30 sec when compared to the 1 nmi buffer, but it only resulted in a maximum decrease in average arrival throughput of one flight per hour.

  19. Toward a realistic low-field SSC lattice

    SciTech Connect

    Heifets, S.

    1985-10-01

    Three six-fold lattices for a 3 T superferric SSC have been generated at TAC. A program based on a first-order canonical transformation was used to compare the lattices, and on this basis the realistic race-track lattices were generated.

  20. Entrepreneurial Education: A Realistic Alternative for Women and Minorities.

    ERIC Educational Resources Information Center

    Steward, James F.; Boyd, Daniel R.

    1989-01-01

    Entrepreneurial education is a valid, realistic occupational training alternative for minorities and women in business. Entrepreneurship requires that one become involved with those educational programs that contribute significantly to one's success. (Author)

  1. Student Work Experience: A Realistic Approach to Merchandising Education.

    ERIC Educational Resources Information Center

    Horridge, Patricia; And Others

    1980-01-01

    Relevant and realistic experiences are needed to prepare the student for a future career. Addresses the results of a survey of colleges and universities in the United States in regard to their student work experience (SWE) in fashion merchandising. (Author)

  2. Snell Envelope with Small Probability Criteria

    SciTech Connect

    Del Moral, Pierre Hu, Peng; Oudjane, Nadia

    2012-12-15

    We present a new algorithm to compute the Snell envelope in the specific case where the criterion to optimize is associated with a small probability or a rare event. This new approach combines the Stochastic Mesh approach of Broadie and Glasserman with a particle approximation scheme based on a specific change of measure designed to concentrate the computational effort in regions pointed out by the criterion. The theoretical analysis of this new algorithm provides non-asymptotic convergence estimates. Finally, numerical tests confirm the practical interest of this approach.

  3. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
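
    The interval construction can be sketched from order statistics: the i-th order statistic of n Uniform(0,1) draws is Beta(i, n-i+1), and its quantiles map through the normal PPF. Note this gives pointwise intervals; the paper's bands are calibrated for simultaneous 1-α coverage, which would need an additional adjustment (e.g., by simulation):

```python
import numpy as np
from scipy import stats

def probability_plot_intervals(n, level=0.95):
    """Pointwise intervals for a normal probability plot: the i-th order
    statistic of n Uniform(0,1) draws is Beta(i, n-i+1), so map its
    quantiles through the standard normal PPF."""
    i = np.arange(1, n + 1)
    lo = stats.norm.ppf(stats.beta.ppf((1 - level) / 2, i, n - i + 1))
    hi = stats.norm.ppf(stats.beta.ppf((1 + level) / 2, i, n - i + 1))
    return lo, hi

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=30))
z = (x - x.mean()) / x.std(ddof=1)   # standardized sample (plot ordinates)
lo, hi = probability_plot_intervals(len(x))
print("all points inside bands:", bool(np.all((z >= lo) & (z <= hi))))
```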

  4. Estimating Thermoelectric Water Use

    NASA Astrophysics Data System (ADS)

    Hutson, S. S.

    2012-12-01

    In 2009, the Government Accountability Office recommended that the U.S. Geological Survey (USGS) and Department of Energy-Energy Information Administration (DOE-EIA) jointly improve their thermoelectric water-use estimates. Since then, the annual mandatory reporting forms returned by powerplant operators to DOE-EIA have been revised twice to improve the water data. At the same time, the USGS began improving estimation of withdrawal and consumption. Because of the variation in amount and quality of water-use data across powerplants, the USGS adopted a hierarchy of methods for estimating water withdrawal and consumptive use for the approximately 1,300 water-using powerplants in the thermoelectric sector. About 800 of these powerplants have generation and cooling data, and the remaining 500 have generation data only, or sparse data. The preferred method is to accept DOE-EIA data following validation. This is the traditional USGS method and the best method if all operators follow best practices for measurement and reporting. However, in 2010, fewer than 200 powerplants reported thermodynamically realistic values of both withdrawal and consumption. Secondly, water use was estimated using linked heat and water budgets for the first group of 800 plants, and for some of the other 500 powerplants where data were sufficient for at least partial modeling using plant characteristics, electric generation, and fuel use. Thermodynamics, environmental conditions, and characteristics of the plant and cooling system constrain both the amount of heat discharged to the environment and the share of this heat that drives evaporation. Heat and water budgets were used to define reasonable estimates of withdrawal and consumption, including likely upper and lower thermodynamic limits. These results were used to validate the reported values at the 800 plants with water-use data, and reported values were replaced by budget estimates at most of these plants. Thirdly, at plants without valid

  5. Children's understanding of posterior probability.

    PubMed

    Girotto, Vittorio; Gonzalez, Michel

    2008-01-01

    Do young children have a basic intuition of posterior probability? Do they update their decisions and judgments in the light of new evidence? We hypothesized that they can do so extensionally, by considering and counting the various ways in which an event may or may not occur. The results reported in this paper showed that from the age of five, children's decisions under uncertainty (Study 1) and judgments about random outcomes (Study 2) are correctly affected by posterior information. From the same age, children correctly revise their decisions in situations in which they face a single, uncertain event, produced by an intentional agent (Study 3). The finding that young children have some understanding of posterior probability supports the theory of naive extensional reasoning, and contravenes some pessimistic views of probabilistic reasoning, in particular the evolutionary claim that the human mind cannot deal with single-case probability. PMID:17391661

  6. GNSS integer ambiguity validation based on posterior probability

    NASA Astrophysics Data System (ADS)

    Wu, Zemin; Bian, Shaofeng

    2015-10-01

    GNSS integer ambiguity validation has been considered a challenging task for decades. Several kinds of validation tests have been developed and are widely used, but a weak theoretical basis is their shortcoming. Ambiguity validation is, in theory, a hypothesis-testing problem. In the framework of Bayesian hypothesis testing, posterior probability is the canonical standard on which statistical decisions should be based. In this contribution, (i) we derive the posterior probability of the fixed ambiguity based on the Bayesian principle and modify it for practical ambiguity validation. (ii) The optimality of the posterior probability test is proved based on an extended Neyman-Pearson lemma. Since the validation failure rate is the issue users are most concerned about, (iii) we derive an upper bound on the failure rate of the posterior probability test, so the user can apply the test either with a fixed posterior probability or with a fixed failure rate. Simulated as well as real observed data are used for experimental validation. The results show that (i) the posterior probability test is the most effective among the R-ratio test, difference test, ellipsoidal integer aperture test, and posterior probability test; (ii) the posterior probability test is computationally efficient; and (iii) the failure rate estimation for the posterior probability test is useful.
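
    A minimal illustration of the posterior idea, not the paper's exact derivation: with a Gaussian float solution and a flat prior over integer vectors, each candidate's posterior is a normalized Gaussian weight. Here the normalization is approximated over a finite candidate box around the rounded float solution (all values hypothetical):

```python
import numpy as np
from itertools import product

def posterior_probabilities(a_float, Q, radius=2):
    """Posterior probability of candidate integer ambiguity vectors z under
    a Gaussian float solution a_float with covariance Q and a flat prior:
    P(z | a) ∝ exp(-0.5 (a-z)' Q^{-1} (a-z)), normalized over a candidate
    set (an integer box; the exact sum runs over all of Z^n)."""
    Qi = np.linalg.inv(Q)
    base = np.rint(a_float).astype(int)
    cands = [base + np.array(d)
             for d in product(range(-radius, radius + 1), repeat=len(a_float))]
    w = np.array([np.exp(-0.5 * (a_float - z) @ Qi @ (a_float - z)) for z in cands])
    p = w / w.sum()
    order = np.argsort(-p)
    return [cands[k] for k in order], p[order]

a = np.array([3.2, -1.9])                      # hypothetical float ambiguities
Q = np.array([[0.04, 0.01], [0.01, 0.09]])     # hypothetical covariance
zs, ps = posterior_probabilities(a, Q)
print("best candidate", zs[0], "posterior", round(ps[0], 3))
# Accept the fixed solution only if the top posterior clears a threshold
# chosen for the required failure rate.
```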

  7. Interference of probabilities in dynamics

    SciTech Connect

    Zak, Michail

    2014-08-15

    A new class of dynamical systems with a preset type of interference of probabilities is introduced. It is obtained from the extension of the Madelung equation by replacing the quantum potential with a specially selected feedback from the Liouville equation. It has been proved that these systems are different from both Newtonian and quantum systems, but they can be useful for modeling spontaneous collective novelty phenomena when emerging outputs are qualitatively different from the weighted sum of individual inputs. Formation of language and fast decision-making process as potential applications of the probability interference is discussed.

  8. Knowledge typology for imprecise probabilities.

    SciTech Connect

    Wilson, G. D.; Zucker, L. J.

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  9. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
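
    For orientation, the classic Wald sequential probability ratio test shows how the false-alarm and missed-detection risks enter as stopping thresholds. This generic scalar sketch is not the constrained-filter-bank formulation described above; the observation model and parameter values are hypothetical:

```python
import numpy as np

def sprt(samples, llr_fn, alpha=1e-3, beta=1e-2):
    """Wald sequential probability ratio test: accumulate log-likelihood
    ratios (alternative over null) and stop at Wald's thresholds, which
    encode the allowed false-alarm (alpha) and missed-detection (beta) risks."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for k, x in enumerate(samples, 1):
        llr += llr_fn(x)
        if llr >= upper:
            return "maneuver (alternative accepted)", k
        if llr <= lower:
            return "no maneuver (null accepted)", k
    return "undecided", len(samples)

# Hypothetical scalar example: noisy miss-distance estimates, null mean
# 0 km (inside the hard-body radius) vs. alternative mean 2 km, sigma 1 km.
rng = np.random.default_rng(3)
obs = rng.normal(2.0, 1.0, size=50)
decision, n = sprt(obs, lambda x: (2.0 * x - 2.0**2 / 2) / 1.0**2)
print(decision, "after", n, "observations")
```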

  10. The Probability Distribution of Daily Streamflow

    NASA Astrophysics Data System (ADS)

    Blum, A.; Vogel, R. M.

    2015-12-01

    Flow duration curves (FDCs) are a graphical illustration of the cumulative distribution of streamflow. Daily streamflows often range over many orders of magnitude, making it extremely challenging to find a probability distribution function (pdf) which can mimic the steady state or period of record FDC (POR-FDC). Median annual FDCs (MA-FDCs) describe the pdf of daily streamflow in a typical year. For POR- and MA-FDCs, L-moment diagrams, visual assessments of FDCs and quantile-quantile probability plot correlation coefficients are used to evaluate goodness of fit (GOF) of candidate probability distributions. FDCs reveal that both four-parameter kappa (KAP) and three-parameter generalized Pareto (GP3) models result in very high GOF for the MA-FDC and a relatively lower GOF for POR-FDCs at over 500 rivers across the coterminous U.S. Physical basin characteristics, such as baseflow index, as well as hydroclimatic indices such as the aridity index and the runoff ratio, are found to be correlated with one of the shape parameters (kappa) of the KAP and GP3 pdfs. Our work also reveals several important areas for future research, including improved parameter estimators for the KAP pdf, as well as increasing our understanding of the conditions which give rise to improved GOF of analytical pdfs to large samples of daily streamflows.
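
    A sketch of the kind of distributional fit described above, using SciPy's three-parameter generalized Pareto on a synthetic heavy-tailed record standing in for daily streamflow (the synthetic record and its parameters are assumptions, not data from the study):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a daily streamflow record: real flows span
# orders of magnitude, which a heavy-tailed lognormal mimics here.
rng = np.random.default_rng(7)
flows = rng.lognormal(mean=2.0, sigma=1.2, size=10_000)

# Three-parameter generalized Pareto (GP3): shape, location, scale.
shape, loc, scale = stats.genpareto.fit(flows)
gp3 = stats.genpareto(shape, loc, scale)

# Compare a few flow-duration-curve quantiles (exceedance probabilities).
for exc in (0.95, 0.50, 0.05):
    emp = np.quantile(flows, 1 - exc)
    fit = gp3.ppf(1 - exc)
    print(f"exceeded {exc:.0%} of days: empirical={emp:8.1f}  GP3={fit:8.1f}")
```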

  11. Stretching Probability Explorations with Geoboards

    ERIC Educational Resources Information Center

    Wheeler, Ann; Champion, Joe

    2016-01-01

    Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

  12. GPS: Geometry, Probability, and Statistics

    ERIC Educational Resources Information Center

    Field, Mike

    2012-01-01

    It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…

  13. Some Surprising Probabilities from Bingo.

    ERIC Educational Resources Information Center

    Mercer, Joseph O.

    1993-01-01

    Investigates the probability of winning the largest prize at Bingo through a series of five simpler problems. Investigations are conducted with the aid of either BASIC computer programs, spreadsheets, or a computer algebra system such as Mathematica. Provides sample data tables to illustrate findings. (MDH)

  14. Probability Simulation in Middle School.

    ERIC Educational Resources Information Center

    Lappan, Glenda; Winter, M. J.

    1980-01-01

    Two simulations designed to teach probability to middle-school age pupils are presented. The first simulates the one-on-one foul shot simulation in basketball; the second deals with collecting a set of six cereal box prizes by buying boxes containing one toy each. (MP)

  15. Comments on quantum probability theory.

    PubMed

    Sloman, Steven

    2014-01-01

    Quantum probability theory (QP) is the best formal representation available of the most common form of judgment involving attribute comparison (inside judgment). People are capable, however, of judgments that involve proportions over sets of instances (outside judgment). Here, the theory does not do so well. I discuss the theory both in terms of descriptive adequacy and normative appropriateness.

  16. The Prediction of Spatial Aftershock Probabilities (PRESAP)

    NASA Astrophysics Data System (ADS)

    McCloskey, J.

    2003-12-01

    It is now widely accepted that the goal of deterministic earthquake prediction is unattainable in the short term and may even be forbidden by nonlinearity in the generating dynamics. This nonlinearity does not, however, preclude the estimation of earthquake probability and, in particular, how this probability might change in space and time; earthquake hazard estimation might be possible in the absence of earthquake prediction. Recently, there has been a major development in the understanding of stress triggering of earthquakes which allows accurate calculation of the spatial variation of aftershock probability following any large earthquake. Over the past few years this Coulomb stress technique (CST) has been the subject of intensive study in the geophysics literature and has been extremely successful in explaining the spatial distribution of aftershocks following several major earthquakes. The power of current micro-computers, the great number of local, telemeter seismic networks, the rapid acquisition of data from satellites coupled with the speed of modern telecommunications and data transfer all mean that it may be possible that these new techniques could be applied in a forward sense. In other words, it is theoretically possible today to make predictions of the likely spatial distribution of aftershocks in near-real-time following a large earthquake. Approximate versions of such predictions could be available within, say, 0.1 days after the mainshock and might be continually refined and updated over the next 100 days. The European Commission has recently provided funding for a project to assess the extent to which it is currently possible to move CST predictions into a practically useful time frame so that low-confidence estimates of aftershock probability might be made within a few hours of an event and improved in near-real-time, as data of better quality become available over the following day to tens of days. Specifically, the project aim is to assess the

  17. Earthquake probabilities: theoretical assessments and reality

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2013-12-01

    It is common knowledge that earthquakes are complex phenomena whose classification and sizing remain serious problems of contemporary seismology. In general, their frequency-magnitude distributions exhibit power-law scaling. This scaling differs significantly when different time and/or space domains are considered. At the scale of a particular earthquake rupture zone, the frequency of similar-size events is usually estimated to be about once in several hundred years. Evidently, contemporary seismology does not possess enough reported instrumental data for any reliable quantification of earthquake probability at a given place of expected event. Regretfully, most state-of-the-art theoretical approaches to assessing the probability of seismic events are based on trivial (e.g. Poisson, periodic, etc.) or, conversely, delicately-designed (e.g. STEP, ETAS, etc.) models of earthquake sequences. Some of these models are evidently erroneous, some can be rejected by the existing statistics, and some are hardly testable in our lifetime. Nevertheless, such probabilistic accounts, including seismic hazard assessment and earthquake forecasting, when used in practice eventually lead to scientifically groundless advice communicated to decision makers and to inappropriate decisions. As a result, the population of seismic regions continues facing unexpected risk and losses. The international project Global Earthquake Model (GEM) is on the wrong track if it continues to base seismic risk estimates on the standard, mainly probabilistic, methodology to assess seismic hazard. It is generally accepted that earthquakes are infrequent, low-probability events. However, they keep occurring at earthquake-prone areas with 100% certainty. Given the expectation of a seismic event once per hundred years, the daily probability of occurrence on a certain date may range from 0 to 100% depending on a choice of probability space (which is yet unknown and, therefore, made by a subjective lucky chance

  18. Precision robotic control of agricultural vehicles on realistic farm trajectories

    NASA Astrophysics Data System (ADS)

    Bell, Thomas

    High-precision "autofarming", or precise agricultural vehicle guidance, is rapidly becoming a reality thanks to increasing computing power and carrier-phase differential GPS ("CPDGPS") position and attitude sensors. Realistic farm trajectories will include not only rows but also arcs created by smoothly joining rows or path-planning algorithms, spirals for farming center-pivot irrigated fields, and curved trajectories dictated by nonlinear field boundaries. In addition, fields are often sloped, and accurate control may be required either on linear trajectories or on curved contours. A three-dimensional vehicle model which adapts to changing vehicle and ground conditions was created, and a low-order model for controller synthesis was extracted based on nominal conditions. The model was extended to include a towed implement. Experimentation showed that an extended Kalman filter could identify the vehicle's state in real-time. An approximation was derived for the additional positional uncertainty introduced by the noisy "lever-arm correction" necessary to translate the GPS position measurement at the roof antenna to the vehicle's control point on the ground; this approximation was then used to support the assertion that attitude measurement accuracy was as important to control point position measurement as the original position measurement accuracy at the GPS antenna. The low-order vehicle control model was transformed to polar coordinates for control on arcs and spirals. Experimental data showed that the tractor's control, point tracked an arc to within a -0.3 cm mean and a 3.4 cm standard deviation and a spiral to within a -0.2 cm mean and a 5.3 cm standard deviation. Cubic splines were used to describe curve trajectories, and a general expression for the time-rate-of-change of curve-related parameters was derived. Four vehicle control algorithms were derived for curve tracking: linear local-error control based on linearizing the vehicle about the curve's radius of

  19. Most probable paths in temporal weighted networks: An application to ocean transport

    NASA Astrophysics Data System (ADS)

    Ser-Giacomi, Enrico; Vasile, Ruggero; Hernández-García, Emilio; López, Cristóbal

    2015-07-01

    We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths and to a subset of highly probable paths that incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean Sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high-probability paths (or even a single most probable one) is enough to characterize the global connectivity properties of the network.
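
    When a path's probability is the product of its edge probabilities, the most probable path can be found with an ordinary shortest-path search on -log-transformed weights. A minimal sketch on a toy static graph (the paper works with temporal networks, which additionally require time-respecting paths; the edge values here are invented):

```python
import math
import networkx as nx

# Edge attribute "p" is a transition probability; maximizing the product of
# p along a path is equivalent to minimizing the sum of -log(p).
G = nx.DiGraph()
edges = [("A", "B", 0.6), ("B", "C", 0.5), ("A", "C", 0.2),
         ("A", "D", 0.2), ("D", "C", 0.9)]
for u, v, p in edges:
    G.add_edge(u, v, p=p, neglogp=-math.log(p))

path = nx.dijkstra_path(G, "A", "C", weight="neglogp")
prob = math.exp(-nx.dijkstra_path_length(G, "A", "C", weight="neglogp"))
print(path, prob)   # ['A', 'B', 'C'] with probability 0.30
```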

  20. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081
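
    Two textbook pooling rules often serve as baselines for the “best”-estimate models discussed above. The sketch below shows a linear (averaging) pool and a normalized geometric pool applied to hypothetical conflicting estimates; the paper's models are more elaborate than either rule:

```python
import numpy as np

# Hypothetical judged probabilities of the same event from three sources.
estimates = np.array([0.2, 0.7, 0.4])

linear_pool = estimates.mean()
# Geometric pool, normalized so that P(event) + P(not event) = 1.
g_event = np.prod(estimates) ** (1 / len(estimates))
g_complement = np.prod(1 - estimates) ** (1 / len(estimates))
geometric_pool = g_event / (g_event + g_complement)

print(f"linear pool: {linear_pool:.3f}, geometric pool: {geometric_pool:.3f}")
```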

  1. Realistic Real-Time Outdoor Rendering in Augmented Reality

    PubMed Central

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering for outdoor Augmented Reality (AR) has been an attractive topic for the last two decades, as the sizeable number of publications in computer graphics attests. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are restricted to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering that takes into account the interaction between sky colours and objects in AR systems, with respect to shadows at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour is generated with respect to the position of the sun. Secondly, the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps), is applied. Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realism in AR systems. PMID:25268480

  3. Speaker Verification in Realistic Noisy Environment in Forensic Science

    NASA Astrophysics Data System (ADS)

    Kamada, Toshiaki; Minematsu, Nobuaki; Osanai, Takashi; Makinae, Hisanori; Tanimoto, Masumi

    In forensic voice-telephony speaker verification, we may be asked to identify a speaker in a very noisy environment, unlike the conditions assumed in most research. In a noisy environment, the speech is first processed by clarifying it. However, a previous study of speaker verification from clarified speech did not yield satisfactory results. In this study, we experimented with speaker verification using clarified speech recorded in a noisy environment, and we examined the relationship between improved acoustic quality and speaker verification results. Moreover, experiments with realistic noise, such as a crime-prevention alarm and power-supply noise, were conducted, and speaker verification accuracy in a realistic environment was examined. We confirmed the validity of speaker verification with clarification of speech in a realistic noisy environment.

  4. Knot probabilities in random diagrams

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Chapman, Harrison; Mastin, Matt

    2016-10-01

    We consider a natural model of random knotting—choose a knot diagram at random from the finite set of diagrams with n crossings. We tabulate diagrams with 10 and fewer crossings and classify the diagrams by knot type, allowing us to compute exact probabilities for knots in this model. As expected, most diagrams with 10 and fewer crossings are unknots (about 78% of the roughly 1.6 billion 10-crossing diagrams). For these crossing numbers, the unknot fraction is mostly explained by the prevalence of ‘tree-like’ diagrams, which are unknots for any assignment of over/under information at crossings. The data show a roughly linear relationship between the log of knot-type probability and the log of the frequency rank of the knot type, analogous to Zipf's law for word frequency. The complete tabulation and all knot frequencies are included as supplementary data.
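
    The reported Zipf-like pattern amounts to regressing log probability on log frequency rank. A minimal sketch of such a fit, on made-up rank-ordered frequencies (the actual knot-type frequencies are in the paper's supplementary data):

```python
import numpy as np

# Hypothetical rank-ordered knot-type probabilities, for illustration only.
probs = np.array([0.78, 0.12, 0.04, 0.02, 0.01])
ranks = np.arange(1, len(probs) + 1)

# A roughly linear relationship in log-log space corresponds to a power law;
# the fitted slope is the Zipf-like exponent.
slope, intercept = np.polyfit(np.log(ranks), np.log(probs), deg=1)
print(f"fitted exponent: {slope:.2f}")
```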

  5. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: that the copy number of every molecular species may be treated as continuous, and that the probability density functions (pdfs) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising the sum of squared differences between the left- and right-hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
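
    The optimization step has the generic shape "write each moment equation as a residual in the distribution parameters, then minimize the summed squared residuals". A minimal sketch of that shape with toy residuals (matching a mean of 4 and a variance of 2) standing in for the multivariate-skew-normal moment equations, which the paper derives from the master equation:

```python
import numpy as np
from scipy.optimize import minimize

def residuals(params):
    # Toy stand-ins for "left side minus right side" of the moment equations.
    mu, sigma = params
    return np.array([mu - 4.0, sigma**2 - 2.0])

# Minimize the squared Euclidean norm of the residual vector.
result = minimize(lambda p: np.sum(residuals(p) ** 2), x0=np.array([1.0, 1.0]))
print(result.x)   # -> approximately [4.0, 1.414]
```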

  6. Probability, Information and Statistical Physics

    NASA Astrophysics Data System (ADS)

    Kuzemsky, A. L.

    2016-03-01

    In this short survey we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the interrelations between theories. The basic aim is tutorial, i.e. to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information, and statistical description with regard to basic notions of the statistical mechanics of complex systems. The review also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the aim of making these ideas easier to understand and to apply.

  7. Probability analysis of position errors using uncooled IR stereo camera

    NASA Astrophysics Data System (ADS)

    Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il

    2016-05-01

    This paper analyzes the randomness of 3D positions when tracking moving objects using an infrared (IR) stereo camera, and proposes a probability model of 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, which is caused by the discrete sampling of pixels when estimating disparity values with a stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast-moving objects are performed using an IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within the statistically reliable range that is well described by the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast-moving objects.
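
    The pixel quantization component can be illustrated with the standard stereo depth relation Z = fB/d: a half-pixel disparity error propagates to a depth error of roughly Z²·Δd/(fB). A minimal sketch with hypothetical camera parameters (not the paper's rig):

```python
# Stereo depth from disparity: Z = f * B / d.
f_pixels = 800.0      # assumed focal length, in pixels
baseline_m = 0.3      # assumed stereo baseline, in metres
disparity = 20.0      # measured disparity, in pixels

Z = f_pixels * baseline_m / disparity
# Quantization: disparity is only known to about +/- half a pixel, and the
# resulting depth error grows quadratically with depth.
delta_d = 0.5
delta_Z = Z**2 * delta_d / (f_pixels * baseline_m)
print(f"depth {Z:.2f} m, quantization uncertainty ~{delta_Z:.3f} m")
```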

  8. Realistic microwave breast models through T1-weighted 3-D MRI data.

    PubMed

    Tunçay, Ahmet Hakan; Akduman, Ibrahim

    2015-02-01

    In this paper we present an effective method for developing realistic numerical three-dimensional (3-D) microwave breast models of different shapes, sizes, and tissue densities. These models are especially convenient for microwave breast cancer imaging applications and for numerical analysis of human breast-microwave interactions. As in recent studies in this area, anatomical information about the breast tissue is collected from T1-weighted 3-D MRI data of different patients in the prone position. The method presented in this paper offers significant improvements, including efficient noise reduction and tissue segmentation, nonlinear mapping of electromagnetic properties, realistically asymmetric phantom shape, and a realistic classification of breast phantoms. Our method follows a five-step approach in which each MRI voxel is classified and mapped to the appropriate dielectric properties. In the first step, the MRI data are denoised by estimating and removing the bias field from each slice, after which the voxels are segmented into two main tissues, fibro-glandular and adipose. Using the distribution of the voxel intensities in the MRI histogram, two nonlinear mapping functions are generated for the dielectric permittivity and conductivity profiles, which allow each MRI voxel to be mapped to its proper dielectric properties. The resulting dielectric profiles are then converted into 3-D numerical breast phantoms using several image processing techniques, including morphological operations and filtering. The resultant phantoms are classified according to their adipose content, a critical parameter that affects penetration depth during microwave breast imaging.
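
    To make the idea of a nonlinear intensity-to-property mapping concrete, here is a minimal sketch using an assumed sigmoid form with invented constants; the paper instead derives its two mapping functions from the MRI intensity histogram:

```python
import numpy as np

def intensity_to_permittivity(i, eps_low=5.0, eps_high=50.0,
                              midpoint=0.5, steepness=10.0):
    """Map normalized MRI intensity in [0, 1] to a relative permittivity.

    The sigmoid shape and all constants here are illustrative assumptions,
    not the paper's fitted mapping.
    """
    w = 1.0 / (1.0 + np.exp(-steepness * (i - midpoint)))
    return eps_low + w * (eps_high - eps_low)

voxels = np.array([0.1, 0.4, 0.6, 0.9])   # normalized voxel intensities
print(intensity_to_permittivity(voxels))
```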

  9. Bosonic condensates in realistic supersymmetric GUT cosmic strings

    NASA Astrophysics Data System (ADS)

    Allys, Erwan

    2016-04-01

    We study the realistic structure of F-term Nambu-Goto cosmic strings forming in a general supersymmetric Grand Unified Theory implementation, assuming standard hybrid inflation. Examining the symmetry breaking of the unification gauge group down to the Standard Model, we discuss the minimal field content necessary to describe abelian cosmic strings appearing at the end of inflation. We find that several fields condense in most theories, raising the question of the occurrence of associated (bosonic and fermionic) currents. We perturbatively evaluate the modification of their energy per unit length due to the condensates. We provide a criterion for comparing the usual abelian Higgs approximation used in cosmology to realistic situations.

  10. The effects of realistic pancake solenoids on particle transport

    SciTech Connect

    Gu, X.; Okamura, M.; Pikin, A.; Fischer, W.; Luo, Y.

    2011-02-01

    Solenoids are widely used to transport or focus particle beams. Usually they are assumed to be ideal solenoids with a highly axisymmetric magnetic field. Using the Vector Field Opera program, we modeled asymmetrical solenoids with realistic geometry defects caused by the finite conductor size and by current jumpers. Their multipole magnetic components were analyzed with a Fourier fit method, and we present some possible optimization methods. We also discuss the effects of 'realistic' solenoids on low-energy particle transport. The findings in this paper may be applicable to the design of some low-energy particle transport systems.
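
    A Fourier fit of multipole content amounts to sampling the field around a circle inside the bore and reading off the azimuthal harmonics. A minimal sketch on a synthetic field (a small quadrupole-like ripple on a uniform term; the field values are assumptions, not the paper's model output):

```python
import numpy as np

# Hypothetical field sample on a circle: uniform term plus a 1% n=2 harmonic.
theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
B = 1.0 + 0.01 * np.cos(2 * theta)

coeffs = np.fft.rfft(B) / len(B)
# coeffs[0] is the mean (solenoidal) term; 2*|coeffs[n]| is the amplitude
# of the n-th azimuthal harmonic relative to that sample.
print(abs(coeffs[0]), 2 * abs(coeffs[2]))   # ~1.0 and ~0.01
```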

  11. Objective Probability and Quantum Fuzziness

    NASA Astrophysics Data System (ADS)

    Mohrhoff, U.

    2009-02-01

    This paper offers a critique of the Bayesian interpretation of quantum mechanics with particular focus on a paper by Caves, Fuchs, and Schack containing a critique of the “objective preparations view” or OPV. It also aims to carry the discussion beyond the hardened positions of Bayesians and proponents of the OPV. Several claims made by Caves et al. are rebutted, including the claim that different pure states may legitimately be assigned to the same system at the same time, and the claim that the quantum nature of a preparation device cannot legitimately be ignored. Both Bayesians and proponents of the OPV regard the time dependence of a quantum state as the continuous dependence on time of an evolving state of some kind. This leads to a false dilemma: quantum states are either objective states of nature or subjective states of belief. In reality they are neither. The present paper views the aforesaid dependence as a dependence on the time of the measurement to whose possible outcomes the quantum state serves to assign probabilities. This makes it possible to recognize the full implications of the only testable feature of the theory, viz., the probabilities it assigns to measurement outcomes. Most important among these are the objective fuzziness of all relative positions and momenta and the consequent incomplete spatiotemporal differentiation of the physical world. The latter makes it possible to draw a clear distinction between the macroscopic and the microscopic. This in turn makes it possible to understand the special status of measurements in all standard formulations of the theory. Whereas Bayesians have written contemptuously about the “folly” of conjoining “objective” to “probability,” there are various reasons why quantum-mechanical probabilities can be considered objective, not least the fact that they are needed to quantify an objective fuzziness. But this cannot be appreciated without giving thought to the makeup of the world, which

  12. On the universality of knot probability ratios

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

    Let $p_n$ denote the number of self-avoiding polygons of length $n$ on a regular three-dimensional lattice, and let $p_n(K)$ be the number which have knot type $K$. The probability that a random polygon of length $n$ has knot type $K$ is $p_n(K)/p_n$ and is known to decay exponentially with length (Sumners and Whittington 1988 J. Phys. A: Math. Gen. 21 1689-94, Pippenger 1989 Discrete Appl. Math. 25 273-8). Little is known rigorously about the asymptotics of $p_n(K)$, but there is substantial numerical evidence (Orlandini et al 1998 J. Phys. A: Math. Gen. 31 5953-67, Marcone et al 2007 Phys. Rev. E 75 41105, Rawdon et al 2008 Macromolecules 41 4444-51, Janse van Rensburg and Rechnitzer 2008 J. Phys. A: Math. Theor. 41 105002) that $p_n(K)$ grows as
    $$p_n(K) \simeq C_K \, \mu_\emptyset^n \, n^{\alpha - 3 + N_K}, \qquad \text{as } n \rightarrow \infty,$$
    where $N_K$ is the number of prime components of the knot type $K$. It is believed that the entropic exponent, $\alpha$, is universal, while the exponential growth rate, $\mu_\emptyset$, is independent of the knot type but varies with the lattice. The amplitude, $C_K$, depends on both the lattice and the knot type. The above asymptotic form implies that the relative probability of a random polygon of length $n$ having prime knot type $K$ over prime knot type $L$ is
    $$\frac{p_n(K)/p_n}{p_n(L)/p_n} = \frac{p_n(K)}{p_n(L)} \simeq \frac{C_K}{C_L}.$$
    In the thermodynamic limit this probability ratio becomes an amplitude ratio; it should be universal and depend only on the knot types $K$ and $L$. In this communication we examine the universality of these probability ratios for polygons in the simple cubic, face-centred cubic and body-centred cubic lattices. Our results support the hypothesis that these are universal quantities. For example, we estimate that a long random polygon is approximately 28 times more likely to be a trefoil than a figure-eight, independent of the underlying lattice, giving an estimate of the intrinsic entropy associated with knot

  13. What is preexisting strength? Predicting free association probabilities, similarity ratings, and cued recall probabilities.

    PubMed

    Nelson, Douglas L; Dyrdal, Gunvor M; Goodmon, Leilani B

    2005-08-01

    Measuring lexical knowledge poses a challenge to the study of the influence of preexisting knowledge on the retrieval of new memories. Many tasks focus on word pairs, but words are embedded in associative networks, so how should preexisting pair strength be measured? It has been measured by free association, similarity ratings, and co-occurrence statistics. Researchers interpret free association response probabilities as unbiased estimates of forward cue-to-target strength. In Study 1, analyses of large free association and extralist cued recall databases indicate that this interpretation is incorrect. Competitor and backward strengths bias free association probabilities, and as with other recall tasks, preexisting strength is described by a ratio rule. In Study 2, associative similarity ratings are predicted by forward and backward, but not by competitor, strength. Preexisting strength is not a unitary construct, because its measurement varies with method. Furthermore, free association probabilities predict extralist cued recall better than do ratings and co-occurrence statistics. The measure that most closely matches the criterion task may provide the best estimate of the identity of preexisting strength. PMID:16447386

  14. How Long Do the Dead Survive on the Road? Carcass Persistence Probability and Implications for Road-Kill Monitoring Surveys

    PubMed Central

    Santos, Sara M.; Carvalho, Filipe; Mira, António

    2011-01-01

    Background: Road mortality is probably the best-known and most visible impact of roads upon wildlife. Although several factors influence road-kill counts, carcass persistence time is considered the most important determinant underlying underestimates of road mortality. The present study aims to describe and model carcass persistence variability on the road for different taxonomic groups under different environmental conditions throughout the year, and also to assess the effect of sampling frequency on the relative variation in road-kill estimates registered within a survey. Methodology/Principal Findings: Daily surveys of road-killed vertebrates were conducted over one year along four road sections with different traffic volumes. Survival analysis was then used to i) describe carcass persistence times overall and for specific animal groups; ii) assess optimal sampling designs according to research objectives; and iii) model the influence of road, animal, and weather factors on carcass persistence probabilities. Most animal carcasses persisted on the road for the first day only, with some groups disappearing at very high rates. The advisable periodicity of road monitoring that minimizes bias in road mortality estimates is daily monitoring for bats (in the morning) and lizards (in the afternoon); daily monitoring for toads, small birds, small mammals, snakes, salamanders, and lagomorphs; a 1-day interval (alternate days) for large birds, birds of prey, hedgehogs, and freshwater turtles; and a 2-day interval for carnivores. Multiple factors influenced the persistence probabilities of vertebrate carcasses on the road. Overall, persistence was much lower for small animals, on roads with lower traffic volumes, for carcasses located on road lanes, and during humid conditions in the wet season and high temperatures in the dry season. Conclusion/Significance: The guidance given here on monitoring frequencies is particularly relevant to provide conservation and
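
    The survival analysis behind such persistence curves can be sketched with a bare-bones Kaplan-Meier estimator: S(t) is the probability that a carcass is still present t days after death. The persistence times and censoring flags below are invented for illustration, not the study's data:

```python
import numpy as np

# Days until each carcass disappeared; observed == 0 marks right-censored
# records (carcass still present at the last check).
times = np.array([1, 1, 1, 2, 2, 3, 5, 7])
observed = np.array([1, 1, 1, 1, 0, 1, 1, 0])

# Kaplan-Meier product-limit estimate of the persistence curve S(t).
S = 1.0
for t in np.unique(times[observed == 1]):
    at_risk = np.sum(times >= t)
    events = np.sum((times == t) & (observed == 1))
    S *= 1.0 - events / at_risk
    print(f"day {t}: S(t) = {S:.3f}")
```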

  15. Milky Way mass and potential recovery using tidal streams in a realistic halo

    SciTech Connect

    Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.; Johnston, Kathryn V.; Diemand, Jürg; Hogg, David W.

    2014-11-01

    We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.
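
    The abstract's forward-modeling loop has the shape of a random-walk Metropolis sampler: propose trial potential parameters, generate a model stream, and score it against the observed stream. A heavily simplified, one-parameter skeleton of that loop (the Gaussian likelihood, the stand-in "forward model", and all tuning constants are assumptions for illustration, not the FAST-FORWARD implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(loc=1.0, scale=0.1, size=100)  # stand-in for 6D stream data

def log_likelihood(param):
    model = np.full_like(observed, param)            # stand-in forward model
    return -0.5 * np.sum(((observed - model) / 0.1) ** 2)

# Random-walk Metropolis over the single "potential parameter".
param = 0.5
logL = log_likelihood(param)
chain = []
for _ in range(2000):
    trial = param + 0.05 * rng.normal()
    logL_trial = log_likelihood(trial)
    if np.log(rng.uniform()) < logL_trial - logL:    # accept/reject step
        param, logL = trial, logL_trial
    chain.append(param)

print(f"posterior mean after burn-in: {np.mean(chain[500:]):.3f}")
```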

  16. The probability of genetic parallelism and convergence in natural populations.

    PubMed

    Conte, Gina L; Arnegard, Matthew E; Peichel, Catherine L; Schluter, Dolph

    2012-12-22

    Genomic and genetic methods allow investigation of how frequently the same genes are used by different populations during adaptive evolution, yielding insights into the predictability of evolution at the genetic level. We estimated the probability of gene reuse in parallel and convergent phenotypic evolution in nature using data from published studies. The estimates are surprisingly high, with mean probabilities of 0.32 for genetic mapping studies and 0.55 for candidate gene studies. The probability declines with increasing age of the common ancestor of compared taxa, from about 0.8 for young nodes to 0.1-0.4 for the oldest nodes in our study. Probability of gene reuse is higher when populations begin from the same ancestor (genetic parallelism) than when they begin from divergent ancestors (genetic convergence). Our estimates are broadly consistent with genomic estimates of gene reuse during repeated adaptation to similar environments, but most genomic studies lack data on phenotypic traits affected. Frequent reuse of the same genes during repeated phenotypic evolution suggests that strong biases and constraints affect adaptive evolution, resulting in changes at a relatively small subset of available genes. Declines in the probability of gene reuse with increasing age suggest that these biases diverge with time.

  17. The magnitude-redshift relation in a realistic inhomogeneous universe

    SciTech Connect

    Hada, Ryuichiro; Futamase, Toshifumi E-mail: tof@astr.tohoku.ac.jp

    2014-12-01

    The light rays from a source are subject to a local inhomogeneous geometry generated by the inhomogeneous matter distribution as well as the existence of collapsed objects. In this paper we investigate the effect of inhomogeneities and of collapsed objects on the propagation of light rays, and evaluate changes in the magnitude-redshift relation from the standard relationship found in a homogeneous FRW universe. We give expressions for the correlation function and the variance of the perturbation of the apparent magnitude, and calculate them numerically using the non-linear matter power spectrum. We use the lognormal probability distribution function for the density contrast and the spherical collapse model to truncate the power spectrum, in order to estimate the blocking effect of collapsed objects. We find that the uncertainty in Ω_m is ∼0.02, and that in w is ∼0.04. We also discuss a possible method to extract these effects from real data, which contain intrinsic ambiguities associated with the absolute magnitude.

  18. Ensemble estimators for multivariate entropy estimation

    PubMed Central

    Sricharan, Kumar; Wei, Dennis; Hero, Alfred O.

    2015-01-01

    The problem of estimation of density functionals like entropy and mutual information has received much attention in the statistics and information theory communities. A large class of estimators of functionals of the probability density suffer from the curse of dimensionality, wherein the mean squared error (MSE) decays increasingly slowly as a function of the sample size T as the dimension d of the samples increases. In particular, the rate is often glacially slow, of order O(T^{-γ/d}), where γ > 0 is a rate parameter. Examples of such estimators include kernel density estimators, k-nearest neighbor (k-NN) density estimators, k-NN entropy estimators, and intrinsic dimension estimators. In this paper, we propose a weighted affine combination of an ensemble of such estimators, where optimal weights can be chosen such that the weighted estimator converges at the much faster, dimension-invariant rate of O(T^{-1}). Furthermore, we show that these optimal weights can be determined by solving a convex optimization problem, which can be performed offline and does not require training data. We illustrate the superior performance of our weighted estimator for two important applications: (i) estimating the Panter-Dite distortion-rate factor and (ii) estimating the Shannon entropy for testing the probability distribution of a random sample. PMID:25897177
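
    The weight-selection step can be sketched as a small constrained optimization: weights sum to one, a variance proxy is kept small, and the leading bias terms of the base estimators are driven toward cancellation. The moment matrix below is synthetic; the paper derives the actual constraints from the estimators' bias expansions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_estimators = 5
# Hypothetical basis of leading bias terms, one column per base estimator.
bias_terms = rng.normal(size=(3, n_estimators))

def objective(w):
    # ||w||^2 as a variance proxy, plus a penalty on residual bias terms.
    return w @ w + 100.0 * np.sum((bias_terms @ w) ** 2)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
res = minimize(objective, x0=np.full(n_estimators, 1.0 / n_estimators),
               constraints=cons)
print(res.x, res.x.sum())   # affine weights summing to 1
```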

  19. The Black Hole Formation Probability

    NASA Astrophysics Data System (ADS)

    Clausen, Drew R.; Piro, Anthony; Ott, Christian D.

    2015-01-01

    A longstanding question in stellar evolution is which massive stars produce black holes (BHs) rather than neutron stars (NSs) upon death. It has been common practice to assume that a star of a given zero-age main sequence (ZAMS) mass (and perhaps a given metallicity) simply produces either an NS or a BH, but this fails to account for a myriad of other variables that may affect the outcome, such as spin, binarity, or even stochastic differences in the stellar structure near core collapse. We argue that a probabilistic description of NS versus BH formation may instead be better suited to account for the current uncertainties in understanding how massive stars die. Using the observed BH mass distribution from Galactic X-ray binaries, we investigate the probability that a star will make a BH as a function of its ZAMS mass. Although the shape of the black hole formation probability function is poorly constrained by current measurements, we believe that this framework is an important new step toward better understanding BH formation. We also consider some of the implications of this probability distribution, from its impact on the chemical enrichment from massive stars, to its connection with the structure of the core at the time of collapse, to the birth kicks that black holes receive. A probabilistic description of BH formation will be a useful input for future population synthesis studies interested in the formation of X-ray binaries, the nature and event rate of gravitational wave sources, and questions of chemical enrichment.
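
    One way to picture such a formation probability function is as a smooth curve in ZAMS mass. The logistic form and every constant below are assumptions made purely for illustration; the paper constrains the actual shape from the observed Galactic BH mass distribution:

```python
import numpy as np

def p_black_hole(m_zams, m_half=25.0, width=5.0):
    """Illustrative logistic probability that a star of the given ZAMS mass
    (solar masses) forms a BH rather than an NS; parameters are assumed."""
    return 1.0 / (1.0 + np.exp(-(m_zams - m_half) / width))

for m in (15.0, 25.0, 40.0):
    print(f"M_ZAMS = {m:.0f} Msun -> P(BH) = {p_black_hole(m):.2f}")
```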

  20. Developing Skills: Realistic Work Environments in Further Education. FEDA Reports.

    ERIC Educational Resources Information Center

    Armstrong, Paul; Hughes, Maria

    To establish the prevalence and perceived value of realistic work environments (RWEs) in colleges and their use as learning resources, all further education (FE) sector colleges in Great Britain were surveyed in the summer of 1998. Of 175 colleges that responded to 2 questionnaires for senior college managers and RWE managers, 127 had at least 1…