Science.gov

Sample records for realistic probability estimates

  1. Realistic Probability Estimates For Destructive Overpressure Events In Heated Center Wing Tanks Of Commercial Jet Aircraft

    SciTech Connect

    Alvares, N; Lambert, H

    2007-02-07

    The Federal Aviation Administration (FAA) identified 17 accidents that may have resulted from fuel tank explosions on commercial aircraft from 1959 to 2001. Seven events involved JP 4 or JP 4/Jet A mixtures that are no longer used for commercial aircraft fuel. The remaining 10 events involved Jet A or Jet A1 fuels that are in current use by the commercial aircraft industry. Four fuel tank explosions occurred in center wing tanks (CWTs) where on-board appliances can potentially transfer heat to the tank. These tanks are designated as "Heated Center Wing Tanks" (HCWTs). Since 1996, the FAA has significantly increased the rate at which it has mandated airworthiness directives (ADs) directed at the elimination of ignition sources. This effort includes the adoption, in 2001, of Special Federal Aviation Regulation 88 of 14 CFR part 21 (SFAR 88, "Fuel Tank System Fault Tolerance Evaluation Requirements"). This paper addresses the effectiveness of SFAR 88 in reducing HCWT ignition source probability. Our statistical analysis, relating the occurrence of both on-ground and in-flight HCWT explosions to the cumulative flight hours of commercial passenger aircraft containing HCWTs, reveals that the best estimate of the HCWT explosion rate is 1 explosion in 1.4 × 10^8 flight hours. Based on an analysis of SFAR 88 by Sandia National Laboratories and our independent analysis, SFAR 88 reduces the current risk of historical HCWT explosion by at least a factor of 10, thus meeting the FAA risk criterion of 1 accident per billion flight hours. This paper also surveys and analyzes parameters for Jet A fuel ignition in HCWTs. Because of the paucity of in-flight HCWT explosions, we conclude that the intersection of the parameters necessary and sufficient to result in an HCWT explosion with sufficient overpressure to rupture the HCWT is extremely rare.
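
    A quick arithmetic check of the figures quoted above (the rates are taken from the abstract; the per-flight-hour comparison is ours):

      # Sketch: check that a 10x reduction of the historical HCWT explosion rate
      # meets the FAA criterion of 1 accident per 1e9 flight hours (figures from the abstract).
      historical_rate = 1 / 1.4e8          # explosions per flight hour (best estimate)
      reduced_rate = historical_rate / 10  # after SFAR 88 (at least a factor of 10)
      faa_criterion = 1 / 1e9              # 1 accident per billion flight hours

      print(f"historical rate: {historical_rate:.2e} per flight hour")
      print(f"reduced rate:    {reduced_rate:.2e} per flight hour")
      print("meets FAA criterion:", reduced_rate <= faa_criterion)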

  2. ESTIMATION OF AGE TRANSITION PROBABILITIES.

    ERIC Educational Resources Information Center

    ZINTER, JUDITH R.

    This note describes the procedures used in determining DYNAMOD II age transition matrices. A separate matrix for each sex-race group is developed. These matrices will be used as an aid in estimating the transition probabilities in the larger DYNAMOD II matrix relating age to occupational categories. Three steps were used in the procedure--(1)…

  3. Dynamic probability estimator for machine learning.

    PubMed

    Starzyk, Janusz A; Wang, Feng

    2004-03-01

    An efficient algorithm for dynamic estimation of probabilities, without division, on an unlimited number of input data is presented. The method estimates probabilities of the sampled data from the raw sample count while keeping the total count value constant. Accuracy of the estimate depends on the counter size rather than on the total number of data points. The estimator follows variations of the incoming data probability within a fixed window size, without explicit implementation of the windowing technique. The total design area is very small and all probabilities are estimated concurrently. The dynamic probability estimator was implemented using a programmable gate array from Xilinx. The performance of this implementation is evaluated in terms of area efficiency and execution time. This method is suitable for the highly integrated design of artificial neural networks in which a large number of dynamic probability estimators can work concurrently. PMID:15384523
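
    A minimal software sketch of one way such a bounded-count scheme can behave (this illustrates the general idea only; the halving-on-overflow rule, the counter width, and the use of division here are assumptions, not the authors' division-free hardware design):

      import random

      class DynamicProbabilityEstimator:
          """Track symbol probabilities with bounded counters.

          When the total count reaches the counter capacity, all counts are
          halved, so the estimate follows a sliding window of recent data
          without explicitly storing that window.
          """

          def __init__(self, num_symbols, max_total=64):
              self.counts = [1] * num_symbols   # uniform pseudo-counts
              self.max_total = max_total

          def update(self, symbol):
              self.counts[symbol] += 1
              if sum(self.counts) >= self.max_total:
                  # Rescale: keeps the total bounded and gradually forgets old data.
                  self.counts = [max(1, c // 2) for c in self.counts]

          def probability(self, symbol):
              return self.counts[symbol] / sum(self.counts)

      # Usage: the estimate tracks a binary source whose P(1) drifts from 0.2 to 0.8.
      est = DynamicProbabilityEstimator(num_symbols=2)
      for p_true in (0.2, 0.8):
          for _ in range(2000):
              est.update(1 if random.random() < p_true else 0)
          print(f"true P(1)={p_true:.1f} -> estimated {est.probability(1):.2f}")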

  4. Cold and hot cognition: quantum probability theory and realistic psychological modeling.

    PubMed

    Corr, Philip J

    2013-06-01

    Typically, human decision making is emotionally "hot" and does not conform to "cold" classical probability (CP) theory. As quantum probability (QP) theory emphasises order, context, superimposition states, and nonlinear dynamic effects, one of its major strengths may be its power to unify formal modeling and realistic psychological theory (e.g., information uncertainty, anxiety, and indecision, as seen in the Prisoner's Dilemma). PMID:23673029

  5. Point estimates for probability moments

    PubMed Central

    Rosenblueth, Emilio

    1975-01-01

    Given a well-behaved real function Y of a real random variable X and the first two or three moments of X, expressions are derived for the moments of Y as linear combinations of powers of the point estimates y(x+) and y(x-), where x+ and x- are specific values of X. Higher-order approximations and approximations for discontinuous Y using more point estimates are also given. Second-moment approximations are generalized to the case when Y is a function of several variables. PMID:16578731
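
    For a symmetric random variable X the result reduces to the familiar two-point rule with equal weights of 1/2 at x+ and x- one standard deviation either side of the mean; a minimal sketch under that symmetry assumption (the paper's asymmetric and multi-variable generalizations are not shown):

      def rosenblueth_two_point(y, mean_x, std_x):
          """Two-point estimate of the mean and standard deviation of Y = y(X),
          assuming X is symmetric so both points x+ and x- carry weight 1/2."""
          y_plus = y(mean_x + std_x)    # y(x+)
          y_minus = y(mean_x - std_x)   # y(x-)
          mean_y = 0.5 * (y_plus + y_minus)
          std_y = abs(y_plus - y_minus) / 2.0
          return mean_y, std_y

      # Usage: Y = X**2 with X of mean 10 and standard deviation 1.
      # The exact mean of Y is 101; the exact std is about 20.05 if X is normal.
      print(rosenblueth_two_point(lambda x: x ** 2, 10.0, 1.0))   # (101.0, 20.0)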

  6. Conflict Probability Estimation for Free Flight

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    1996-01-01

    The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction, however, and becomes less certain the farther in advance the prediction is made. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
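
    The paper derives an analytical solution; the sketch below reproduces the same quantity by brute-force Monte Carlo under stated assumptions (a 2-D relative-position model, normally distributed errors, and a circular conflict zone whose radius and the numeric values below are illustrative):

      import numpy as np

      def conflict_probability_mc(rel_mean, cov1, cov2, conflict_radius,
                                  n=200_000, seed=0):
          """Monte Carlo estimate of the probability that the relative position
          of an aircraft pair falls inside a circular conflict zone, after the
          two trajectory-error covariances are combined into one equivalent
          covariance of the relative position."""
          rng = np.random.default_rng(seed)
          combined_cov = np.asarray(cov1) + np.asarray(cov2)
          samples = rng.multivariate_normal(rel_mean, combined_cov, size=n)
          inside = np.linalg.norm(samples, axis=1) < conflict_radius
          return inside.mean()

      # Usage: 8 nmi predicted miss distance, 3 nmi along-track / 1 nmi cross-track
      # error standard deviation per aircraft, 5 nmi separation standard.
      cov = np.diag([3.0 ** 2, 1.0 ** 2])
      print(conflict_probability_mc([8.0, 0.0], cov, cov, conflict_radius=5.0))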

  7. Radiation Dose Estimation Using Realistic Postures with PIMAL

    SciTech Connect

    Akkurt, Hatice; Wiarda, Dorothea; Eckerman, Keith F

    2010-12-01

    For correct radiation dose assessment, it is important to take posture into account. A computational phantom with moving arms and legs was previously developed to address this need. Further, an accompanying graphical user interface (GUI), called PIMAL, was developed to enable dose estimation using realistic postures in a user-friendly manner such that the analyst's time could be substantially reduced. The importance of posture for correct dose estimation has been demonstrated with a few case studies in earlier analyses. The previous version of PIMAL was somewhat limited in its features (i.e., it contained only a hermaphrodite phantom model and allowed only an isotropic source definition). The GUI is currently being further enhanced by incorporating additional phantom models, improving its features, and increasing its user friendliness in general. This paper describes these recent updates to PIMAL, which aims to perform radiation transport simulations for phantom models in realistic postures in a user-friendly manner. In future work additional phantom models, including hybrid phantom models, will be incorporated. In addition to these enhancements, a library of input files for the case studies analyzed to date will be included in PIMAL.

  8. Model estimates hurricane wind speed probabilities

    NASA Astrophysics Data System (ADS)

    Murnane, Richard J.; Barton, Chris; Collins, Eric; Donnelly, Jeffrey; Elsner, James; Emanuel, Kerry; Ginis, Isaac; Howard, Susan; Landsea, Chris; Liu, Kam-biu; Malmquist, David; McKay, Megan; Michaels, Anthony; Nelson, Norm; O'Brien, James; Scott, David; Webb, Thompson, III

    In the United States, intense hurricanes (category 3, 4, and 5 on the Saffir/Simpson scale) with winds greater than 50 m s⁻¹ have caused more damage than any other natural disaster [Pielke and Pielke, 1997]. Accurate estimates of wind speed exceedance probabilities (WSEP) due to intense hurricanes are therefore of great interest to (re)insurers, emergency planners, government officials, and populations in vulnerable coastal areas. The historical record of U.S. hurricane landfall is relatively complete only from about 1900, and most model estimates of WSEP are derived from this record. During the 1899-1998 period, only two category-5 and 16 category-4 hurricanes made landfall in the United States. The historical record therefore provides only a limited sample of the most intense hurricanes.

  9. Distributed estimation and joint probabilities estimation by entropy model

    NASA Astrophysics Data System (ADS)

    Fassinut-Mombot, B.; Zribi, M.; Choquel, J. B.

    2001-05-01

    This paper proposes the use of the Entropy Model for distributed estimation systems. The Entropy Model is an entropic technique based on the minimization of conditional entropy, developed for the Multi-Source/Sensor Information Fusion (MSIF) problem. We address the problem of distributed estimation from independent observations involving multiple sources, i.e., the problem of estimating or selecting one of several identity declarations, or hypotheses, concerning an observed object. Two problems are considered in the Entropy Model. In order to fuse observations using the Entropy Model, it is necessary to know or estimate the conditional probabilities and, equivalently, the joint probabilities. A common practice when estimating probability distributions from data about which nothing is known (no a priori knowledge) is to prefer distributions that are as uniform as possible, that is, distributions that have maximal entropy. Next, the problem of combining (or "fusing") observations relating to identity hypotheses and selecting the most appropriate hypothesis about the object's identity is addressed. Much future work remains, but the results indicate that the Entropy Model is a promising technique for distributed estimation.

  10. Estimating flood exceedance probabilities in estuarine regions

    NASA Astrophysics Data System (ADS)

    Westra, Seth; Leonard, Michael

    2016-04-01

    Flood events in estuarine regions can arise from the interaction of extreme rainfall and storm surge. Determining flood level exceedance probabilities in these regions is complicated by the dependence of these processes for extreme events. A comprehensive study of tide and rainfall gauges along the Australian coastline was conducted to determine the dependence of these extremes using a bivariate logistic threshold-excess model. The dependence strength is shown to vary as a function of distance over many hundreds of kilometres, indicating that the dependence arises due to synoptic-scale meteorological forcings. It is also shown to vary as a function of storm burst duration and of the time lag between the extreme rainfall and the storm surge event. The dependence estimates are then used with a bivariate design variable method to determine flood risk in estuarine regions for a number of case studies. Aspects of the method demonstrated in the case studies include the resolution and range of the hydraulic response table, the fitting of probability distributions, computational efficiency, uncertainty, potential variation in marginal distributions due to climate change, and application to two-dimensional output from hydraulic models. Case studies are located on the Swan River (Western Australia), Nambucca River and Hawkesbury-Nepean River (New South Wales).

  11. Estimating the Probability of Negative Events

    ERIC Educational Resources Information Center

    Harris, Adam J. L.; Corner, Adam; Hahn, Ulrike

    2009-01-01

    How well we are attuned to the statistics of our environment is a fundamental question in understanding human behaviour. It seems particularly important to be able to provide accurate assessments of the probability with which negative events occur so as to guide rational choice of preventative actions. One question that arises here is whether or…

  12. Estimation of transition probabilities of credit ratings

    NASA Astrophysics Data System (ADS)

    Peng, Gan Chew; Hin, Pooi Ah

    2015-12-01

    The present research is based on the quarterly credit ratings of ten companies over 15 years taken from the database of the Taiwan Economic Journal. The components of the vector m_i = (m_i1, m_i2, ..., m_i10) may first be used to denote the credit ratings of the ten companies in the i-th quarter. The vector m_{i+1} in the next quarter is modelled to be dependent on the vector m_i via a conditional distribution which is derived from a 20-dimensional power-normal mixture distribution. The transition probability P_kl(i, j) of getting m_{i+1,j} = l given that m_{i,j} = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_kl(i, j) as i varies is able to give an indication of the possible transition of the credit rating of the j-th company in the near future.

  13. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-05-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  14. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  15. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.

  16. PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties

    SciTech Connect

    Caron, D. S.; Browne, E.; Norman, E. B.

    2009-08-21

    The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given below.

  17. A Bayesian Estimator of Protein-Protein Association Probabilities

    SciTech Connect

    Gilmore, Jason M.; Auberry, Deanna L.; Sharp, Julia L.; White, Amanda M.; Anderson, Kevin K.; Daly, Don S.

    2008-07-01

    The Bayesian Estimator of Protein-Protein Association Probabilities (BEPro3) is a software tool for estimating probabilities of protein-protein association between bait and prey protein pairs using data from multiple-bait, multiple-replicate, protein pull-down LC-MS assay experiments. BEPro3 is open source software that runs on both Windows XP and Mac OS 10.4 or newer versions, and is freely available from http://www.pnl.gov/statistics/BEPro3.

  18. Naive Probability: Model-Based Estimates of Unique Events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Philip N

    2015-08-01

    We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways, when they made estimates about conjunctions: P(A), P(B), P(A and B), disjunctions: P(A), P(B), P(A or B or both), and conditional probabilities P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning. PMID:25363706

  19. Probability estimation in arithmetic and adaptive-Huffman entropy coders.

    PubMed

    Duttweiler, D L; Chamzas, C

    1995-01-01

    Entropy coders, such as Huffman and arithmetic coders, achieve compression by exploiting nonuniformity in the probabilities under which a random variable to be coded takes on its possible values. Practical realizations generally require running adaptive estimates of these probabilities. An analysis of the relationship between estimation quality and the resulting coding efficiency suggests a particular scheme, dubbed scaled-count, for obtaining such estimates. It can optimally balance estimation accuracy against a need for rapid response to changing underlying statistics. When the symbols being coded are from a binary alphabet, simple hardware and software implementations requiring almost no computation are possible. A scaled-count adaptive probability estimator of the type described in this paper is used in the arithmetic coder of the JBIG and JPEG image coding standards. PMID:18289975
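
    The link between estimate quality and coding efficiency can be made concrete: an ideal binary entropy coder driven by an estimate q spends the cross-entropy H(p, q) bits per symbol on a source whose true probability of a one is p, an excess of the Kullback-Leibler divergence over the entropy H(p). A small illustration (the numbers are ours, not from the paper):

      import math

      def entropy(p):
          return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

      def cross_entropy(p, q):
          """Average bits per symbol when a source with P(1) = p is coded
          by an ideal arithmetic coder using the estimate q."""
          return -(p * math.log2(q) + (1 - p) * math.log2(1 - q))

      p_true = 0.1
      for q in (0.1, 0.2, 0.5):
          cost = cross_entropy(p_true, q)
          print(f"estimate q={q:.1f}: {cost:.3f} bits/symbol "
                f"({cost - entropy(p_true):.3f} excess)")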

  20. Interval estimation of small tail probabilities - applications in food safety.

    PubMed

    Kedem, Benjamin; Pan, Lemeng; Zhou, Wen; Coelho, Carlos A

    2016-08-15

    Often in food safety and bio-surveillance it is desirable to estimate the probability that a contaminant or a function thereof exceeds an unsafe high threshold. The probability or chance in question is very small. To estimate such a probability, we need information about large values. In many cases, the data do not contain information about exceedingly large contamination levels, which ostensibly renders the problem insolvable. A solution is suggested whereby more information about small tail probabilities is obtained by combining the real data with computer-generated data repeatedly. This method provides short yet reliable interval estimates based on moderately large samples. An illustration is provided in terms of lead exposure data. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26891189

  1. 27% Probable: Estimating Whether or Not Large Numbers Are Prime.

    ERIC Educational Resources Information Center

    Bosse, Michael J.

    2001-01-01

    This brief investigation exemplifies such considerations by relating concepts from number theory, set theory, probability, logic, and calculus. Satisfying the call for students to acquire skills in estimation, the following technique allows one to "immediately estimate" whether or not a number is prime. (MM)
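
    The abstract does not reproduce the technique itself; one common estimate of this kind rests on the easy divisibility checks for 2, 3, and 5, which only about 27 % of integers survive (this is our reading of the title, stated as an assumption rather than the article's actual derivation):

      from math import prod

      # Assumption: the "immediate estimate" screens a number by the divisibility
      # rules for the small primes 2, 3, and 5, which can be read off the digits.
      small_primes = [2, 3, 5]
      surviving_fraction = prod(1 - 1 / p for p in small_primes)
      # (1/2) * (2/3) * (4/5) = 4/15, i.e. roughly 27%.
      print(f"fraction of integers passing the quick checks: {surviving_fraction:.1%}")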

  2. Estimation of State Transition Probabilities: A Neural Network Model

    NASA Astrophysics Data System (ADS)

    Saito, Hiroshi; Takiyama, Ken; Okada, Masato

    2015-12-01

    Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
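
    A minimal sketch of a prediction-error-driven estimate of a transition matrix (the delta-rule form, the learning rate, and the toy two-state chain are assumptions; this is not the paper's exact learning rule):

      import numpy as np

      def learn_transition_probabilities(state_sequence, n_states, lr=0.05):
          """Estimate P(next_state | state) online from observed transitions,
          using a state prediction error (delta-rule) update. Each update keeps
          the corresponding row of the matrix normalized."""
          P = np.full((n_states, n_states), 1.0 / n_states)  # start uniform
          for s, s_next in zip(state_sequence[:-1], state_sequence[1:]):
              target = np.zeros(n_states)
              target[s_next] = 1.0
              prediction_error = target - P[s]   # state prediction error
              P[s] += lr * prediction_error
          return P

      # Usage: a two-state chain with P(0->1)=0.3 and P(1->0)=0.6 (illustrative).
      rng = np.random.default_rng(1)
      true_P = np.array([[0.7, 0.3], [0.6, 0.4]])
      seq = [0]
      for _ in range(20_000):
          seq.append(rng.choice(2, p=true_P[seq[-1]]))
      print(np.round(learn_transition_probabilities(seq, 2), 2))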

  3. Estimating the empirical probability of submarine landslide occurrence

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2010-01-01

    The empirical probability for the occurrence of submarine landslides at a given location can be estimated from age dates of past landslides. In this study, tools developed to estimate earthquake probability from paleoseismic horizons are adapted to estimate submarine landslide probability. In both types of estimates, one has to account for the uncertainty associated with age-dating individual events as well as the open time intervals before and after the observed sequence of landslides. For observed sequences of submarine landslides, we typically only have the age date of the youngest event and possibly of a seismic horizon that lies below the oldest event in a landslide sequence. We use an empirical Bayes analysis based on the Poisson-Gamma conjugate prior model specifically applied to the landslide probability problem. This model assumes that landslide events as imaged in geophysical data are independent and occur in time according to a Poisson distribution characterized by a rate parameter λ. With this method, we are able to estimate the most likely value of λ and, importantly, the range of uncertainty in this estimate. Examples considered include landslide sequences observed in the Santa Barbara Channel, California, and in Port Valdez, Alaska. We confirm that, given the uncertainties of age dating, landslide complexes can be treated as single events by performing a statistical test on age dates representing the main failure episode of the Holocene Storegga landslide complex.
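
    Under the Poisson-Gamma conjugate model described above, the posterior for the rate λ after observing n landslides in a record of length T has a closed form; a minimal sketch with an assumed prior and illustrative data (not the paper's empirically estimated prior or its treatment of age-date uncertainty):

      from scipy import stats

      # Assumed prior Gamma(alpha0, beta0) and illustrative data: n events in T kyr.
      alpha0, beta0 = 1.0, 1.0        # prior shape and rate (assumption)
      n_events, T = 4, 10.0           # e.g., 4 dated landslides in a 10 kyr record

      alpha_post = alpha0 + n_events  # conjugate Gamma posterior on the Poisson rate
      beta_post = beta0 + T

      posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
      print(f"most likely rate ~ {(alpha_post - 1) / beta_post:.3f} events / kyr")
      print(f"posterior mean     {posterior.mean():.3f} events / kyr")
      print(f"95% credible range {posterior.ppf([0.025, 0.975]).round(3)}")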

  4. Bayesian Estimator of Protein-Protein Association Probabilities

    Energy Science and Technology Software Center (ESTSC)

    2008-05-28

    The Bayesian Estimator of Protein-Protein Association Probabilities (BEPro3) is a software tool for estimating probabilities of protein-protein association between bait and prey protein pairs using data from multiple-bait, multiple-replicate, protein LC-MS/MS affinity isolation experiments. BEPro3 is public domain software, has been tested on Windows XP and on version 10.4 or newer of Mac OS, and is freely available. A user guide, an example dataset with analysis, and additional documentation are included with the BEPro3 download.

  5. An application of recurrent nets to phone probability estimation.

    PubMed

    Robinson, A J

    1994-01-01

    This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed; a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation. PMID:18267798

  6. Estimating Second Order Probability Beliefs from Subjective Survival Data

    PubMed Central

    Hudomiet, Péter; Willis, Robert J.

    2013-01-01

    Based on subjective survival probability questions in the Health and Retirement Study (HRS), we use an econometric model to estimate the determinants of individual-level uncertainty about personal longevity. This model is built around the modal response hypothesis (MRH), a mathematical expression of the idea that survey responses of 0%, 50%, or 100% to probability questions indicate a high level of uncertainty about the relevant probability. We show that subjective survival expectations in 2002 line up very well with realized mortality of the HRS respondents between 2002 and 2010. We show that the MRH model performs better than typically used models in the literature of subjective probabilities. Our model gives more accurate estimates of low probability events and it is able to predict the unusually high fraction of focal 0%, 50%, and 100% answers observed in many data sets on subjective probabilities. We show that subjects place too much weight on parents’ age at death when forming expectations about their own longevity, whereas other covariates such as demographics, cognition, personality, subjective health, and health behavior are underweighted. We also find that less educated people, smokers, and women have less certain beliefs, and recent health shocks increase uncertainty about survival, too. PMID:24403866

  7. Simulation and Estimation of Extreme Quantiles and Extreme Probabilities

    SciTech Connect

    Guyader, Arnaud; Hengartner, Nicolas; Matzner-Lober, Eric

    2011-10-15

    Let X be a random vector with distribution μ on R^d and Φ be a mapping from R^d to R. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.

  8. Using Correlation to Compute Better Probability Estimates in Plan Graphs

    NASA Technical Reports Server (NTRS)

    Bryce, Daniel; Smith, David E.

    2006-01-01

    Plan graphs are commonly used in planning to help compute heuristic "distance" estimates between states and goals. A few authors have also attempted to use plan graphs in probabilistic planning to compute estimates of the probability that propositions can be achieved and actions can be performed. This is done by propagating probability information forward through the plan graph from the initial conditions through each possible action to the action effects, and hence to the propositions at the next layer of the plan graph. The problem with these calculations is that they make very strong independence assumptions - in particular, they usually assume that the preconditions for each action are independent of each other. This can lead to gross overestimates in probability when the plans for those preconditions interfere with each other. It can also lead to gross underestimates of probability when there is synergy between the plans for two or more preconditions. In this paper we introduce a notion of the binary correlation between two propositions and actions within a plan graph, show how to propagate this information within a plan graph, and show how this improves probability estimates for planning. This notion of correlation can be thought of as a continuous generalization of the notion of mutual exclusion (mutex) often used in plan graphs. At one extreme (correlation=0) two propositions or actions are completely mutex. With correlation = 1, two propositions or actions are independent, and with correlation > 1, two propositions or actions are synergistic. Intermediate values can and do occur, indicating different degrees to which propositions and actions interfere or are synergistic. We compare this approach with another recent approach by Bryce that computes probability estimates using Monte Carlo simulation of possible worlds in plan graphs.

  9. Failure probability estimate of Type 304 stainless steel piping

    SciTech Connect

    Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.; Mehta, H.S.; Ranganath, S. (General Electric Co., San Jose, CA)

    1989-01-01

    The primary source of in-service degradation of the SRS production reactor process water piping is intergranular stress corrosion cracking (IGSCC). IGSCC has occurred in a limited number of weld heat affected zones, areas known to be susceptible to IGSCC. A model has been developed to combine crack growth rates, crack size distributions, in-service examination reliability estimates and other considerations to estimate the pipe large-break frequency. This frequency estimates the probability that an IGSCC crack will initiate, escape detection by ultrasonic (UT) examination, and grow to instability prior to extending through-wall and being detected by the sensitive leak detection system. These events are combined as the product of four factors: (1) the probability that a given weld heat affected zone contains IGSCC, (2) the conditional probability, given the presence of IGSCC, that the cracking will escape detection during UT examination, (3) the conditional probability, given a crack escapes detection by UT, that it will not grow through-wall and be detected by leakage, and (4) the conditional probability, given a crack is not detected by leakage, that it grows to instability prior to the next UT exam. These four factors estimate the occurrence of several conditions that must coexist in order for a crack to lead to a large break of the process water piping. When evaluated for the SRS production reactors, they produce an extremely low break frequency. The objective of this paper is to present the assumptions, methodology, results and conclusions of a probabilistic evaluation for the direct failure of the primary coolant piping resulting from normal operation and seismic loads. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break.
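
    A small numerical sketch of how the four factors above combine multiplicatively into a break frequency (the values are purely illustrative placeholders, not numbers from the paper):

      # Illustrative values only; the abstract does not give the actual factors.
      p_igscc_in_haz = 1e-2          # (1) weld heat affected zone contains IGSCC
      p_missed_by_ut = 1e-1          # (2) cracking escapes UT detection
      p_no_leak_detection = 1e-1     # (3) crack not detected by through-wall leakage
      p_grows_to_instability = 1e-2  # (4) crack reaches instability before next exam

      break_probability = (p_igscc_in_haz * p_missed_by_ut
                           * p_no_leak_detection * p_grows_to_instability)
      print(f"probability of large break per weld per interval: {break_probability:.1e}")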

  10. Revising probability estimates: Why increasing likelihood means increasing impact.

    PubMed

    Maglio, Sam J; Polman, Evan

    2016-08-01

    Forecasted probabilities rarely stay the same for long. Instead, they are subject to constant revision; as they move upward or downward, uncertain events become more or less likely. Yet little is known about how people interpret probability estimates beyond static snapshots, like a 30% chance of rain. Here, we consider the cognitive, affective, and behavioral consequences of revisions to probability forecasts. Stemming from a lay belief that revisions signal the emergence of a trend, we find in 10 studies (comprising uncertain events such as weather, climate change, sex, sports, and wine) that upward changes to event-probability (e.g., increasing from 20% to 30%) cause events to feel less remote than downward changes (e.g., decreasing from 40% to 30%), and subsequently change people's behavior regarding those events despite the revised event-probabilities being the same. Our research sheds light on how revising the probabilities for future events changes how people manage those uncertain events. (PsycINFO Database Record) PMID:27281350

  11. Estimating transition probabilities in unmarked populations --entropy revisited

    USGS Publications Warehouse

    Cooch, E.G.; Link, W.A.

    1999-01-01

    The probability of surviving and moving between 'states' is of great interest to biologists. Robust estimation of these transitions using multiple observations of individually identifiable marked individuals has received considerable attention in recent years. However, in some situations, individuals are not identifiable (or have a very low recapture rate), although all individuals in a sample can be assigned to a particular state (e.g. breeding or non-breeding) without error. In such cases, only aggregate data (number of individuals in a given state at each occasion) are available. If the underlying matrix of transition probabilities does not vary through time and aggregate data are available for several time periods, then it is possible to estimate these parameters using least-squares methods. Even when such data are available, this assumption of stationarity will usually be deemed overly restrictive and, frequently, data will only be available for two time periods. In these cases, the problem reduces to estimating the most likely matrix (or matrices) leading to the observed frequency distribution of individuals in each state. An entropy maximization approach has been previously suggested. In this paper, we show that the entropy approach rests on a particular limiting assumption, and does not provide estimates of latent population parameters (the transition probabilities), but rather predictions of realized rates.
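
    When aggregate counts are available for several occasions and the transition matrix is assumed stationary, the least-squares idea mentioned above can be sketched as follows (the crude clip-and-renormalize step stands in for properly constrained least squares, and the two-state data are illustrative):

      import numpy as np

      def estimate_transition_matrix(counts):
          """Least-squares estimate of a stationary transition matrix from
          aggregate counts (one row per occasion, one column per state).

          Rows of the result are clipped to be non-negative and renormalized,
          a crude substitute for properly constrained least squares."""
          counts = np.asarray(counts, dtype=float)
          A, B = counts[:-1], counts[1:]          # n_t and n_{t+1}
          P, *_ = np.linalg.lstsq(A, B, rcond=None)
          P = np.clip(P, 0.0, None)
          return P / P.sum(axis=1, keepdims=True)

      # Usage: two states (e.g., breeding / non-breeding), counts on five occasions.
      true_P = np.array([[0.8, 0.2], [0.3, 0.7]])
      counts = [np.array([900.0, 100.0])]
      for _ in range(4):
          counts.append(counts[-1] @ true_P)      # noiseless aggregate dynamics
      print(np.round(estimate_transition_matrix(counts), 2))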

  12. Estimating probable flaw distributions in PWR steam generator tubes

    SciTech Connect

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  13. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
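
    A minimal sketch of the core idea, modelling the conditional exceedance probability with a logistic function of a covariate (synthetic data; the paper's partial-likelihood handling of dependence is not reproduced):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Synthetic pixels: one covariate (e.g., a radiometer brightness value) and a
      # binary label for whether rain rate exceeds the threshold at that pixel.
      x = rng.normal(size=(5000, 1))
      p_true = 1.0 / (1.0 + np.exp(-(1.5 * x[:, 0] - 0.5)))   # true exceedance prob
      y = rng.random(5000) < p_true

      model = LogisticRegression().fit(x, y)
      print("fitted coefficient, intercept:", model.coef_[0][0], model.intercept_[0])

      # Estimated exceedance probability for a new pixel with covariate value 1.0,
      # and the fractional rainy area over the scene (mean of fitted probabilities).
      print("P(exceed | x=1.0) =", model.predict_proba([[1.0]])[0, 1])
      print("fractional rainy area =", model.predict_proba(x)[:, 1].mean())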

  14. Probability Density and CFAR Threshold Estimation for Hyperspectral Imaging

    SciTech Connect

    Clark, G A

    2004-09-21

    The work reported here shows the proof of principle (using a small data set) for a suite of algorithms designed to estimate the probability density function of hyperspectral background data and compute the appropriate Constant False Alarm Rate (CFAR) matched filter decision threshold for a chemical plume detector. Future work will provide a thorough demonstration of the algorithms and their performance with a large data set. The LASI (Large Aperture Search Initiative) Project involves instrumentation and image processing for hyperspectral images of chemical plumes in the atmosphere. The work reported here involves research and development on algorithms for reducing the false alarm rate in chemical plume detection and identification algorithms operating on hyperspectral image cubes. The chemical plume detection algorithms to date have used matched filters designed using generalized maximum likelihood ratio hypothesis testing algorithms [1, 2, 5, 6, 7, 12, 10, 11, 13]. One of the key challenges in hyperspectral imaging research is the high false alarm rate that often results from the plume detector [1, 2]. The overall goal of this work is to extend the classical matched filter detector to apply Constant False Alarm Rate (CFAR) methods to reduce the false alarm rate, or Probability of False Alarm P_FA, of the matched filter [4, 8, 9, 12]. A detector designer is interested in minimizing the probability of false alarm while simultaneously maximizing the probability of detection P_D. This is summarized by the Receiver Operating Characteristic Curve (ROC) [10, 11], which is actually a family of curves depicting P_D vs. P_FA parameterized by varying levels of signal to noise (or clutter) ratio (SNR or SCR). Often, it is advantageous to be able to specify a desired P_FA and develop a ROC curve (P_D vs. decision threshold r_0) for that case. That is the purpose of this work. Specifically, this work develops a set of algorithms and MATLAB
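
    A minimal sketch of the CFAR thresholding step: estimate the background distribution of the matched-filter output and set the decision threshold r_0 at its (1 - P_FA) quantile (the Gaussian background and the chosen P_FA values are assumptions for illustration):

      import numpy as np

      def cfar_threshold(background_scores, p_fa):
          """Decision threshold giving the requested false-alarm probability,
          taken as the empirical (1 - p_fa) quantile of background-only
          matched-filter outputs."""
          return np.quantile(background_scores, 1.0 - p_fa)

      # Usage: simulated background-only matched filter outputs.
      rng = np.random.default_rng(2)
      background = rng.normal(loc=0.0, scale=1.0, size=100_000)
      for p_fa in (1e-1, 1e-2, 1e-3):
          t = cfar_threshold(background, p_fa)
          print(f"P_FA={p_fa:.0e}: threshold r0={t:.2f}, "
                f"empirical rate={np.mean(background > t):.4f}")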

  15. Estimation of the probability of success in petroleum exploration

    USGS Publications Warehouse

    Davis, J.C.

    1977-01-01

    A probabilistic model for oil exploration can be developed by assessing the conditional relationship between perceived geologic variables and the subsequent discovery of petroleum. Such a model includes two probabilistic components, the first reflecting the association between a geologic condition (structural closure, for example) and the occurrence of oil, and the second reflecting the uncertainty associated with the estimation of geologic variables in areas of limited control. Estimates of the conditional relationship between geologic variables and subsequent production can be found by analyzing the exploration history of a "training area" judged to be geologically similar to the exploration area. The geologic variables are assessed over the training area using an historical subset of the available data, whose density corresponds to the present control density in the exploration area. The success or failure of wells drilled in the training area subsequent to the time corresponding to the historical subset provides empirical estimates of the probability of success conditional upon geology. Uncertainty in perception of geological conditions may be estimated from the distribution of errors made in geologic assessment using the historical subset of control wells. These errors may be expressed as a linear function of distance from available control. Alternatively, the uncertainty may be found by calculating the semivariogram of the geologic variables used in the analysis: the two procedures will yield approximately equivalent results. The empirical probability functions may then be transferred to the exploration area and used to estimate the likelihood of success of specific exploration plays. These estimates will reflect both the conditional relationship between the geological variables used to guide exploration and the uncertainty resulting from lack of control. The technique is illustrated with case histories from the mid-Continent area of the U.S.A. © 1977 Plenum

  16. Estimating transition probabilities among everglades wetland communities using multistate models

    USGS Publications Warehouse

    Hotaling, A.S.; Martin, J.; Kitchens, W.M.

    2009-01-01

    In this study we were able to provide the first estimates of transition probabilities of wet prairie and slough vegetative communities in Water Conservation Area 3A (WCA3A) of the Florida Everglades and to identify the hydrologic variables that determine these transitions. These estimates can be used in management models aimed at restoring proportions of wet prairie and slough habitats to historical levels in the Everglades. To determine what was driving the transitions between wet prairie and slough communities we evaluated three hypotheses: seasonality, impoundment, and wet and dry year cycles, using likelihood-based multistate models to determine the main driver of wet prairie conversion in WCA3A. The most parsimonious model included the effect of wet and dry year cycles on vegetative community conversions. Several ecologists have noted wet prairie conversion in southern WCA3A, but these are the first estimates of transition probabilities among these community types. In addition to being useful for management of the Everglades, we believe that our framework can be used to address management questions in other ecosystems. © 2009 The Society of Wetland Scientists.

  17. Image-based camera motion estimation using prior probabilities

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC

  18. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  19. Failure probability estimate of Type 304 stainless steel piping

    SciTech Connect

    Daugherty, W L; Awadalla, N G; Sindelar, R L; Mehta, H S; Ranganath, S (General Electric Co., San Jose, CA)

    1989-01-01

    The primary source of in-service degradation of the SRS production reactor process water piping is intergranular stress corrosion cracking (IGSCC). IGSCC has occurred in a limited number of weld heat affected zones, areas known to be susceptible to IGSCC. A model has been developed to combine crack growth rates, crack size distributions, in-service examination reliability estimates and other considerations to estimate the pipe large-break frequency. This frequency estimates the probability that an IGSCC crack will initiate, escape detection by ultrasonic (UT) examination, and grow to instability prior to extending through-wall and being detected by the sensitive leak detection system. These four factors estimate the occurrence of several conditions that must coexist in order for a crack to lead to a large break of the process piping. When evaluated for the SRS production reactors, they produce an extremely low break frequency. The objective of this paper is to present the assumptions, methodology, results and conclusion of a probabilistic evaluation for the direct failure of the primary coolant piping resulting from normal operation and seismic loads. This evaluation was performed to support the ongoing PRA effort and to complement deterministic analyses addressing the credibility of a double-ended guillotine break. 5 refs., 2 figs.

  20. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    NASA Astrophysics Data System (ADS)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and generally for their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  1. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    PubMed

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to the global system for mobile communications (GSM) and to the universal mobile telecommunications system with high-speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days. PMID:23771956
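
    For scale, the 5.7 dB overestimation quoted above corresponds to a factor of roughly 1.9 in electric-field strength (roughly 3.7 in power density), using the standard dB conventions (this conversion is ours, not a figure from the paper):

      # 5.7 dB expressed as ratios: field quantities use 20*log10, power uses 10*log10.
      db = 5.7
      print("field-strength factor:", round(10 ** (db / 20), 2))   # ~1.93
      print("power-density factor: ", round(10 ** (db / 10), 2))   # ~3.72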

  2. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. This

  3. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
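
    A minimal sketch of how a sampling method estimates a failure probability for a response density (the limit-state function and distributions are illustrative; this is plain Monte Carlo, not the NESSUS implementation, and LHS would differ only in how the samples are drawn):

      import numpy as np

      def mc_failure_probability(limit_state, sample, n=100_000, seed=0):
          """Plain Monte Carlo estimate of P(limit_state(X) < 0)."""
          rng = np.random.default_rng(seed)
          x = sample(rng, n)
          return np.mean(limit_state(x) < 0.0)

      # Illustrative limit state g = R - S with resistance R ~ N(10, 1) and
      # load S ~ N(7, 1); failure occurs when the load exceeds the resistance.
      def sample(rng, n):
          r = rng.normal(10.0, 1.0, n)
          s = rng.normal(7.0, 1.0, n)
          return np.column_stack([r, s])

      g = lambda x: x[:, 0] - x[:, 1]
      print("estimated P_f:", mc_failure_probability(g, sample))
      # Exact value for comparison: P(N(3, sqrt(2)) < 0) ~ 0.0169.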

  4. Estimating the probability for major gene Alzheimer disease

    SciTech Connect

    Farrer, L.A. (Boston Univ. School of Public Health, Boston, MA); Cupples, L.A.

    1994-02-01

    Alzheimer disease (AD) is a neuropsychiatric illness caused by multiple etiologies. Prediction of whether AD is genetically based in a given family is problematic because of censoring bias among unaffected relatives as a consequence of the late onset of the disorder, diagnostic uncertainties, heterogeneity, and limited information in a single family. The authors have developed a method based on Bayesian probability to compute values for a continuous variable that ranks AD families as having a major gene form of AD (MGAD). In addition, they have compared the Bayesian method with a maximum-likelihood approach. These methods incorporate sex- and age-adjusted risk estimates and allow for phenocopies and familial clustering of age of onset. Agreement is high between the two approaches for ranking families as MGAD (Spearman rank [r] = .92). When either method is used, the numerical outcomes are sensitive to assumptions of the gene frequency and cumulative incidence of the disease in the population. Consequently, risk estimates should be used cautiously for counseling purposes; however, there are numerous valid applications of these procedures in genetic and epidemiological studies. 41 refs., 4 figs., 3 tabs.

  5. Semi-supervised dimensionality reduction using estimated class membership probabilities

    NASA Astrophysics Data System (ADS)

    Li, Wei; Ruan, Qiuqi; Wan, Jun

    2012-10-01

    In solving pattern-recognition tasks with partially labeled training data, the semi-supervised dimensionality reduction method, which considers both labeled and unlabeled data, is preferable for improving the classification and generalization capability of the testing data. Among such techniques, graph-based semi-supervised learning methods have attracted a lot of attention due to their appealing properties in discovering discriminative structure and geometric structure of data points. Although they have achieved remarkable success, they cannot promise good performance when the size of the labeled data set is small, as a result of inaccurate class matrix variance approximated by insufficient labeled training data. In this paper, we tackle this problem by combining class membership probabilities estimated from unlabeled data and ground-truth class information associated with labeled data to more precisely characterize the class distribution. Therefore, it is expected to enhance performance in classification tasks. We refer to this approach as probabilistic semi-supervised discriminant analysis (PSDA). The proposed PSDA is applied to face and facial expression recognition tasks and is evaluated using the ORL, Extended Yale B, and CMU PIE face databases and the Cohn-Kanade facial expression database. The promising experimental results demonstrate the effectiveness of our proposed method.

  6. Structural health monitoring and probability of detection estimation

    NASA Astrophysics Data System (ADS)

    Forsyth, David S.

    2016-02-01

    Structural health monitoring (SHM) methods are often based on nondestructive testing (NDT) sensors and are often proposed as replacements for NDT to lower cost and/or improve reliability. In order to take advantage of SHM for life cycle management, it is necessary to determine the Probability of Detection (POD) of the SHM system just as for traditional NDT to ensure that the required level of safety is maintained. Many different possibilities exist for SHM systems, but one of the attractive features of SHM versus NDT is the ability to take measurements very simply after the SHM system is installed. Using a simple statistical model of POD, some authors have proposed that very high rates of SHM system data sampling can result in high effective POD even in situations where an individual test has low POD. In this paper, we discuss the theoretical basis for determining the effect of repeated inspections, and examine data from SHM experiments against this framework to show how the effective POD from multiple tests can be estimated.
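
    The "simple statistical model of POD" mentioned above usually amounts to combining repeated, assumed-independent inspections. A minimal sketch of that idea, with a hypothetical per-test POD of 0.30, is shown below; the independence assumption is exactly the part that the paper's examination of real SHM data puts to the test.

```python
import numpy as np

def effective_pod(p_single, k):
    """Effective POD after k repeated, statistically independent inspections,
    each with per-test probability of detection p_single."""
    return 1.0 - (1.0 - p_single) ** k

# Example: a low per-test POD of 0.30 applied repeatedly (hypothetical values).
for k in (1, 5, 10, 20):
    print(k, round(effective_pod(0.30, k), 4))
```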

  7. Estimation of capture probabilities using generalized estimating equations and mixed effects approaches

    PubMed Central

    Akanda, Md Abdus Salam; Alpizar-Jara, Russell

    2014-01-01

    Modeling individual heterogeneity in capture probabilities has been one of the most challenging tasks in capture–recapture studies. Heterogeneity in capture probabilities can be modeled as a function of individual covariates, but the correlation structure among capture occasions should be taken into account. Proposed generalized estimating equation (GEE) and generalized linear mixed modeling (GLMM) approaches can be used to estimate capture probabilities and population size for capture–recapture closed population models. An example is used for an illustrative application and for comparison with currently used methodology. A simulation study is also conducted to show the performance of the estimation procedures. Our simulation results show that the proposed quasi-likelihood based GEE approach provides lower SE than partial likelihood based on either generalized linear models (GLM) or GLMM approaches for estimating population size in a closed capture–recapture experiment. Estimator performance is good if a large proportion of individuals are captured. For cases where only a small proportion of individuals are captured, the estimates become unstable, but the GEE approach outperforms the other methods. PMID:24772290

  8. Conditional Probabilities for Large Events Estimated by Small Earthquake Rate

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Li, Hsien-Chi

    2016-01-01

    We examined forecasting quiescence and activation models to obtain the conditional probability that a large earthquake will occur in a specific time period on different scales in Taiwan. The basic idea of the quiescence and activation models is to use earthquakes that have magnitudes larger than the completeness magnitude to compute the expected properties of large earthquakes. We calculated the probability time series for the whole Taiwan region and for three subareas of Taiwan—the western, eastern, and northeastern Taiwan regions—using 40 years of data from the Central Weather Bureau catalog. In the probability time series for the eastern and northeastern Taiwan regions, a high probability value is usually yielded in cluster events such as events with foreshocks and events that all occur in a short time period. In addition to the time series, we produced probability maps by calculating the conditional probability for every grid point at the time just before a large earthquake. The probability maps show that high probability values are yielded around the epicenter before a large earthquake. The receiver operating characteristic (ROC) curves of the probability maps demonstrate that the probability maps are not random forecasts, but also suggest that lowering the magnitude of a forecasted large earthquake may not improve the forecast method itself. From both the probability time series and probability maps, it can be observed that the probability obtained from the quiescence model increases before a large earthquake and the probability obtained from the activation model increases as the large earthquakes occur. The results lead us to conclude that the quiescence model has better forecast potential than the activation model.

  9. On estimating the fracture probability of nuclear graphite components

    NASA Astrophysics Data System (ADS)

    Srinivasan, Makuteswara

    2008-10-01

    The properties of nuclear grade graphites exhibit anisotropy and could vary considerably within a manufactured block. Graphite strength is affected by the direction of alignment of the constituent coke particles, which is dictated by the forming method, coke particle size, and the size, shape, and orientation distribution of pores in the structure. In this paper, a Weibull failure probability analysis for components is presented using the American Society for Testing and Materials strength specification for nuclear grade graphites for core components in advanced high-temperature gas-cooled reactors. The risk of rupture (probability of fracture) and survival probability (reliability) of large graphite blocks are calculated for varying and discrete values of service tensile stresses. The limitations in these calculations are discussed from considerations of actual reactor environmental conditions that could potentially degrade the specification properties because of damage due to complex interactions between irradiation, temperature, stress, and variability in reactor operation.
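
    As a rough illustration of the risk-of-rupture calculation described above, the sketch below evaluates a two-parameter Weibull failure probability over a range of service tensile stresses. The characteristic strength and Weibull modulus used here are placeholder values for illustration, not ASTM specification values for any graphite grade.

```python
import numpy as np

def weibull_failure_probability(stress, sigma_0, m):
    """Two-parameter Weibull risk of rupture at a given service tensile stress.
    sigma_0 is the characteristic strength, m the Weibull modulus."""
    return 1.0 - np.exp(-(stress / sigma_0) ** m)

# Placeholder parameters for illustration only (not specification values).
sigma_0, m = 20.0, 10.0   # MPa, dimensionless
for s in (5.0, 10.0, 15.0, 18.0):
    pf = weibull_failure_probability(s, sigma_0, m)
    print(f"stress={s:5.1f} MPa  P_fracture={pf:.3e}  reliability={1 - pf:.6f}")
```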

  10. Analytical solution to transient Richards' equation with realistic water profiles for vertical infiltration and parameter estimation

    NASA Astrophysics Data System (ADS)

    Hayek, Mohamed

    2016-06-01

    A general analytical model for one-dimensional transient vertical infiltration is presented. The model is based on a combination of the Brooks and Corey soil water retention function and a generalized hydraulic conductivity function. This leads to a power-law diffusivity and convective term for which the exponents are functions of the inverse of the pore size distribution index. Accordingly, the proposed analytical solution covers many existing realistic models in the literature. The general form of the analytical solution is simple and it expresses implicitly the depth as a function of water content and time. It can be used to model infiltration through semi-infinite dry soils with prescribed water content or flux boundary conditions. Some mathematical expressions of practical importance are also derived. The general form solution is useful for comparison between models, validation of numerical solutions, and for better understanding the effect of some hydraulic parameters. Based on the analytical expression, a complete inverse procedure which allows the estimation of the hydraulic parameters from water content measurements is presented.

  11. Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation

    NASA Astrophysics Data System (ADS)

    Demir, Uygar; Toker, Cenk; Çenet, Duygu

    2016-07-01

    Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. With such techniques, it is highly likely that the estimated pdf will exhibit artificial characteristics that are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit to the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing the tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values. The technique is applied to the ionosphere over Turkey where the TEC values are estimated from the GNSS measurement from the TNPGN-Active (Turkish National Permanent
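
    A minimal sketch of the KDE step described above is given below, using scipy's gaussian_kde on synthetic TEC-like values (the TNPGN-Active data themselves are not reproduced here) and reporting the mean, variance and kurtosis alongside a check that the estimated pdf integrates to approximately one.

```python
import numpy as np
from scipy.stats import gaussian_kde, kurtosis

# Synthetic stand-in for TEC estimates (TECU); real values would come from GNSS processing.
rng = np.random.default_rng(1)
tec = np.concatenate([rng.normal(25, 4, 900), rng.normal(45, 6, 100)])  # heavy right tail

kde = gaussian_kde(tec)                 # non-parametric pdf estimate
grid = np.linspace(tec.min(), tec.max(), 200)
pdf = kde(grid)

print("mean     =", np.mean(tec))
print("variance =", np.var(tec, ddof=1))
print("kurtosis =", kurtosis(tec))      # excess kurtosis
print("pdf integrates to ~1:", pdf.sum() * (grid[1] - grid[0]))
```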

  12. Estimating background and threshold nitrate concentrations using probability graphs

    USGS Publications Warehouse

    Panno, S.V.; Kelly, W.R.; Martinsek, A.T.; Hackley, Keith C.

    2006-01-01

    Because of the ubiquitous nature of anthropogenic nitrate (NO3-) in many parts of the world, determining background concentrations of NO3- in shallow ground water from natural sources is probably impossible in most environments. Present-day background must now include diffuse sources of NO3- such as disruption of soils and oxidation of organic matter, and atmospheric inputs from products of combustion and evaporation of ammonia from fertilizer and livestock waste. Anomalies can be defined as NO3- derived from nitrogen (N) inputs to the environment from anthropogenic activities, including synthetic fertilizers, livestock waste, and septic effluent. Cumulative probability graphs were used to identify threshold concentrations separating background and anomalous NO3-N concentrations and to assist in the determination of sources of N contamination for 232 spring water samples and 200 well water samples from karst aquifers. Thresholds were 0.4, 2.5, and 6.7 mg/L for spring water samples, and 0.1, 2.1, and 17 mg/L for well water samples. The 0.4 and 0.1 mg/L values are assumed to represent thresholds for present-day precipitation. Thresholds at 2.5 and 2.1 mg/L are interpreted to represent present-day background concentrations of NO3-N. The population of spring water samples with concentrations between 2.5 and 6.7 mg/L represents an amalgam of all sources of NO3- in the ground water basins that feed each spring; concentrations >6.7 mg/L were typically samples collected soon after springtime application of synthetic fertilizer. The 17 mg/L threshold (adjusted to 15 mg/L) for well water samples is interpreted as the level above which livestock wastes dominate the N sources. Copyright © 2006 The Author(s).
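
    The cumulative probability graph underlying this threshold analysis can be sketched as follows, assuming a small hypothetical set of NO3-N concentrations rather than the study's spring and well data sets: concentrations are ranked, assigned plotting-position probabilities, and inspected for slope breaks on a normal-quantile (probability) scale.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical NO3-N concentrations (mg/L), for illustration only.
conc = np.array([0.05, 0.1, 0.3, 0.5, 0.9, 1.4, 2.0, 2.6, 3.4, 4.8, 6.5, 7.2, 9.8, 14.0])
conc.sort()

# Empirical cumulative probabilities (Weibull plotting position i/(n+1)).
p = np.arange(1, conc.size + 1) / (conc.size + 1)

# On a probability graph one plots log-concentration against the normal quantile of p;
# straight-line segments suggest distinct populations, and slope breaks mark thresholds.
for c, pi in zip(conc, p):
    print(f"{c:6.2f} mg/L   cum. prob. {pi:.2f}   normal quantile {norm.ppf(pi):+.2f}")
```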

  13. Development of an integrated system for estimating human error probabilities

    SciTech Connect

    Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

  14. EMPIRICAL GENERAL POPULATION ASSESSMENT OF THE HORVITZ-THOMPSON ESTIMATOR UNDER VARIABLE PROBABILITY SAMPLING

    EPA Science Inventory

    The variance and two estimators of variance of the Horvitz-Thompson estimator were studied under randomized, variable probability systematic sampling. Three bivariate distributions, representing the populations, were investigated empirically, with each distribution studied for thr...
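
    For readers unfamiliar with the estimator named in the title, the sketch below shows the basic Horvitz-Thompson total estimate under unequal (variable) inclusion probabilities for a toy population; the Poisson-type sampling scheme and the auxiliary size variable are assumptions of the example, not the EPA study design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy finite population: y roughly proportional to an auxiliary size measure x.
N = 1000
x = rng.gamma(2.0, 2.0, N)
y = 3.0 * x + rng.normal(0, 1, N)
true_total = y.sum()

# Variable-probability sample: inclusion probability proportional to x (Poisson sampling).
n_target = 100
pi = np.minimum(n_target * x / x.sum(), 1.0)
sampled = rng.random(N) < pi

# Horvitz-Thompson estimator of the population total: sum of y_i / pi_i over the sample.
ht_total = np.sum(y[sampled] / pi[sampled])
print("true total:", round(true_total, 1), " HT estimate:", round(ht_total, 1))
```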

  15. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications. PMID:19885963
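
    A minimal sketch of the L1-median referred to above, computed with the classical Weiszfeld iteration and contrasted with the sample mean on data containing gross outliers, is shown below. It illustrates only the robust location step, not the full nonparametric PDF estimator of the paper.

```python
import numpy as np

def l1_median(X, n_iter=100, tol=1e-8):
    """Geometric (L1) median via the Weiszfeld iteration."""
    m = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.where(d < tol, tol, d)          # avoid division by zero at data points
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(3)
X = rng.normal(0, 1, (200, 2))
X[:10] += 50.0                                  # gross outliers
print("sample mean:", X.mean(axis=0))
print("L1-median  :", l1_median(X))
```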

  16. Student Estimates of Probability and Uncertainty in Statistical Physics

    NASA Astrophysics Data System (ADS)

    Mountcastle, Donald B.; Bucy, B. R.; Thompson, J. R.

    2006-12-01

    Equilibrium properties of macroscopic (large N) systems are highly predictable as N approaches and exceeds Avogadro’s number. Theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity [S = k ln(w), where w is the system multiplicity] include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our students usually give reasonable answers about the probabilities, but not the uncertainties of the predicted outcomes of such events. However, they reliably predict that the uncertainty in a measured quantity (e.g., the amount of rainfall) decreases as the number of measurements increases. Typical textbook presentations presume that students will either have or develop the insight that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. That is at odds with our findings among students in two successive statistical mechanics classes. Many of our students had previously completed mathematics courses in statistics, as well as a physics laboratory course that included analysis of statistical properties of distributions of dart scores as the number (n) of throws (one-dimensional target) increased. There was a wide divergence of predictions about how the standard deviation of the distribution of dart scores should change, or not, as n increases. We find that student predictions about statistics of coin flips, dart scores, and rainfall amounts as functions of n are inconsistent at best. Supported in part by NSF Grant #PHY-0406764.

  17. Estimation of the size of a closed population when capture probabilities vary among animals

    USGS Publications Warehouse

    Burnham, K.P.; Overton, W.S.

    1978-01-01

    A model which allows capture probabilities to vary by individuals is introduced for multiple recapture studies on closed populations. The set of individual capture probabilities is modelled as a random sample from an arbitrary probability distribution over the unit interval. We show that the capture frequencies are a sufficient statistic. A nonparametric estimator of population size is developed based on the generalized jackknife; this estimator is found to be a linear combination of the capture frequencies. Finally, tests of underlying assumptions are presented.
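
    The generalized jackknife estimator developed in the paper reduces, in its simplest case, to the familiar first-order jackknife based on capture frequencies. The sketch below computes that first-order special case from hypothetical frequencies; it is not the paper's full estimator-selection procedure.

```python
def jackknife1_population_size(capture_freqs, t):
    """First-order jackknife estimate of population size for a closed population.
    capture_freqs[j] = f_{j+1}, the number of animals captured exactly j+1 times
    over t capture occasions."""
    S = sum(capture_freqs)                 # distinct animals caught at least once
    f1 = capture_freqs[0]                  # animals caught exactly once
    return S + f1 * (t - 1) / t

# Hypothetical capture frequencies over t = 5 occasions: 43 caught once, 16 twice, ...
freqs = [43, 16, 8, 6, 2]
print(round(jackknife1_population_size(freqs, t=5), 1))
```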

  18. Estimating the Probability of Earthquake-Induced Landslides

    NASA Astrophysics Data System (ADS)

    McRae, M. E.; Christman, M. C.; Soller, D. R.; Sutter, J. F.

    2001-12-01

    The development of a regionally applicable, predictive model for earthquake-triggered landslides is needed to improve mitigation decisions at the community level. The distribution of landslides triggered by the 1994 Northridge earthquake in the Oat Mountain and Simi Valley quadrangles of southern California provided an inventory of failures against which to evaluate the significance of a variety of physical variables in probabilistic models of static slope stability. Through a cooperative project, the California Division of Mines and Geology provided 10-meter resolution data on elevation, slope angle, coincidence of bedding plane and topographic slope, distribution of pre-Northridge landslides, internal friction angle and cohesive strength of individual geologic units. Hydrologic factors were not evaluated since failures in the study area were dominated by shallow, disrupted landslides in dry materials. Previous studies indicate that 10-meter digital elevation data is required to properly characterize the short, steep slopes on which many earthquake-induced landslides occur. However, to explore the robustness of the model at different spatial resolutions, models were developed at the 10, 50, and 100-meter resolution using classification and regression tree (CART) analysis and logistic regression techniques. Multiple resampling algorithms were tested for each variable in order to observe how resampling affects the statistical properties of each grid, and how relationships between variables within the model change with increasing resolution. Various transformations of the independent variables were used to see which had the strongest relationship with the probability of failure. These transformations were based on deterministic relationships in the factor of safety equation. Preliminary results were similar for all spatial scales. Topographic variables dominate the predictive capability of the models. The distribution of prior landslides and the coincidence of slope

  19. A simulation model for estimating probabilities of defects in welds

    SciTech Connect

    Chapman, O.J.V.; Khaleel, M.A.; Simonen, F.A.

    1996-12-01

    In recent work for the US Nuclear Regulatory Commission in collaboration with Battelle Pacific Northwest National Laboratory, Rolls-Royce and Associates, Ltd., has adapted an existing model for piping welds to address welds in reactor pressure vessels. This paper describes the flaw estimation methodology as it applies to flaws in reactor pressure vessel welds (but not flaws in base metal or flaws associated with the cladding process). Details of the associated computer software (RR-PRODIGAL) are provided. The approach uses expert elicitation and mathematical modeling to simulate the steps in manufacturing a weld and the errors that lead to different types of weld defects. The defects that may initiate in weld beads include center cracks, lack of fusion, slag, pores with tails, and cracks in heat affected zones. Various welding processes are addressed including submerged metal arc welding. The model simulates the effects of both radiographic and dye penetrant surface inspections. Output from the simulation gives occurrence frequencies for defects as a function of both flaw size and flaw location (surface connected and buried flaws). Numerical results are presented to show the effects of submerged metal arc versus manual metal arc weld processes.

  20. Probability Estimation of CO2 Leakage Through Faults at Geologic Carbon Sequestration Sites

    SciTech Connect

    Zhang, Yingqi; Oldenburg, Curt; Finsterle, Stefan; Jordan, Preston; Zhang, Keni

    2008-11-01

    Leakage of CO2 and brine along faults at geologic carbon sequestration (GCS) sites is a primary concern for storage integrity. The focus of this study is on the estimation of the probability of leakage along faults or fractures. This leakage probability is controlled by the probability of a connected network of conduits existing at a given site, the probability of this network encountering the CO2 plume, and the probability of this network intersecting environmental resources that may be impacted by leakage. This work is designed to fit into a risk assessment and certification framework that uses compartments to represent vulnerable resources such as potable groundwater, health and safety, and the near-surface environment. The method we propose includes using percolation theory to estimate the connectivity of the faults, and generating fuzzy rules from discrete fracture network simulations to estimate leakage probability. By this approach, the probability of CO2 escaping into a compartment for a given system can be inferred from the fuzzy rules. The proposed method provides a quick way of estimating the probability of CO2 or brine leaking into a compartment. In addition, it provides the uncertainty range of the estimated probability.

  1. Estimating site occupancy and species detection probability parameters for terrestrial salamanders

    USGS Publications Warehouse

    Bailey, L.L.; Simons, T.R.; Pollock, K.H.

    2004-01-01

    Recent, worldwide amphibian declines have highlighted a need for more extensive and rigorous monitoring programs to document species occurrence and detect population change. Abundance estimation methods, such as mark-recapture, are often expensive and impractical for large-scale or long-term amphibian monitoring. We apply a new method to estimate proportion of area occupied using detection/nondetection data from a terrestrial salamander system in Great Smoky Mountains National Park. Estimated species-specific detection probabilities were all <1 and varied among seven species and four sampling methods. Time (i.e., sampling occasion) and four large-scale habitat characteristics (previous disturbance history, vegetation type, elevation, and stream presence) were important covariates in estimates of both proportion of area occupied and detection probability. All sampling methods were consistent in their ability to identify important covariates for each salamander species. We believe proportion of area occupied represents a useful state variable for large-scale monitoring programs. However, our results emphasize the importance of estimating detection and occupancy probabilities rather than using an unadjusted proportion of sites where species are observed, in which actual occupancy probabilities are confounded with detection probabilities. Estimated detection probabilities accommodate variations in sampling effort; thus comparisons of occupancy probabilities are possible among studies with different sampling protocols.

  2. Children's Ability to Make Probability Estimates: Skills Revealed through Application of Anderson's Functional Measurement Methodology.

    ERIC Educational Resources Information Center

    Acredolo, Curt; And Others

    1989-01-01

    Two studies assessed 90 elementary school students' attention to the total number of alternative and target outcomes when making probability estimates. All age groups attended to variations in the denominator and numerator and the interaction between these variables. (RJC)

  3. Coupling of realistic rate estimates with genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2003-06-01

    Dissolved dense nonaqueous-phase liquid plumes are persistent, widespread problems in the DOE complex. While perceived as being difficult to degrade, at the Idaho National Engineering and Environmental Laboratory, dissolved trichloroethylene (TCE) is disappearing from the Snake River Plain aquifer (SRPA) by natural attenuation, a finding that saves significant site restoration costs. Acceptance of monitored natural attenuation as a preferred treatment technology requires direct proof of the process and rate of the degradation. Our proposal aims to provide that proof for one such site by testing two hypotheses. First, we believe that realistic values for in situ rates of TCE cometabolism can be obtained by sustaining the putative microorganisms at the low catabolic activities consistent with aquifer conditions. Second, the patterns of functional gene expression evident in these communities under starvation conditions while carrying out TCE cometabolism can be used to diagnose the cometabolic activity in the aquifer itself. Using the cometabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained at this location and validate the long term stewardship of this plume. Realistic terms for cometabolism of TCE will provide marked improvements in DOE's ability to predict and monitor natural attenuation of chlorinated organics at other sites, increase the acceptability of this solution, and provide significant economic and health benefits through this noninvasive remediation strategy. Finally, this project will derive valuable genomic information about the functional attributes of subsurface microbial communities upon which DOE must depend to resolve some of its most difficult contamination issues.

  4. Estimating functions of probability distributions from a finite set of samples

    SciTech Connect

    Wolpert, D.H.; Wolf, D.R.

    1995-12-01

    This paper addresses the problem of estimating a function of a probability distribution from a finite set of samples of that distribution. A Bayesian analysis of this problem is presented, the optimal properties of the Bayes estimators are discussed, and as an example of the formalism, closed form expressions for the Bayes estimators for the moments of the Shannon entropy function are derived. Then numerical results are presented that compare the Bayes estimator to the frequency-counts estimator for the Shannon entropy. We also present the closed form estimators, all derived elsewhere, for the mutual information, χ² covariance, and some other statistics. (c) 1995 The American Physical Society

  5. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
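
    A simplified numerical illustration of the double-observer logic is given below. It assumes the two observers detect birds independently (Lincoln-Petersen reasoning) rather than the dependent primary/secondary protocol analyzed in the paper, and the counts are hypothetical.

```python
def independent_double_observer(n1, n2, n_both):
    """Detection probabilities and abundance under the simplifying assumption that
    two observers detect birds independently (Lincoln-Petersen logic). This is a
    simplified stand-in for the dependent primary/secondary protocol of the paper."""
    p1 = n_both / n2          # P(observer 1 detects a bird), estimated from observer 2's birds
    p2 = n_both / n1
    p_combined = 1.0 - (1.0 - p1) * (1.0 - p2)
    n_hat = n1 * n2 / n_both  # Lincoln-Petersen abundance estimate
    return p1, p2, p_combined, n_hat

# Hypothetical counts: observer 1 detected 80 birds, observer 2 detected 75, 65 by both.
p1, p2, pc, n_hat = independent_double_observer(80, 75, 65)
print(f"p1={p1:.2f} p2={p2:.2f} combined={pc:.2f} N_hat={n_hat:.1f}")
```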

  6. Generalizations and Extensions of the Probability of Superiority Effect Size Estimator

    ERIC Educational Resources Information Center

    Ruscio, John; Gera, Benjamin Lee

    2013-01-01

    Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…

  7. Improving quality of sample entropy estimation for continuous distribution probability functions

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2016-05-01

    Entropy is a one of the key parameters characterizing state of system in statistical physics. Although, the entropy is defined for systems described by discrete and continuous probability distribution function (PDF), in numerous applications the sample entropy is estimated by a histogram, which, in fact, denotes that the continuous PDF is represented by a set of probabilities. Such a procedure may lead to ambiguities and even misinterpretation of the results. Within this paper, two possible general algorithms based on continuous PDF estimation are discussed in the application to the Shannon and Tsallis entropies. It is shown that the proposed algorithms may improve entropy estimation, particularly in the case of small data sets.

  8. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of the bird abundance, density, and population trends.
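
    A minimal sketch of a removal-type estimator is shown below, assuming a constant per-minute detection (singing) rate and hypothetical first-detection counts in the 0-2, 2-5 and 5-10 min intervals; the paper's species, time-of-day and observer effects are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical first-detection counts in the 0-2, 2-5 and 5-10 minute sub-intervals.
counts = np.array([60, 25, 15])
bounds = np.array([0.0, 2.0, 5.0, 10.0])   # interval boundaries in minutes

def neg_log_lik(c):
    # Constant per-minute detection (singing) rate c; exponential waiting time to
    # first detection. Cell probabilities are conditioned on detection by 10 min.
    q = np.exp(-c * bounds[:-1]) - np.exp(-c * bounds[1:])
    p_total = 1.0 - np.exp(-c * bounds[-1])
    return -np.sum(counts * np.log(q / p_total))

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded")
c_hat = res.x
p_detect = 1.0 - np.exp(-c_hat * bounds[-1])
print(f"estimated rate = {c_hat:.3f}/min, detection probability over 10 min = {p_detect:.2f}")
```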

  9. Estimating transition probabilities for stage-based population projection matrices using capture-recapture data

    USGS Publications Warehouse

    Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.

    1992-01-01

    In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).

  10. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. The
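
    The arithmetic of combining the three probabilities described above is simple enough to sketch directly; the eruption and wind probabilities below are the ones quoted in the abstract, while the exceedance probability for a 10 cm accumulation is a hypothetical placeholder chosen only to show how a roughly 1-in-10,000 annual probability arises.

```python
# The three factors described in the abstract, combined for one candidate thickness.
p_eruption_annual = 0.008     # annual probability of a >=0.1 km^3 eruption (Mount St. Helens)
p_wind_toward_site = 0.180    # probability the wind blows from the volcano toward Hanford

# Hypothetical exceedance probability for a 10 cm accumulation at 200 km, chosen only
# to illustrate the arithmetic; the report derives this from measured deposits.
p_exceed_10cm_at_200km = 0.07

annual_prob = p_eruption_annual * p_wind_toward_site * p_exceed_10cm_at_200km
print(f"annual probability of >=10 cm of tephra: {annual_prob:.2e}")   # ~1e-4, i.e. 1 in 10,000
```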

  11. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2002-01-01

    Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (~90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.

  12. Estimating probabilities of reservoir storage for the upper Delaware River basin

    USGS Publications Warehouse

    Hirsch, Robert M.

    1981-01-01

    A technique for estimating conditional probabilities of reservoir system storage is described and applied to the upper Delaware River Basin. The results indicate that there is a 73 percent probability that the three major New York City reservoirs (Pepacton, Cannonsville, and Neversink) would be full by June 1, 1981, and only a 9 percent probability that storage would return to the ' drought warning ' sector of the operations curve sometime in the next year. In contrast, if restrictions are lifted and there is an immediate return to normal operating policies, the probability of the reservoir system being full by June 1 is 37 percent and the probability that storage would return to the ' drought warning ' sector in the next year is 30 percent. (USGS)

  13. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    Seismic moment tensor is one of the most important source parameters defining the earthquake dimension and style of the activated fault. Moment tensor catalogues are ordinarily used by geoscientists; however, few attempts have been made to assess possible impacts of moment magnitude uncertainties upon their own analysis. The 2012 May 20 Emilia mainshock is a representative event since it is defined in literature with a moment magnitude value (Mw) spanning between 5.63 and 6.12. An uncertainty of ~0.5 units in magnitude leads to a controversial knowledge of the real size of the event. The possible uncertainty associated with this estimate could be critical for the inference of other seismological parameters, suggesting caution for seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, the epicentral distance and the azimuth of used stations. We emphasize that the estimate of seismic moment from moment tensor solutions, as well as the estimate of the other kinematic source parameters, cannot be considered an absolute value and requires the related uncertainties to be reported within a reproducible framework characterized by disclosed assumptions and explicit processing workflows.

  14. Incorporating diverse data and realistic complexity into demographic estimation procedures for sea otters

    USGS Publications Warehouse

    Tinker, M.T.; Doak, D.F.; Estes, J.A.; Hatfield, B.B.; Staedler, M.M.; Bodkin, J.L.

    2006-01-01

    Reliable information on historical and current population dynamics is central to understanding patterns of growth and decline in animal populations. We developed a maximum likelihood-based analysis to estimate spatial and temporal trends in age/sex-specific survival rates for the threatened southern sea otter (Enhydra lutris nereis), using annual population censuses and the age structure of salvaged carcass collections. We evaluated a wide range of possible spatial and temporal effects and used model averaging to incorporate model uncertainty into the resulting estimates of key vital rates and their variances. We compared these results to current demographic parameters estimated in a telemetry-based study conducted between 2001 and 2004. These results show that survival has decreased substantially from the early 1990s to the present and is generally lowest in the north-central portion of the population's range. The greatest temporal decrease in survival was for adult females, and variation in the survival of this age/sex class is primarily responsible for regulating population growth and driving population trends. Our results can be used to focus future research on southern sea otters by highlighting the life history stages and mortality factors most relevant to conservation. More broadly, we have illustrated how the powerful and relatively straightforward tools of information-theoretic-based model fitting can be used to sort through and parameterize quite complex demographic modeling frameworks. ?? 2006 by the Ecological Society of America.

  15. Differential Survival in Europe and the United States: Estimates Based on Subjective Probabilities of Survival

    PubMed Central

    Delavande, Adeline; Rohwedder, Susann

    2013-01-01

    Cross-country comparisons of differential survival by socioeconomic status (SES) are useful in many domains. Yet, to date, such studies have been rare. Reliably estimating differential survival in a single country has been challenging because it requires rich panel data with a large sample size. Cross-country estimates have proven even more difficult because the measures of SES need to be comparable internationally. We present an alternative method for acquiring information on differential survival by SES. Rather than using observations of actual survival, we relate individuals’ subjective probabilities of survival to SES variables in cross section. To show that subjective survival probabilities are informative proxies for actual survival when estimating differential survival, we compare estimates of differential survival based on actual survival with estimates based on subjective probabilities of survival for the same sample. The results are remarkably similar. We then use this approach to compare differential survival by SES for 10 European countries and the United States. Wealthier people have higher survival probabilities than those who are less wealthy, but the strength of the association differs across countries. Nations with a smaller gradient appear to be Belgium, France, and Italy, while the United States, England, and Sweden appear to have a larger gradient. PMID:22042664

  16. Estimating the posterior probability that genome-wide association findings are true or false

    PubMed Central

    Bukszár, József; McClay, Joseph L.; van den Oord, Edwin J. C. G.

    2009-01-01

    Motivation: A limitation of current methods used to declare significance in genome-wide association studies (GWAS) is that they do not provide clear information about the probability that GWAS findings are true or false. This lack of information increases the chance of false discoveries and may result in real effects being missed. Results: We propose a method to estimate the posterior probability that a marker has (no) effect given its test statistic value, also called the local false discovery rate (FDR), in the GWAS. A critical step involves the estimation of the parameters of the distribution of the true alternative tests. For this, we derived and implemented the real maximum likelihood function, which turned out to provide us with significantly more accurate estimates than the widely used mixture model likelihood. Actual GWAS data are used to illustrate properties of the posterior probability estimates empirically. In addition to evaluating individual markers, a variety of applications are conceivable. For instance, posterior probability estimates can be used to control the FDR more precisely than the Benjamini–Hochberg procedure. Availability: The codes are freely downloadable from the web site http://www.people.vcu.edu/∼jbukszar. Contact: jbukszar@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19420056
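
    The quantity being estimated is the local FDR, i.e. the posterior probability of no effect given the test statistic. The sketch below shows the generic two-group form fdr(z) = pi0*f0(z)/f(z) on synthetic z-scores, with the marginal density estimated by a kernel method; it is not the authors' maximum likelihood estimation of the alternative distribution.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Synthetic GWAS-like z-scores: mostly null, a small fraction with real effects.
rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0, 1, 9500), rng.normal(3, 1, 500)])

pi0 = 0.95                      # assumed proportion of null markers (would be estimated)
f = gaussian_kde(z)             # estimate of the marginal density of all test statistics
f0 = norm(0, 1).pdf             # theoretical null density

def local_fdr(z_value):
    """Posterior probability of no effect given the test statistic (local FDR)."""
    return np.minimum(1.0, pi0 * f0(z_value) / f(z_value))

for zv in (1.0, 2.0, 3.0, 4.0):
    print(f"z = {zv:.1f}  local FDR ~ {float(local_fdr(zv)):.3f}")
```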

  17. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, set on the continental shelf of the Bay of Bengal, is one of the most vulnerable deltas to the occurrence of extreme rainfall-driven climatic hazards. Information on probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness of fit tests, viz. Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A2) and Chi-square test (X2), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness of fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer seasons MDR, while Lognormal, Weibull and Pearson 5 were best fitted for pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probability of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
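
    A rough sketch of the distribution-selection step is shown below, fitting several candidate distributions to a synthetic maximum-daily-rainfall series and ranking them by the Kolmogorov-Smirnov statistic only (the paper additionally scores Anderson-Darling and Chi-square results), then reading off return-period estimates from the selected distribution.

```python
import numpy as np
from scipy import stats

# Synthetic annual maximum daily rainfall series (mm); stands in for the 1982-2010 record.
rng = np.random.default_rng(5)
mdr = stats.lognorm.rvs(s=0.4, scale=60, size=29, random_state=rng)

candidates = ["norm", "lognorm", "weibull_min", "pearson3"]
results = []
for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(mdr)                       # maximum-likelihood parameter fit
    ks_stat, ks_p = stats.kstest(mdr, name, args=params)
    results.append((ks_stat, name, params))

results.sort()                                   # smaller K-S statistic = better fit
best_stat, best_name, best_params = results[0]
print("best fit:", best_name, " K-S statistic:", round(best_stat, 3))

# Return-period estimates: the MDR exceeded on average once every T years.
best = getattr(stats, best_name)
for T in (2, 5, 10, 20, 25):
    print(f"{T:3d}-yr MDR ~ {best.ppf(1 - 1 / T, *best_params):.0f} mm")
```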

  18. Estimating probability densities from short samples: A parametric maximum likelihood approach

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Floriani, E.

    1998-10-01

    A parametric method similar to autoregressive spectral estimators is proposed to determine the probability density function (PDF) of a random set. The method proceeds by maximizing the likelihood of the PDF, yielding estimates that perform equally well in the tails as in the bulk of the distribution. It is therefore well suited for the analysis of short sets drawn from smooth PDF's and stands out by the simplicity of its computational scheme. Its advantages and limitations are discussed.

  19. Preliminary Estimation of the Realistic Optimum Temperature for Vegetation Growth in China

    NASA Astrophysics Data System (ADS)

    Cui, Yaoping

    2013-07-01

    The estimation of optimum temperature of vegetation growth is very useful for a wide range of applications such as agriculture and climate change studies. Thermal conditions substantially affect vegetation growth. In this study, the normalized difference vegetation index (NDVI) and daily temperature data set from 1982 to 2006 for China were used to examine optimum temperature of vegetation growth. Based on a simple analysis of ecological amplitude and Shelford's law of tolerance, a scientific framework for calculating the optimum temperature was constructed. The optimum temperature range and referenced optimum temperature (ROT) of terrestrial vegetation were obtained and explored over different eco-geographical regions of China. The results showed that the relationship between NDVI and air temperature was significant over almost all of China, indicating that terrestrial vegetation growth was closely related to thermal conditions. ROTs were different in various regions. The lowest ROT, about 7.0 °C, occurred in the Qinghai-Tibet Plateau, while the highest ROT, more than 22.0 °C, occurred in the middle and lower reaches of the Yangtze River and the Southern China region.

  20. Hitchhikers on trade routes: A phenology model estimates the probabilities of gypsy moth introduction and establishment.

    PubMed

    Gray, David R

    2010-12-01

    As global trade increases so too does the probability of introduction of alien species to new locations. Estimating the probability of an alien species introduction and establishment following introduction is a necessary step in risk estimation (probability of an event times the consequences, in the currency of choice, of the event should it occur); risk estimation is a valuable tool for reducing the risk of biological invasion with limited resources. The Asian gypsy moth, Lymantria dispar (L.), is a pest species whose consequence of introduction and establishment in North America and New Zealand warrants over US$2 million per year in surveillance expenditure. This work describes the development of a two-dimensional phenology model (GLS-2d) that simulates insect development from source to destination and estimates: (1) the probability of introduction from the proportion of the source population that would achieve the next developmental stage at the destination and (2) the probability of establishment from the proportion of the introduced population that survives until a stable life cycle is reached at the destination. The effect of shipping schedule on the probabilities of introduction and establishment was examined by varying the departure date from 1 January to 25 December by weekly increments. The effect of port efficiency was examined by varying the length of time that invasion vectors (shipping containers and ship) were available for infection. The application of GLS-2d is demonstrated using three common marine trade routes (to Auckland, New Zealand, from Kobe, Japan, and to Vancouver, Canada, from Kobe and from Vladivostok, Russia). PMID:21265459

  1. Simple indicator kriging for estimating the probability of incorrectly delineating hazardous areas in a contaminated site

    SciTech Connect

    Juang, K.W.; Lee, D.Y.

    1998-09-01

    The probability of incorrectly delineating hazardous areas in a contaminated site is very important for decision-makers because it indicates the magnitude of confidence that decision-makers have in determining areas in need of remediation. In this study, simple indicator kriging (SIK) was used to estimate the probability of incorrectly delineating hazardous areas in a heavy metal-contaminated site, which is located at Taoyuan, Taiwan, and is about 10 ha in area. In the procedure, the values 0 and 1 were assigned to be the stationary means of the indicator codes in the SIK model to represent two hypotheses, hazardous and safe, respectively. The spatial distribution of the conditional probability of heavy metal concentrations lower than a threshold, given each hypothesis, was estimated using SIK. Then, the probabilities of false positives (α) (i.e., the probability of declaring a location hazardous when it is not) and false negatives (β) (i.e., the probability of declaring a location safe when it is not) in delineating hazardous areas for the heavy metal-contaminated site could be obtained. The spatial distribution of the probabilities of false positives and false negatives could help in delineating hazardous areas based on a tolerable probability level of incorrect delineation. In addition, delineation complicated by the cost of remediation, hazards in the environment, and hazards to human health could be made based on the minimum values of α and β. The results suggest that the proposed SIK procedure is useful for decision-makers who need to delineate hazardous areas in a heavy metal-contaminated site.

  2. Estimating site occupancy and detection probability parameters for meso- and large mammals in a coastal ecosystem

    USGS Publications Warehouse

    O'Connell, A.F., Jr.; Talancy, N.W.; Bailey, L.L.; Sauer, J.R.; Cook, R.; Gilbert, A.T.

    2006-01-01

    Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.

  3. Estimating the Probability of Elevated Nitrate Concentrations in Ground Water in Washington State

    USGS Publications Warehouse

    Frans, Lonna M.

    2008-01-01

    Logistic regression was used to relate anthropogenic (manmade) and natural variables to the occurrence of elevated nitrate concentrations in ground water in Washington State. Variables that were analyzed included well depth, ground-water recharge rate, precipitation, population density, fertilizer application amounts, soil characteristics, hydrogeomorphic regions, and land-use types. Two models were developed: one with and one without the hydrogeomorphic regions variable. The variables in both models that best explained the occurrence of elevated nitrate concentrations (defined as concentrations of nitrite plus nitrate as nitrogen greater than 2 milligrams per liter) were the percentage of agricultural land use in a 4-kilometer radius of a well, population density, precipitation, soil drainage class, and well depth. Based on the relations between these variables and measured nitrate concentrations, logistic regression models were developed to estimate the probability of nitrate concentrations in ground water exceeding 2 milligrams per liter. Maps of Washington State were produced that illustrate these estimated probabilities for wells drilled to 145 feet below land surface (median well depth) and the estimated depth to which wells would need to be drilled to have a 90-percent probability of drawing water with a nitrate concentration less than 2 milligrams per liter. Maps showing the estimated probability of elevated nitrate concentrations indicated that the agricultural regions are most at risk followed by urban areas. The estimated depths to which wells would need to be drilled to have a 90-percent probability of obtaining water with nitrate concentrations less than 2 milligrams per liter exceeded 1,000 feet in the agricultural regions; whereas, wells in urban areas generally would need to be drilled to depths in excess of 400 feet.
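
    A hedged sketch of this kind of logistic-regression workflow: fit a model for P(nitrate > 2 mg/L) from well depth and land-use covariates, then invert the fitted logit to find the depth at which the exceedance probability falls to 10 percent (a 90-percent probability of water below 2 mg/L). The synthetic data and variable names are assumptions, not the study's dataset or coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Synthetic covariates: well depth (ft), % agricultural land within 4 km, precipitation (in).
depth = rng.uniform(20, 1000, n)
ag_pct = rng.uniform(0, 100, n)
precip = rng.uniform(10, 60, n)

# Synthetic "truth": exceedance more likely for shallow wells in agricultural areas.
logit_true = 0.5 - 0.004 * depth + 0.03 * ag_pct - 0.02 * precip
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))

X = np.column_stack([depth, ag_pct, precip])
model = LogisticRegression().fit(X, y)

# Probability of exceeding 2 mg/L for a 145-ft well, 60% agriculture, 30 in precipitation.
p = model.predict_proba([[145.0, 60.0, 30.0]])[0, 1]

# Depth needed for a 90% chance of NOT exceeding 2 mg/L, holding the other covariates fixed:
# solve intercept + b_depth*d + b_ag*60 + b_pr*30 = logit(0.10) for d.
b0 = model.intercept_[0]
b_depth, b_ag, b_pr = model.coef_[0]
target_logit = np.log(0.10 / 0.90)
depth_needed = (target_logit - b0 - b_ag * 60.0 - b_pr * 30.0) / b_depth
print(f"P(exceed) at 145 ft: {p:.2f}; depth for 90% non-exceedance: {depth_needed:.0f} ft")
```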

  4. Estimates of annual survival probabilities for adult Florida manatees (Trichechus manatus latirostris)

    USGS Publications Warehouse

    Langtimm, C.A.; O'Shea, T.J.; Pradel, R.; Beck, C.A.

    1998-01-01

    The population dynamics of large, long-lived mammals are particularly sensitive to changes in adult survival. Understanding factors affecting survival patterns is therefore critical for developing and testing theories of population dynamics and for developing management strategies aimed at preventing declines or extinction in such taxa. Few studies have used modern analytical approaches for analyzing variation and testing hypotheses about survival probabilities in large mammals. This paper reports a detailed analysis of annual adult survival in the Florida manatee (Trichechus manatus latirostris), an endangered marine mammal, based on a mark-recapture approach. Natural and boat-inflicted scars distinctively 'marked' individual manatees that were cataloged in a computer-based photographic system. Photo-documented resightings provided 'recaptures.' Using open population models, annual adult-survival probabilities were estimated for manatees observed in winter in three areas of Florida: Blue Spring, Crystal River, and the Atlantic coast. After using goodness-of-fit tests in Program RELEASE to search for violations of the assumptions of mark-recapture analysis, survival and sighting probabilities were modeled under several different biological hypotheses with Program SURGE. Estimates of mean annual probability of sighting varied from 0.948 for Blue Spring to 0.737 for Crystal River and 0.507 for the Atlantic coast. At Crystal River and Blue Spring, annual survival probabilities were best estimated as constant over the study period at 0.96 (95% CI = 0.951-0.975 and 0.900-0.985, respectively). On the Atlantic coast, where manatees are impacted more by human activities, annual survival probabilities had a significantly lower mean estimate of 0.91 (95% CI = 0.887-0.926) and varied unpredictably over the study period. For each study area, survival did not differ between sexes and was independent of relative adult age. The high constant adult-survival probabilities estimated

  5. Estimating site occupancy rates when detection probabilities are less than one

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.

    2002-01-01

    Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1 (the approach performs well when detection probabilities exceed roughly 0.3). We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
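
    A minimal sketch of the zero-inflated binomial likelihood that underlies this class of site-occupancy models, fitted by direct numerical maximization; the simulated detection histories and parameter values are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import comb, expit

rng = np.random.default_rng(2)
n_sites, K = 100, 5                 # number of sites and repeat visits
psi_true, p_true = 0.6, 0.4
occupied = rng.random(n_sites) < psi_true
y = rng.binomial(K, p_true * occupied)   # detections per site (always 0 if unoccupied)

def negloglik(theta):
    psi, p = expit(theta)           # logit scale keeps parameters in (0, 1)
    binom = comb(K, y) * p**y * (1 - p)**(K - y)
    lik = psi * binom + (1 - psi) * (y == 0)   # zero-inflation for unoccupied sites
    return -np.sum(np.log(lik))

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")
```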

  6. On the Estimation of Detection Probabilities for Sampling Stream-Dwelling Fishes.

    SciTech Connect

    Peterson, James T.

    1999-11-01

    To examine the adequacy of fish probability of detection estimates, I examined distributional properties of survey and monitoring data for bull trout (Salvelinus confluentus), brook trout (Salvelinus fontinalis), westslope cutthroat trout (Oncorhynchus clarki lewisi), chinook salmon parr (Oncorhynchus tshawytscha), and steelhead/redband trout (Oncorhynchus mykiss spp.), from 178 streams in the Interior Columbia River Basin. Negative binomial dispersion parameters varied considerably among species and streams, but were significantly (P<0.05) positively related to fish density. Across streams, the variances in fish abundances differed greatly among species and indicated that the data for all species were overdispersed with respect to the Poisson (i.e., the variances exceeded the means). This significantly affected Poisson probability of detection estimates, which were the highest across species and were, on average, 3.82, 2.66, and 3.47 times greater than baseline values. Required sample sizes for species detection at the 95% confidence level were also lowest for the Poisson, which underestimated sample size requirements an average of 72% across species. Negative binomial and Poisson-gamma probability of detection and sample size estimates were more accurate than the Poisson and generally less than 10% from baseline values. My results indicate the Poisson and binomial assumptions often are violated, which results in probability of detection estimates that are biased high and sample size estimates that are biased low. To increase the accuracy of these estimates, I recommend that future studies use predictive distributions that can incorporate multiple sources of uncertainty or excess variance and that all distributional assumptions be explicitly tested.
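
    A small sketch of why the distributional assumption matters here: for the same mean count per sampled unit, an overdispersed negative binomial puts more mass at zero than a Poisson, so the per-unit detection probability is lower and the sample size needed for 95% detection is larger. The mean and dispersion values are illustrative assumptions.

```python
import numpy as np

m = 1.5   # assumed mean fish count per sampled unit
k = 0.5   # assumed negative binomial dispersion (small k = strong overdispersion)

# Probability of observing zero individuals in one unit under each distribution.
p0_pois = np.exp(-m)
p0_nb = (k / (k + m)) ** k

p_detect_pois = 1 - p0_pois
p_detect_nb = 1 - p0_nb

# Units required for a 95% chance of detecting the species at least once.
n_pois = int(np.ceil(np.log(0.05) / np.log(p0_pois)))
n_nb = int(np.ceil(np.log(0.05) / np.log(p0_nb)))

print(f"Poisson:  P(detect per unit) = {p_detect_pois:.2f}, units for 95% = {n_pois}")
print(f"Neg-bin.: P(detect per unit) = {p_detect_nb:.2f}, units for 95% = {n_nb}")
```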

  7. Effects of prior detections on estimates of detection probability, abundance, and occupancy

    USGS Publications Warehouse

    Riddle, Jason D.; Mordecai, Rua S.; Pollock, Kenneth H.; Simons, Theodore R.

    2010-01-01

    Survey methods that account for detection probability often require repeated detections of individual birds or repeated visits to a site to conduct counts or collect presence-absence data. Initial encounters with individual species or individuals of a species could influence detection probabilities for subsequent encounters. For example, observers may be more likely to redetect a species or individual once they are aware of the presence of that species or individual at a particular site. Not accounting for these effects could result in biased estimators of detection probability, abundance, and occupancy. We tested for effects of prior detections in three data sets that differed dramatically by species, geographic location, and method of counting birds. We found strong support (AIC weights from 83% to 100%) for models that allowed for the effects of prior detections. These models produced estimates of detection probability, abundance, and occupancy that differed substantially from those produced by models that ignored the effects of prior detections. We discuss the consequences of the effects of prior detections on estimation for several sampling methods and provide recommendations for avoiding these effects through survey design or by modeling them when they cannot be avoided.

  8. Fundamental questions of earthquake statistics, source behavior, and the estimation of earthquake probabilities from possible foreshocks

    USGS Publications Warehouse

    Michael, Andrew J.

    2012-01-01

    Estimates of the probability that an ML 4.8 earthquake, which occurred near the southern end of the San Andreas fault on 24 March 2009, would be followed by an M 7 mainshock over the following three days vary from 0.0009 using a Gutenberg–Richter model of aftershock statistics (Reasenberg and Jones, 1989) to 0.04 using a statistical model of foreshock behavior and long‐term estimates of large earthquake probabilities, including characteristic earthquakes (Agnew and Jones, 1991). I demonstrate that the disparity between the existing approaches depends on whether or not they conform to Gutenberg–Richter behavior. While Gutenberg–Richter behavior is well established over large regions, it could be violated on individual faults if they have characteristic earthquakes or over small areas if the spatial distribution of large‐event nucleations is disproportional to the rate of smaller events. I develop a new form of the aftershock model that includes characteristic behavior and combines the features of both models. This new model and the older foreshock model yield the same results when given the same inputs, but the new model has the advantage of producing probabilities for events of all magnitudes, rather than just for events larger than the initial one. Compared with the aftershock model, the new model has the advantage of taking into account long‐term earthquake probability models. Using consistent parameters, the probability of an M 7 mainshock on the southernmost San Andreas fault is 0.0001 for three days from long‐term models and the clustering probabilities following the ML 4.8 event are 0.00035 for a Gutenberg–Richter distribution and 0.013 for a characteristic‐earthquake magnitude–frequency distribution. Our decisions about the existence of characteristic earthquakes and how large earthquakes nucleate have a first‐order effect on the probabilities obtained from short‐term clustering models for these large events.
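
    For orientation only, the magnitude-frequency ingredient that separates the two approaches can be written compactly: under Gutenberg-Richter scaling the chance that a triggered event reaches mainshock size is obtained by extrapolating the b-value, whereas a characteristic-earthquake distribution concentrates additional probability at large magnitudes. The b-value and the numbers below are illustrative assumptions and do not reproduce the paper's full three-day clustering calculation.

```latex
% Under a Gutenberg-Richter magnitude-frequency distribution with b-value b,
%   log10 N(>= M) = a - b M ,
% the probability that an event known to exceed a foreshock magnitude M_f also
% exceeds a mainshock magnitude M_m is
\[
  P(M \ge M_m \mid M \ge M_f) = 10^{-b\,(M_m - M_f)} .
\]
% Illustrative numbers (assumed b = 1): P(M \ge 7 \mid M \ge 4.8) = 10^{-2.2} \approx 0.006.
% A characteristic-earthquake distribution places extra probability mass near the
% characteristic magnitude, which can raise this conditional probability by an order of
% magnitude or more -- the source of the disparity discussed in the record.
```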

  9. Estimating the Upper Limit of Lifetime Probability Distribution, Based on Data of Japanese Centenarians.

    PubMed

    Hanayama, Nobutane; Sibuya, Masaaki

    2016-08-01

    In modern biology, theories of aging fall mainly into two groups: damage theories and programmed theories. If programmed theories are true, the probability that human beings live beyond a specific age will be zero. In contrast, if damage theories are true, such an age does not exist, and a longevity record will eventually be destroyed. In this article, to examine the actual state of affairs, a special type of binomial model based on the generalized Pareto distribution has been applied to data on Japanese centenarians. From the results, it is concluded that the upper limit of the lifetime probability distribution in the Japanese population has been estimated to be 123 years. PMID:26362439
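
    A hedged sketch of the extreme-value reasoning in this record: excess lifetimes above a high age threshold are fitted with a generalized Pareto distribution, and a negative fitted shape parameter implies a finite upper endpoint at threshold - scale/shape. The threshold and synthetic exceedances are assumptions; the paper's binomial-type model and its 123-year estimate are not reproduced here.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
threshold = 100.0   # assumed age threshold (years)

# Synthetic excess lifetimes above the threshold, drawn from a GPD with a
# negative shape, i.e. a distribution that has a finite upper endpoint.
true_shape, true_scale = -0.15, 3.0
excess = genpareto.rvs(true_shape, scale=true_scale, size=2000, random_state=rng)

# Fit the GPD to the excesses (location fixed at 0).
shape_hat, _, scale_hat = genpareto.fit(excess, floc=0)

if shape_hat < 0:
    upper_limit = threshold - scale_hat / shape_hat
    print(f"estimated upper limit of lifetime: {upper_limit:.1f} years")
else:
    print("fitted shape >= 0: no finite upper limit implied")
```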

  10. Using counts to simultaneously estimate abundance and detection probabilities in a salamander community

    USGS Publications Warehouse

    Dodd, C.K., Jr.; Dorazio, R.M.

    2004-01-01

    A critical variable in both ecological and conservation field studies is determining how many individuals of a species are present within a defined sampling area. Labor intensive techniques such as capture-mark-recapture and removal sampling may provide estimates of abundance, but there are many logistical constraints to their widespread application. Many studies on terrestrial and aquatic salamanders use counts as an index of abundance, assuming that detection remains constant while sampling. If this constancy is violated, determination of detection probabilities is critical to the accurate estimation of abundance. Recently, a model was developed that provides a statistical approach that allows abundance and detection to be estimated simultaneously from spatially and temporally replicated counts. We adapted this model to estimate these parameters for salamanders sampled over a six-year period in area-constrained plots in Great Smoky Mountains National Park. Estimates of salamander abundance varied among years, but annual changes in abundance did not vary uniformly among species. Except for one species, abundance estimates were not correlated with site covariates (elevation/soil and water pH, conductivity, air and water temperature). The uncertainty in the estimates was so large as to make correlations ineffectual in predicting which covariates might influence abundance. Detection probabilities also varied among species and sometimes among years for the six species examined. We found such a high degree of variation in our counts and in estimates of detection among species, sites, and years as to cast doubt upon the appropriateness of using count data to monitor population trends using a small number of area-constrained survey plots. Still, the model provided reasonable estimates of abundance that could make it useful in estimating population size from count surveys.
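
    A compact sketch of a binomial-mixture (N-mixture) likelihood of the kind referred to in this record, in which spatially and temporally replicated counts yield simultaneous estimates of abundance and detection; the simulated counts and the truncation point for the latent abundance are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit
from scipy.stats import binom, poisson

rng = np.random.default_rng(4)
n_sites, T = 60, 4
lam_true, p_true = 5.0, 0.3
N = rng.poisson(lam_true, n_sites)                        # latent abundance per plot
y = rng.binomial(N[:, None], p_true, size=(n_sites, T))   # replicated counts

N_MAX = 60  # truncation point for the sum over latent abundance (assumption)

def negloglik(theta):
    lam, p = np.exp(theta[0]), expit(theta[1])
    Ns = np.arange(N_MAX + 1)
    prior = poisson.pmf(Ns, lam)                          # P(N) for each candidate N
    ll = 0.0
    for yi in y:                                          # site-by-site marginal likelihood
        cond = np.prod(binom.pmf(yi[:, None], Ns[None, :], p), axis=0)
        ll += np.log(np.sum(prior * cond))
    return -ll

fit = minimize(negloglik, x0=[np.log(3.0), 0.0], method="Nelder-Mead")
print(f"lambda_hat = {np.exp(fit.x[0]):.2f}, p_hat = {expit(fit.x[1]):.2f}")
```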

  11. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
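
    A small sketch of one point the record emphasizes: under a Poisson (exponential return time) model, the mean return time should be estimated from the full observation window, including the open intervals before the first and after the last dated deposit, and age-date uncertainty can be propagated by resampling the ages. The deposit ages, uncertainties, and record span below are placeholders, not the IODP data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Placeholder center ages (ka) and 1-sigma dating uncertainties for a deposit sequence.
ages = np.array([12.0, 25.0, 31.0, 44.0, 58.0])
sigma = np.array([1.0, 1.5, 1.0, 2.0, 2.0])
record_top, record_bottom = 0.0, 65.0       # limits of the dated section (ka)

# Arithmetic mean of the closed inter-event intervals ignores the open intervals
# before the first and after the last event, so it tends to understate return time.
naive_mrt = np.mean(np.diff(np.sort(ages)))

# Exponential (Poisson-process) MLE: total observed time divided by number of events,
# which automatically accounts for the open intervals.
poisson_mrt = (record_bottom - record_top) / len(ages)

# Age-date uncertainty propagated into the naive estimator by Monte Carlo resampling.
draws = np.sort(rng.normal(ages, sigma, size=(5000, len(ages))), axis=1)
naive_dist = np.mean(np.diff(draws, axis=1), axis=1)

print(f"naive mean return time: {naive_mrt:.1f} ka "
      f"(MC 95% range {np.percentile(naive_dist, 2.5):.1f}-{np.percentile(naive_dist, 97.5):.1f} ka)")
print(f"Poisson MLE mean return time: {poisson_mrt:.1f} ka")
```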

  12. Estimating the absolute position of a mobile robot using position probability grids

    SciTech Connect

    Burgard, W.; Fox, D.; Hennig, D.; Schmidt, T.

    1996-12-31

    In order to re-use existing models of the environment, mobile robots must be able to estimate their position and orientation in such models. Most of the existing methods for position estimation are based on special purpose sensors or aim at tracking the robot's position relative to the known starting point. This paper describes the position probability grid approach to estimating the robot's absolute position and orientation in a metric model of the environment. Our method is designed to work with standard sensors and is independent of any knowledge about the starting point. It is a Bayesian approach based on certainty grids. In each cell of such a grid we store the probability that this cell refers to the current position of the robot. These probabilities are obtained by integrating the likelihoods of sensor readings over time. Results described in this paper show that our technique is able to reliably estimate the position of a robot in complex environments. Our approach has proven to be robust with respect to inaccurate environmental models, noisy sensors, and ambiguous situations.
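
    A minimal one-dimensional sketch of the position-probability-grid idea: each cell holds the probability that the robot is there, a motion step shifts and blurs the grid, and each sensor reading reweights it by the measurement likelihood before normalization. The corridor map, sensor model, and motion noise are assumptions for illustration.

```python
import numpy as np

# 1-D corridor with a door (1) / wall (0) map; each cell is one position hypothesis.
world = np.array([0, 1, 0, 0, 0, 1, 0, 0, 1, 0])
belief = np.full(len(world), 1.0 / len(world))    # uniform prior: unknown start

P_HIT, P_MISS = 0.8, 0.2                   # assumed sensor model: P(reading matches map)
P_EXACT, P_UNDER, P_OVER = 0.8, 0.1, 0.1   # assumed motion noise for a 1-cell move

def sense(belief, reading):
    """Bayes update: multiply by the measurement likelihood, then normalize."""
    likelihood = np.where(world == reading, P_HIT, P_MISS)
    posterior = belief * likelihood
    return posterior / posterior.sum()

def move(belief):
    """Convolve the belief with the motion model (circular corridor for simplicity)."""
    return (P_EXACT * np.roll(belief, 1)
            + P_UNDER * belief
            + P_OVER * np.roll(belief, 2))

# The robot repeatedly senses and then moves one cell to the right.
for reading in [1, 1, 0]:
    belief = sense(belief, reading)
    belief = move(belief)

print(np.round(belief, 3), "-> most likely cell:", int(np.argmax(belief)))
```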

  13. Estimating the probability density of the scattering cross section from Rayleigh scattering experiments

    NASA Astrophysics Data System (ADS)

    Hengartner, Nicolas; Talbot, Lawrence; Shepherd, Ian; Bickel, Peter

    1995-06-01

    An important parameter in the experimental study of dynamics of combustion is the probability distribution of the effective Rayleigh scattering cross section. This cross section cannot be observed directly. Instead, pairs of measurements of laser intensities and Rayleigh scattering counts are observed. Our aim is to provide estimators for the probability density function of the scattering cross section from such measurements. The probability distribution is derived first for the number of recorded photons in the Rayleigh scattering experiment. In this approach the laser intensity measurements are treated as known covariates. This departs from the usual practice of normalizing the Rayleigh scattering counts by the laser intensities. For distributions supported on finite intervals, two estimators are proposed, one based on expansion of the density in

  14. Probability based remaining capacity estimation using data-driven and neural network model

    NASA Astrophysics Data System (ADS)

    Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai

    2016-05-01

    Since lithium-ion batteries are assembled into packs in large numbers and are complex electrochemical devices, their monitoring and safety are key issues for applications of battery technology. An accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an n-order RC equivalent circuit model is combined with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.

  15. Estimating migratory connectivity of birds when re-encounter probabilities are heterogeneous

    USGS Publications Warehouse

    Cohen, Emily B.; Hostelter, Jeffrey A.; Royle, J. Andrew; Marra, Peter P.

    2014-01-01

    Understanding the biology and conducting effective conservation of migratory species requires an understanding of migratory connectivity – the geographic linkages of populations between stages of the annual cycle. Unfortunately, for most species, we are lacking such information. The North American Bird Banding Laboratory (BBL) houses an extensive database of marking, recaptures and recoveries, and such data could provide migratory connectivity information for many species. To date, however, few species have been analyzed for migratory connectivity largely because heterogeneous re-encounter probabilities make interpretation problematic. We accounted for regional variation in re-encounter probabilities by borrowing information across species and by using effort covariates on recapture and recovery probabilities in a multistate capture–recapture and recovery model. The effort covariates were derived from recaptures and recoveries of species within the same regions. We estimated the migratory connectivity for three tern species breeding in North America and over-wintering in the tropics, common (Sterna hirundo), roseate (Sterna dougallii), and Caspian terns (Hydroprogne caspia). For western breeding terns, model-derived estimates of migratory connectivity differed considerably from those derived directly from the proportions of re-encounters. Conversely, for eastern breeding terns, estimates were merely refined by the inclusion of re-encounter probabilities. In general, eastern breeding terns were strongly connected to eastern South America, and western breeding terns were strongly linked to the more western parts of the nonbreeding range under both models. Through simulation, we found this approach is likely useful for many species in the BBL database, although precision improved with higher re-encounter probabilities and stronger migratory connectivity. We describe an approach to deal with the inherent biases in BBL banding and re-encounter data to demonstrate

  16. How should detection probability be incorporated into estimates of relative abundance?

    USGS Publications Warehouse

    MacKenzie, D.I.; Kendall, W.L.

    2002-01-01

    Determination of the relative abundance of two populations, separated by time or space, is of interest in many ecological situations. We focus on two estimators of relative abundance, which assume that the probability that an individual is detected at least once in the survey is either equal or unequal for the two populations. We present three methods for incorporating the collected information into our inference. The first method, proposed previously, is a traditional hypothesis test for evidence that detection probabilities are unequal. However, we feel that, a priori, it is more likely that detection probabilities are actually different; hence, the burden of proof should be shifted, requiring evidence that detection probabilities are practically equivalent. The second method we present, equivalence testing, is one approach to doing so. Third, we suggest that model averaging could be used by combining the two estimators according to derived model weights. These differing approaches are applied to a mark-recapture experiment on Nuttall's cottontail rabbit (Sylvilagus nuttallii) conducted in central Oregon during 1974 and 1975, which has been previously analyzed by other authors.

  17. Estimating conditional probability of volcanic flows for forecasting event distribution and making evacuation decisions

    NASA Astrophysics Data System (ADS)

    Stefanescu, E. R.; Patra, A.; Sheridan, M. F.; Cordoba, G.

    2012-04-01

    In this study we propose a conditional probability framework for Galeras volcano, which is one of the most active volcanoes in the world. Nearly 400,000 people currently live near the volcano; 10,000 of them reside within the zone of high volcanic hazard. Pyroclastic flows pose a major hazard for this population. Some of the questions we try to answer when studying conditional probabilities for volcanic hazards are: "Should a village be evacuated and villagers moved to a different location?", "Should we construct a road along this valley or along a different one?", "Should this university be evacuated?" Here, we try to identify critical regions such as villages, infrastructure, cities, and universities to determine their relative probability of inundation in the case of a volcanic eruption. In this study, a set of numerical simulations was performed using the computational tool TITAN2D, which simulates granular flow over a digital representation of the natural terrain. The particular choice from among the methods described below can be based on the amount of information necessary in the evacuation decision and on the complexity of the analysis required to make such a decision. A set of 4200 TITAN2D runs was performed for several different vent locations so that the area of all probable vents is covered. The output of the geophysical model provides a flow map which contains the maximum flow depth over time. Frequency approach - In estimating the conditional probability of volcanic flows we define two discrete random variables (r.v.) A and B, where P(A=1) and P(B=1) represent the probabilities of having a flow at locations A and B, respectively. For this analysis we choose two critical locations identified by their UTM coordinates. The flow map is then used to identify, at the pixel level, flow or non-flow at the two locations. By counting the number of times there is flow or non-flow, we are able to find the marginal probabilities along with the joint probability associated with an
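
    A minimal sketch of the frequency approach described at the end of this record: given per-run binary flow/no-flow outcomes at two critical locations extracted from the flow maps, the marginal, joint, and conditional probabilities are simple counts. The simulated outcomes below are placeholders, not TITAN2D output.

```python
import numpy as np

rng = np.random.default_rng(6)
n_runs = 4200   # number of simulated eruption scenarios (as in the record)

# Placeholder binary outcomes: 1 if the maximum flow depth at the location is
# non-zero in a given run. Location B is made partially dependent on A.
flow_A = rng.random(n_runs) < 0.12
flow_B = np.where(flow_A, rng.random(n_runs) < 0.55, rng.random(n_runs) < 0.04)

p_A = flow_A.mean()                 # P(A=1): marginal probability of inundation at A
p_B = flow_B.mean()                 # P(B=1)
p_AB = (flow_A & flow_B).mean()     # joint probability
p_B_given_A = p_AB / p_A            # conditional probability used for ranking locations

print(f"P(A)={p_A:.3f}  P(B)={p_B:.3f}  P(A,B)={p_AB:.3f}  P(B|A)={p_B_given_A:.3f}")
```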

  18. Probability estimates of seismic event occurrence compared to health hazards - Forecasting Taipei's Earthquakes

    NASA Astrophysics Data System (ADS)

    Fung, D. C. N.; Wang, J. P.; Chang, S. H.; Chang, S. C.

    2014-12-01

    Using a revised statistical model built on past seismic probability models, the probability of different magnitude earthquakes occurring within variable timespans can be estimated. The revised model is based on the Poisson distribution and includes the use of best-estimate values of the probability distribution of different magnitude earthquakes recurring from a fault from literature sources. Our study aims to apply this model to the Taipei metropolitan area with a population of 7 million, which lies in the Taipei Basin and is bounded by two normal faults: the Sanchaio and Taipei faults. The Sanchaio fault is suggested to be responsible for previous large magnitude earthquakes, such as the 1694 magnitude 7 earthquake in northwestern Taipei (Cheng et al., 2010). Based on a magnitude 7 earthquake return period of 543 years, the model predicts the occurrence of a magnitude 7 earthquake within 20 years at 1.81%, within 79 years at 6.77% and within 300 years at 21.22%. These estimates increase significantly when considering a magnitude 6 earthquake; the chance of one occurring within the next 20 years is estimated to be 3.61%, within 79 years at 13.54% and within 300 years at 42.45%. The 79-year period represents the average lifespan of the Taiwan population. In contrast, based on data from 2013, the probabilities of Taiwan residents experiencing heart disease or malignant neoplasm are 11.5% and 29%, respectively. The inference of this study is that the calculated risk to the Taipei population from a potentially damaging magnitude 6 or greater earthquake occurring within their lifetime is just as great as that of suffering from a heart attack or other health ailments.
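
    A small sketch of the baseline Poisson occurrence calculation that estimates of this kind start from: for a return period T, the probability of at least one event in an exposure window of t years is 1 - exp(-t/T). The record's revised model folds in additional recurrence information, so its quoted figures need not match this simple formula.

```python
import numpy as np

def poisson_occurrence_probability(t_years, return_period_years):
    """P(at least one event in t years) for a Poisson process with the given return period."""
    return 1.0 - np.exp(-t_years / return_period_years)

T = 543.0   # return period quoted in the record for a magnitude 7 event (years)
for t in (20, 79, 300):
    p = poisson_occurrence_probability(t, T)
    print(f"t = {t:3d} years: P = {100 * p:.2f}%")
```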

  19. PIGS: improved estimates of identity-by-descent probabilities by probabilistic IBD graph sampling.

    PubMed

    Park, Danny S; Baran, Yael; Hormozdiari, Farhad; Eng, Celeste; Torgerson, Dara G; Burchard, Esteban G; Zaitlen, Noah

    2015-01-01

    Identifying segments in the genome of different individuals that are identical-by-descent (IBD) is a fundamental element of genetics. IBD data is used for numerous applications including demographic inference, heritability estimation, and mapping disease loci. Simultaneous detection of IBD over multiple haplotypes has proven to be computationally difficult. To overcome this, many state of the art methods estimate the probability of IBD between each pair of haplotypes separately. While computationally efficient, these methods fail to leverage the clique structure of IBD resulting in less powerful IBD identification, especially for small IBD segments. PMID:25860540

  20. Weighted least square estimates of the parameters of a model of survivorship probabilities.

    PubMed

    Mitra, S

    1987-06-01

    "A weighted regression has been fitted to estimate the parameters of a model involving functions of survivorship probability and age. Earlier, the parameters were estimated by the method of ordinary least squares and the results were very encouraging. However, a multiple regression equation passing through the origin has been found appropriate for the present model from statistical consideration. Fortunately, this method, while methodologically more sophisticated, has a slight edge over the former as evidenced by the respective measures of reproducibility in the model and actual life tables selected for this study." PMID:12281212

  1. Survival probabilities with time-dependent treatment indicator: quantities and non-parametric estimators.

    PubMed

    Bernasconi, Davide Paolo; Rebora, Paola; Iacobelli, Simona; Valsecchi, Maria Grazia; Antolini, Laura

    2016-03-30

    The 'landmark' and 'Simon and Makuch' non-parametric estimators of the survival function are commonly used to contrast the survival experience of time-dependent treatment groups in applications such as stem cell transplant versus chemotherapy in leukemia. However, the theoretical survival functions corresponding to the second approach were not clearly defined in the literature, and the use of the 'Simon and Makuch' estimator was criticized in the biostatistical community. Here, we review the 'landmark' approach, showing that it focuses on the average survival of patients conditional on being failure free and on the treatment status assessed at the landmark time. We argue that the 'Simon and Makuch' approach represents counterfactual survival probabilities where treatment status is forced to be fixed: the patient is thought as under chemotherapy without possibility to switch treatment or as under transplant since the beginning of the follow-up. We argue that the 'Simon and Makuch' estimator leads to valid estimates only under the Markov assumption, which is however less likely to occur in practical applications. This motivates the development of a novel approach based on time rescaling, which leads to suitable estimates of the counterfactual probabilities in a semi-Markov process. The method is also extended to deal with a fixed landmark time of interest. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26503800

  2. A robust design mark-resight abundance estimator allowing heterogeneity in resighting probabilities

    USGS Publications Warehouse

    McClintock, B.T.; White, Gary C.; Burnham, K.P.

    2006-01-01

    This article introduces the beta-binomial estimator (BBE), a closed-population abundance mark-resight model combining the favorable qualities of maximum likelihood theory and the allowance of individual heterogeneity in sighting probability (p). The model may be parameterized for a robust sampling design consisting of multiple primary sampling occasions where closure need not be met between primary occasions. We applied the model to brown bear data from three study areas in Alaska and compared its performance to the joint hypergeometric estimator (JHE) and Bowden's estimator (BOWE). BBE estimates suggest heterogeneity levels were non-negligible and discourage the use of JHE for these data. Compared to JHE and BOWE, confidence intervals were considerably shorter for the AICc model-averaged BBE. To evaluate the properties of BBE relative to JHE and BOWE when sample sizes are small, simulations were performed with data from three primary occasions generated under both individual heterogeneity and temporal variation in p. All models remained consistent regardless of levels of variation in p. In terms of precision, the AICc model-averaged BBE showed advantages over JHE and BOWE when heterogeneity was present and mean sighting probabilities were similar between primary occasions. Based on the conditions examined, BBE is a reliable alternative to JHE or BOWE and provides a framework for further advances in mark-resight abundance estimation. ?? 2006 American Statistical Association and the International Biometric Society.

  3. Efficient estimation of contact probabilities from inter-bead distance distributions in simulated polymer chains

    NASA Astrophysics Data System (ADS)

    Meluzzi, Dario; Arya, Gaurav

    2015-02-01

    The estimation of contact probabilities (CP) from conformations of simulated bead-chain polymer models is a key step in methods that aim to elucidate the spatial organization of chromatin from analysis of experimentally determined contacts between different genomic loci. Although CPs can be estimated simply by counting contacts between beads in a sample of simulated chain conformations, reliable estimation of small CPs through this approach requires a large number of conformations, which can be computationally expensive to obtain. Here we describe an alternative computational method for estimating relatively small CPs without requiring large samples of chain conformations. In particular, we estimate the CPs from functional approximations to the cumulative distribution function (cdf) of the inter-bead distance for each pair of beads. These cdf approximations are obtained by fitting the extended generalized lambda distribution (EGLD) to inter-bead distances determined from a sample of chain conformations, which are in turn generated by Monte Carlo simulations. We find that CPs estimated from fitted EGLD cdfs are significantly more accurate than CPs estimated using contact counts from samples of limited size, and are more precise with all sample sizes, permitting as much as a tenfold reduction in conformation sample size for chains of 200 beads and samples smaller than 10^5 conformations. This method of CP estimation thus has potential to accelerate computational efforts to elucidate the spatial organization of chromatin.
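
    A hedged sketch of the estimation idea: instead of counting possibly rare contacts directly, fit a parametric distribution to the inter-bead distances and read the contact probability off the fitted cdf at the contact threshold. The EGLD used in the paper is not available in SciPy, so a gamma distribution stands in here purely to illustrate the workflow, and the distance sample is synthetic.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)

# Synthetic inter-bead distances for one bead pair (arbitrary length units).
distances = gamma.rvs(a=4.0, scale=12.0, size=2000, random_state=rng)
contact_threshold = 10.0    # distance below which the pair counts as "in contact"

# Raw estimate: fraction of sampled conformations in contact.
cp_counting = np.mean(distances < contact_threshold)

# Distribution-based estimate: fit the distances, evaluate the fitted cdf at the threshold.
# (The paper fits an extended generalized lambda distribution; gamma is a stand-in here.)
a_hat, loc_hat, scale_hat = gamma.fit(distances, floc=0)
cp_fitted = gamma.cdf(contact_threshold, a_hat, loc=loc_hat, scale=scale_hat)

print(f"contact probability: counting = {cp_counting:.4f}, fitted cdf = {cp_fitted:.4f}")
```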

  4. Efficient estimation of contact probabilities from inter-bead distance distributions in simulated polymer chains.

    PubMed

    Meluzzi, Dario; Arya, Gaurav

    2015-02-18

    The estimation of contact probabilities (CP) from conformations of simulated bead-chain polymer models is a key step in methods that aim to elucidate the spatial organization of chromatin from analysis of experimentally determined contacts between different genomic loci. Although CPs can be estimated simply by counting contacts between beads in a sample of simulated chain conformations, reliable estimation of small CPs through this approach requires a large number of conformations, which can be computationally expensive to obtain. Here we describe an alternative computational method for estimating relatively small CPs without requiring large samples of chain conformations. In particular, we estimate the CPs from functional approximations to the cumulative distribution function (cdf) of the inter-bead distance for each pair of beads. These cdf approximations are obtained by fitting the extended generalized lambda distribution (EGLD) to inter-bead distances determined from a sample of chain conformations, which are in turn generated by Monte Carlo simulations. We find that CPs estimated from fitted EGLD cdfs are significantly more accurate than CPs estimated using contact counts from samples of limited size, and are more precise with all sample sizes, permitting as much as a tenfold reduction in conformation sample size for chains of 200 beads and samples smaller than 10(5) conformations. This method of CP estimation thus has potential to accelerate computational efforts to elucidate the spatial organization of chromatin. PMID:25563926

  5. Estimating state-transition probabilities for unobservable states using capture-recapture/resighting data

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.

    2002-01-01

    Temporary emigration was identified some time ago as causing potential problems in capture-recapture studies, and in the last five years approaches have been developed for dealing with special cases of this general problem. Temporary emigration can be viewed more generally as involving transitions to and from an unobservable state, and frequently the state itself is one of biological interest (e.g., 'nonbreeder'). Development of models that permit estimation of relevant parameters in the presence of an unobservable state requires either extra information (e.g., as supplied by Pollock's robust design) or the following classes of model constraints: reducing the order of Markovian transition probabilities, imposing a degree of determinism on transition probabilities, removing state specificity of survival probabilities, and imposing temporal constancy of parameters. The objective of the work described in this paper is to investigate estimability of model parameters under a variety of models that include an unobservable state. Beginning with a very general model and no extra information, we used numerical methods to systematically investigate the use of ancillary information and constraints to yield models that are useful for estimation. The result is a catalog of models for which estimation is possible. An example analysis of sea turtle capture-recapture data under two different models showed similar point estimates but increased precision for the model that incorporated ancillary data (the robust design) when compared to the model with deterministic transitions only. This comparison and the results of our numerical investigation of model structures lead to design suggestions for capture-recapture studies in the presence of an unobservable state.

  6. Calculation of the number of Monte Carlo histories for a planetary protection probability of impact estimation

    NASA Astrophysics Data System (ADS)

    Barengoltz, Jack

    2016-07-01

    Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I by a launch vehicle (upper stage) of a protected planet. The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean for a Poisson distribution. F. Garwood (1936, "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n). This formula for the upper and lower limits of the mean μ with two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In an MC analysis for planetary protection, only the upper limit is of interest, i.e., the single
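
    A small sketch of the Garwood-style chi-squared bound the record invokes: given n hits in N Monte Carlo histories, a one-sided upper confidence limit on the Poisson mean number of hits is chi-squared(LOC; 2(n+1))/2, and dividing by N bounds P_I. The hit count, number of histories, and level of confidence below are placeholders.

```python
from scipy.stats import chi2

def poisson_upper_limit(n_hits, confidence):
    """One-sided upper confidence limit on the Poisson mean given n observed hits."""
    return 0.5 * chi2.ppf(confidence, 2 * (n_hits + 1))

n_histories = 1_000_000   # placeholder number of Monte Carlo histories
n_hits = 0                # placeholder number of simulated impacts
loc = 0.95                # placeholder level of confidence

mu_upper = poisson_upper_limit(n_hits, loc)
p_impact_upper = mu_upper / n_histories
print(f"upper limit on mean number of hits: {mu_upper:.3f}")
print(f"upper limit on P_I at {loc:.0%} confidence: {p_impact_upper:.2e}")
```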

  7. Innovative Meta-Heuristic Approach Application for Parameter Estimation of Probability Distribution Model

    NASA Astrophysics Data System (ADS)

    Lee, T. S.; Yoon, S.; Jeong, C.

    2012-12-01

    The primary purpose of frequency analysis in hydrology is to estimate the magnitude of an event with a given frequency of occurrence. The precision of frequency analysis depends on the selection of an appropriate probability distribution model (PDM) and parameter estimation techniques. A number of PDMs have been developed to describe the probability distribution of the hydrological variables. For each of the developed PDMs, estimated parameters are provided based on alternative estimation techniques, such as the method of moments (MOM), probability weighted moments (PWM), linear function of ranked observations (L-moments), and maximum likelihood (ML). Generally, the results using ML are more reliable than the other methods. However, the ML technique is more laborious than the other methods because an iterative numerical solution, such as the Newton-Raphson method, must be used for the parameter estimation of PDMs. In the meantime, meta-heuristic approaches have been developed to solve various engineering optimization problems (e.g., linear and stochastic, dynamic, nonlinear). These approaches include genetic algorithms, ant colony optimization, simulated annealing, tabu searches, and evolutionary computation methods. Meta-heuristic approaches use a stochastic random search instead of a gradient search so that intricate derivative information is unnecessary. Therefore, the meta-heuristic approaches have been shown to be a useful strategy to solve optimization problems in hydrology. A number of studies focus on using meta-heuristic approaches for estimation of hydrological variables with parameter estimation of PDMs. Applied meta-heuristic approaches offer reliable solutions but use more computation time than derivative-based methods. Therefore, the purpose of this study is to enhance the meta-heuristic approach for the parameter estimation of PDMs by using a recently developed algorithm known as a harmony search (HS). The performance of the HS is compared to the

  8. An estimate of the probability of capture of a binary star by a supermassive black hole

    NASA Astrophysics Data System (ADS)

    Dremova, G. N.; Dremov, V. V.; Tutukov, A. V.

    2016-08-01

    A simple model for the dynamics of stars located in a sphere with a radius of one-tenth of the central parsec, designed to enable estimation of the probability of capture in the close vicinity (r < 10^-3 pc) of a supermassive black hole (SMBH) is presented. In the case of binary stars, such a capture with a high probability results in the formation of a hyper-velocity star. The population of stars in a sphere of radius <0.1 pc is calculated based on data for the Galactic rotation curve. To simulate the distortion of initially circular orbits of stars, these are subjected to a series of random shock encounters ("kicks"), whose net effect is to "push" these binary systems into the region of potential formation of hyper-velocity stars. The mean crossing time of the border of the close vicinity of the SMBH (r < 10^-3 pc) by the stellar orbit can be used to estimate the probability that a binary system is captured, followed by the possible ejection of a hyper-velocity star.

  9. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways. PMID:22576139
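
    A minimal sketch of the fault-tree arithmetic behind these applications, assuming independent basic events: an AND gate multiplies probabilities, an OR gate combines complements, and the chance that a primary facility and its backup are knocked out in the same earthquake is the AND of their failure probabilities (an optimistic lower bound when failures are positively correlated through common shaking). All probabilities are placeholders, not values from the study.

```python
from math import prod

def and_gate(probs):
    """All inputs must fail (independence assumed)."""
    return prod(probs)

def or_gate(probs):
    """At least one input fails (independence assumed)."""
    return 1.0 - prod(1.0 - p for p in probs)

# Placeholder per-earthquake failure probabilities for one facility's subsystems.
structure, power, cooling = 0.02, 0.10, 0.05
facility_fails = or_gate([structure, power, cooling])   # any subsystem failure stops operations

# Primary and backup data centres in the same event. Independence is optimistic here,
# since shared ground shaking induces positive correlation between the two failures.
backup_fails = 0.08
both_down = and_gate([facility_fails, backup_fails])

print(f"P(facility inoperative) = {facility_fails:.3f}")
print(f"P(primary and backup both inoperative, independence assumed) = {both_down:.4f}")
```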

  10. Empirical comparison of uniform and non-uniform probability sampling for estimating numbers of red-cockaded woodpecker colonies

    USGS Publications Warehouse

    Geissler, P.H.; Moyer, L.M.

    1983-01-01

    Four sampling and estimation methods for estimating the number of red-cockaded woodpecker colonies on National Forests in the Southeast were compared, using samples chosen from simulated populations based on the observed sample. The methods included (1) simple random sampling without replacement using a mean per sampling unit estimator, (2) simple random sampling without replacement with a ratio per pine area estimator, (3) probability proportional to 'size' sampling with replacement, and (4) probability proportional to 'size' without replacement using Murthy's estimator. The survey sample of 274 National Forest compartments (1000 acres each) constituted a superpopulation from which simulated stratum populations were selected with probability inversely proportional to the original probability of selection. Compartments were originally sampled with probabilities proportional to the probabilities that the compartments contained woodpeckers ('size'). These probabilities were estimated with a discriminant analysis based on tree species and tree age. The ratio estimator would have been the best estimator for this survey based on the mean square error. However, if more accurate predictions of woodpecker presence had been available, Murthy's estimator would have been the best. A subroutine to calculate Murthy's estimates is included; it is computationally feasible to analyze up to 10 samples per stratum.

  11. Dynamic Estimation of the Probability of Patient Readmission to the ICU using Electronic Medical Records

    PubMed Central

    Caballero, Karla; Akella, Ram

    2015-01-01

    In this paper, we propose a framework to dynamically estimate the probability that a patient is readmitted after being discharged from the ICU and transferred to a lower level of care. We model this probability as a latent state which evolves over time using Dynamical Linear Models (DLM). We use as an input a combination of numerical and text features obtained from the patient Electronic Medical Records (EMRs). We process the text from the EMRs to capture different diseases, symptoms and treatments by means of noun phrases and ontologies. We also capture the global context of each text entry using Statistical Topic Models. We fill in the missing values using an Expectation-Maximization based method (EM). Experimental results show that our method outperforms other methods in the literature in terms of AUC, sensitivity and specificity. In addition, we show that the combination of different features (numerical and text) increases the prediction performance of the proposed approach. PMID:26958282

  12. Estimating survival and breeding probability for pond-breeding amphibians: a modified robust design

    USGS Publications Warehouse

    Bailey, L.L.; Kendall, W.L.; Church, D.R.; Wilbur, H.M.

    2004-01-01

    Many studies of pond-breeding amphibians involve sampling individuals during migration to and from breeding habitats. Interpreting population processes and dynamics from these studies is difficult because (1) only a proportion of the population is observable each season, while an unknown proportion remains unobservable (e.g., non-breeding adults) and (2) not all observable animals are captured. Imperfect capture probability can be easily accommodated in capture-recapture models, but temporary transitions between observable and unobservable states, often referred to as temporary emigration, are known to cause problems in both open- and closed-population models. We develop a multistate mark-recapture (MSMR) model, using an open-robust design that permits one entry and one exit from the study area per season. Our method extends previous temporary emigration models (MSMR with an unobservable state) in two ways. First, we relax the assumption of demographic closure (no mortality) between consecutive (secondary) samples, allowing estimation of within-pond survival. Also, we add the flexibility to express survival probability of unobservable individuals (e.g., 'non-breeders') as a function of the survival probability of observable animals while in the same terrestrial habitat. This allows for potentially different annual survival probabilities for observable and unobservable animals. We apply our model to a relictual population of eastern tiger salamanders (Ambystoma tigrinum tigrinum). Despite small sample sizes, demographic parameters were estimated with reasonable precision. We tested several a priori biological hypotheses and found evidence for seasonal differences in pond survival. Our methods could be applied to a variety of pond-breeding species and other taxa where individuals are captured entering or exiting a common area (e.g., spawning or roosting area, hibernacula).

  13. DROPOUT AND RETENTION RATE METHODOLOGY USED TO ESTIMATE FIRST-STAGE ELEMENTS OF THE TRANSITION PROBABILITY MATRICES FOR DYNAMOD II.

    ERIC Educational Resources Information Center

    HUDMAN, JOHN T.; ZABROWSKI, EDWARD K.

    EQUATIONS FOR SYSTEM INTAKE, DROPOUT, AND RETENTION RATE CALCULATIONS ARE DERIVED FOR ELEMENTARY SCHOOLS, SECONDARY SCHOOLS, AND COLLEGES. THE PROCEDURES DESCRIBED WERE FOLLOWED IN DEVELOPING ESTIMATES OF SELECTED ELEMENTS OF THE TRANSITION PROBABILITY MATRICES USED IN DYNAMOD II. THE PROBABILITY MATRIX CELLS ESTIMATED BY THE PROCEDURES DESCRIBED…

  14. Probability density function estimation for characterizing hourly variability of ionospheric total electron content

    NASA Astrophysics Data System (ADS)

    Turel, N.; Arikan, F.

    2010-12-01

    Ionospheric channel characterization is an important task for both HF and satellite communications. The inherent space-time variability of the ionosphere can be observed through total electron content (TEC), which can be obtained using GPS receivers. In this study, within-the-hour variability of the ionosphere over high-latitude, midlatitude, and equatorial regions is investigated by estimating a parametric model for the probability density function (PDF) of GPS-TEC. The PDF is a useful tool in defining the statistical structure of communication channels. For this study, data spanning half a solar cycle were collected from 18 GPS stations. Histograms of TEC, corresponding to experimental probability distributions, are used to estimate the parameters of five different PDFs. The best-fitting distribution to the TEC data is obtained using the maximum likelihood ratio of the estimated parametric distributions. It is observed that all of the midlatitude stations and most of the high-latitude and equatorial stations are lognormally distributed. A representative distribution can easily be obtained for stations located at midlatitudes using solar zenith normalization. Stations located at very high latitudes or in equatorial regions cannot be described using only one PDF. Due to significant seasonal variability, different distributions are required for summer and winter.
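
    A hedged sketch of the model-selection step this record describes: fit several candidate probability density functions to a TEC sample and keep the one with the highest maximized log-likelihood. The candidate set and the synthetic sample are assumptions; the study's five candidates and its GPS-TEC data are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Synthetic hourly TEC sample (TECU); lognormal is used as the "truth" here because
# the record reports lognormal behaviour at most stations.
tec = stats.lognorm.rvs(s=0.4, scale=20.0, size=1000, random_state=rng)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "normal": stats.norm,
    "rayleigh": stats.rayleigh,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(tec)                       # maximum-likelihood fit
    results[name] = np.sum(dist.logpdf(tec, *params))

best = max(results, key=results.get)
for name, ll in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} log-likelihood = {ll:10.1f}")
print("best-fitting candidate:", best)
```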

  15. Estimating superpopulation size and annual probability of breeding for pond-breeding salamanders

    USGS Publications Warehouse

    Kinkead, K.E.; Otis, D.L.

    2007-01-01

    It has long been accepted that amphibians can skip breeding in any given year, and environmental conditions act as a cue for breeding. In this paper, we quantify temporary emigration or nonbreeding probability for mole and spotted salamanders (Ambystoma talpoideum and A. maculatum). We estimated that 70% of mole salamanders may skip breeding during an average rainfall year and 90% may skip during a drought year. Spotted salamanders may be more likely to breed, with only 17% avoiding the breeding pond during an average rainfall year. We illustrate how superpopulations can be estimated using temporary emigration probability estimates. The superpopulation is the total number of salamanders associated with a given breeding pond. Although most salamanders stay within a certain distance of a breeding pond for the majority of their life spans, it is difficult to determine true overall population sizes for a given site if animals are only captured during a brief time frame each year with some animals unavailable for capture at any time during a given year. ?? 2007 by The Herpetologists' League, Inc.

  16. A logistic regression equation for estimating the probability of a stream in Vermont having intermittent flow

    USGS Publications Warehouse

    Olson, Scott A.; Brouillette, Michael C.

    2006-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.

  17. Estimating the probability of emigration from individual-specific data: the case of Italy in the early twentieth century.

    PubMed

    Mondschean, T H

    1986-01-01

    "This article develops a method for estimating the probability of emigration conditional on the observed characteristics of individuals. In addition, it is shown how to calculate the mean, standard error, and confidence intervals of the conditional probability of emigration given a random sample of emigrants. The technique is illustrated by providing statistically consistent estimates of the probability an Italian would emigrate to the United States in 1901 and 1911, conditional on personal attributes." PMID:12340643

  18. Estimation of Genotype Distributions and Posterior Genotype Probabilities for β-Mannosidosis in Salers Cattle

    PubMed Central

    Taylor, J. F.; Abbitt, B.; Walter, J. P.; Davis, S. K.; Jaques, J. T.; Ochoa, R. F.

    1993-01-01

    β-Mannosidosis is a lethal lysosomal storage disease inherited as an autosomal recessive in man, cattle and goats. Laboratory assay data of plasma β-mannosidase activity represent a mixture of homozygous normal and carrier genotype distributions in a proportion determined by genotype frequency. A maximum likelihood approach employing data transformations for each genotype distribution and assuming a diallelic model of inheritance is described. Estimates of the transformation and genotype distribution parameters, gene frequency, genotype fitness and carrier probability were obtained simultaneously from a sample of 2,812 observations on U.S. purebred Salers cattle with enzyme activity, age, gender, month of pregnancy, month of testing, and parents identified. Transformations to normality were not required, estimated gene and carrier genotype frequencies of 0.074 and 0.148 were high, and the estimated relative fitness of heterozygotes was 1.36. The apparent overdominance in fitness may be due to a nonrandom sampling of progeny genotypes within families. The mean of plasma enzyme activity was higher for males than females, higher in winter months, lower in summer months and decreased with increased age. Estimates of carrier probabilities indicate that the test is most effective when animals are sampled as calves, although effectiveness of the plasma assay was less for males than females. Test effectiveness was enhanced through averaging repeated assays of enzyme activity on each animal. Our approach contributes to medical diagnostics in several ways. Rather than assume underlying normality for the distributions comprising the mixture, we estimate transformations to normality for each genotype distribution simultaneously with all other model parameters. This process also excludes potential biases due to data preadjustment for systematic effects. We also provide a method for partitioning phenotypic variation within each genotypic distribution which allows an assessment

  19. Application of Monte Carlo simulation to estimate probabilities for the best and health conservative estimates of receptor well concentrations

    SciTech Connect

    Not Available

    1989-08-01

    This report presents a Monte Carlo simulation analysis of the fate and transport of contaminants in groundwater at the Lawrence Livermore National Laboratory Livermore Site. The results of this analysis are the cumulative distribution functions (CDFs) of the maximum 70-year average and peak concentrations of the four chemicals of concern (TCE, PCE, chloroform, and other VOCs) at the near-field and three far-field wells. These concentration CDFs can be used to estimate the probability of occurrence of the concentrations previously predicted using the deterministic model, and to conduct an enhanced exposure and risk assessment for the Remedial Investigation and Feasibility Study (RI/FS). This report provides a description of the deterministic fate and transport model (PLUME), which was linked to the Monte Carlo Shell to estimate the CDF of the receptor-well chemical concentrations. 6 refs., 21 figs., 12 tabs.
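
    The workflow described above can be sketched generically: sample uncertain inputs, run a deterministic transport model for each sample, and assemble the empirical CDF of the receptor-well concentration. The toy transport_model below is a stand-in under assumed parameter ranges, not the PLUME model.

import numpy as np

rng = np.random.default_rng(42)

def transport_model(source_conc, velocity, decay_rate, distance=500.0):
    """Toy 1-D advection/first-order-decay surrogate for a deterministic transport model."""
    travel_time = distance / velocity
    return source_conc * np.exp(-decay_rate * travel_time)

n_sim = 10_000
source_conc = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=n_sim)  # ug/L
velocity = rng.uniform(10.0, 100.0, size=n_sim)                         # m/yr
decay_rate = rng.uniform(0.01, 0.1, size=n_sim)                         # 1/yr

conc = transport_model(source_conc, velocity, decay_rate)

# Empirical CDF: P(receptor-well concentration <= c)
conc_sorted = np.sort(conc)
cdf = np.arange(1, n_sim + 1) / n_sim
print("P(concentration <= 5 ug/L) ~", np.interp(5.0, conc_sorted, cdf))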

  20. Estimating the probability of demonstrating vaccine efficacy in the declining Ebola epidemic: a Bayesian modelling approach

    PubMed Central

    Funk, Sebastian; Watson, Conall H; Kucharski, Adam J; Edmunds, W John

    2015-01-01

    Objectives We investigate the chance of demonstrating Ebola vaccine efficacy in an individually randomised controlled trial implemented in the declining epidemic of Forécariah prefecture, Guinea. Methods We extend a previously published dynamic transmission model to include a simulated individually randomised controlled trial of 100 000 participants. Using Bayesian methods, we fit the model to Ebola case incidence before a trial and forecast the expected dynamics until disease elimination. We simulate trials under these forecasts and test potential start dates and rollout schemes to assess power to detect efficacy, and bias in vaccine efficacy estimates that may be introduced. Results Under realistic assumptions, we found that a trial of 100 000 participants starting after 1 August had less than 5% chance of having enough cases to detect vaccine efficacy. In particular, gradual recruitment precludes detection of vaccine efficacy because the epidemic is likely to go extinct before enough participants are recruited. Exclusion of early cases in either arm of the trial creates bias in vaccine efficacy estimates. Conclusions The very low Ebola virus disease incidence in Forécariah prefecture means any individually randomised controlled trial implemented there is unlikely to be successful, unless there is a substantial increase in the number of cases. PMID:26671958

  1. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the a posteriori probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies, in terms of Theta. Techniques for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration are developed.

  2. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
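
    A small sketch of the MoLC idea for the plain gamma distribution, one of the simpler families covered by the approach above: the first two logarithmic cumulants of Gamma(shape k, scale s) are psi(k) + ln s and psi1(k), so the parameters follow from inverting the trigamma function. The data are synthetic and the choice of family is an assumption for illustration.

import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.5, scale=1.8, size=20_000)   # synthetic positive "amplitude" data

k1 = np.mean(np.log(x))   # first sample log-cumulant
k2 = np.var(np.log(x))    # second sample log-cumulant

# psi1(k) = k2 has a unique root because the trigamma function is strictly decreasing
shape_hat = brentq(lambda k: polygamma(1, k) - k2, 1e-3, 1e3)
scale_hat = np.exp(k1 - digamma(shape_hat))

print(f"MoLC estimates: shape ~ {shape_hat:.3f}, scale ~ {scale_hat:.3f}")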

  3. Estimated Probability of a Cervical Spine Injury During an ISS Mission

    NASA Technical Reports Server (NTRS)

    Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.

    2013-01-01

    Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1 is moving through a primary or standard translation path transferring large volume equipment impacting stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a

  4. The estimation of neurotransmitter release probability in feedforward neuronal network based on adaptive synchronization

    NASA Astrophysics Data System (ADS)

    Xue, Ming; Wang, Jiang; Jia, Chenhui; Yu, Haitao; Deng, Bin; Wei, Xile; Che, Yanqiu

    2013-03-01

    In this paper, we propose a new approach to estimate the unknown parameters and topology of a neuronal network based on an adaptive synchronization control scheme. A virtual neuronal network is constructed as an observer to track the membrane potential of the corresponding neurons in the original network. When they achieve synchronization, the unknown parameters and topology of the original network are obtained. The method is applied to estimate the real-time status of the connections in the feedforward network, and the neurotransmitter release probability of unreliable synapses is obtained by statistical computation. Numerical simulations are also performed to demonstrate the effectiveness of the proposed adaptive controller. The obtained results may have important implications for system identification in neural science.

  5. ANNz2 - Photometric redshift and probability density function estimation using machine-learning

    NASA Astrophysics Data System (ADS)

    Sadeh, Iftach

    2014-05-01

    Large photometric galaxy surveys allow the study of questions at the forefront of science, such as the nature of dark energy. The success of such surveys depends on the ability to measure the photometric redshifts of objects (photo-zs), based on limited spectral data. A new major version of the public photo-z estimation software, ANNz, is presented here. The new code incorporates several machine-learning methods, such as artificial neural networks and boosted decision/regression trees, which are all used in concert. The objective of the algorithm is to dynamically optimize the performance of the photo-z estimation, and to properly derive the associated uncertainties. In addition to single-value solutions, the new code also generates full probability density functions in two independent ways.

  6. A method for estimating the probability of lightning causing a methane ignition in an underground mine

    SciTech Connect

    Sacks, H.K.; Novak, T.

    2008-03-15

    During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.

  7. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, Emmanouil

    2016-04-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, an innovative statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results, as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use, accurate, and can be applied to any region and river. Varouchakis, E. A., Giannakis, G. V., Lilli, M. A., Ioannidou, E., Nikolaidis, N. P., and Karatzas, G. P.: Development of a statistical tool for the estimation of riverbank erosion probability, SOIL (EGU), in print, 2016.

  8. Toward 3D-guided prostate biopsy target optimization: an estimation of tumor sampling probabilities

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the ~23% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still yields false negatives. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. We obtained multiparametric MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy. Given an RMS needle delivery error of 3.5 mm for a contemporary fusion biopsy system, P >= 95% for 21 out of 81 tumors when the point of optimal sampling probability was targeted. Therefore, more than one biopsy core must be taken from 74% of the tumors to achieve P >= 95% for a biopsy system with an error of 3.5 mm. Our experiments indicated that the effect of error along the needle axis on the percentage of core involvement (and thus the measured tumor burden) was mitigated by the 18 mm core length.
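
    A hedged back-of-envelope companion to the figures above: if one core samples the tumor with probability p and cores are treated as independent (a simplifying assumption, not a claim from the paper), the number of cores needed for a target combined sampling probability follows from 1 - (1 - p)^n >= target.

import math

def cores_needed(p_single, target=0.95):
    """Smallest n with 1 - (1 - p_single)**n >= target, assuming independent cores."""
    if p_single >= target:
        return 1
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_single))

for p in (0.5, 0.7, 0.9, 0.95):
    print(f"single-core sampling probability {p:.2f}: "
          f"{cores_needed(p)} core(s) for >= 95% combined probability")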

  9. Estimating the probability of an extinction or major outbreak for an environmentally transmitted infectious disease.

    PubMed

    Lahodny, G E; Gautam, R; Ivanek, R

    2015-01-01

    Indirect transmission through the environment, pathogen shedding by infectious hosts, replication of free-living pathogens within the environment, and environmental decontamination are suspected to play important roles in the spread and control of environmentally transmitted infectious diseases. To account for these factors, the classic Susceptible-Infectious-Recovered-Susceptible epidemic model is modified to include a compartment representing the amount of free-living pathogen within the environment. The model accounts for host demography, direct and indirect transmission, replication of free-living pathogens in the environment, and removal of free-living pathogens by natural death or environmental decontamination. Based on the assumptions of the deterministic model, a continuous-time Markov chain model is developed. An estimate for the probability of disease extinction or a major outbreak is obtained by approximating the Markov chain with a multitype branching process. Numerical simulations illustrate important differences between the deterministic and stochastic counterparts, relevant for outbreak prevention, that depend on indirect transmission, pathogen shedding by infectious hosts, replication of free-living pathogens, and environmental decontamination. The probability of a major outbreak is computed for salmonellosis in a herd of dairy cattle as well as cholera in a human population. An explicit expression for the probability of disease extinction or a major outbreak in terms of the model parameters is obtained for systems with no direct transmission or replication of free-living pathogens. PMID:25198247
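
    The sketch below illustrates the branching-process approximation in its simplest, single-type form with Poisson-distributed secondary cases: the extinction probability q is the smallest root of q = exp(R0 (q - 1)), and a major outbreak from i0 introductions has probability roughly 1 - q^i0. The paper's multitype, environment-explicit model reduces to something of this flavor only under strong simplifying assumptions.

import math
from scipy.optimize import brentq

def extinction_probability(R0):
    """Smallest root in [0, 1) of q = exp(R0*(q - 1)) for Poisson offspring; 1 if R0 <= 1."""
    if R0 <= 1.0:
        return 1.0
    return brentq(lambda q: math.exp(R0 * (q - 1.0)) - q, 0.0, 1.0 - 1e-12)

def prob_major_outbreak(R0, i0=1):
    return 1.0 - extinction_probability(R0) ** i0

for R0 in (0.8, 1.5, 2.5):
    print(f"R0 = {R0}: P(major outbreak | 1 introduction) ~ {prob_major_outbreak(R0):.3f}")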

  10. Estimation of (n,f) Cross-Sections by Measuring Reaction Probability Ratios

    SciTech Connect

    Plettner, C; Ai, H; Beausang, C W; Bernstein, L A; Ahle, L; Amro, H; Babilon, M; Burke, J T; Caggiano, J A; Casten, R F; Church, J A; Cooper, J R; Crider, B; Gurdal, G; Heinz, A; McCutchan, E A; Moody, K; Punyon, J A; Qian, J; Ressler, J J; Schiller, A; Williams, E; Younes, W

    2005-04-21

    Neutron-induced reaction cross-sections on unstable nuclei are inherently difficult to measure due to target activity and the low intensity of neutron beams. In an alternative approach, named the 'surrogate' technique, one measures the decay probability of the same compound nucleus produced using a stable beam on a stable target to estimate the neutron-induced reaction cross-section. As an extension of the surrogate method, in this paper the authors introduce a new technique of measuring the fission probabilities of two different compound nuclei as a ratio, which has the advantage of removing most of the systematic uncertainties. This method was benchmarked in this report by measuring the probability of deuteron-induced fission events in coincidence with protons, and forming the ratio P(²³⁶U(d,pf))/P(²³⁸U(d,pf)), which serves as a surrogate for the known cross-section ratio of ²³⁶U(n,f)/²³⁸U(n,f). In addition, the P(²³⁸U(d,d′f))/P(²³⁶U(d,d′f)) ratio as a surrogate for the ²³⁷U(n,f)/²³⁵U(n,f) cross-section ratio was measured for the first time over an unprecedented range of excitation energies.

  11. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2010-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  12. Inconsistent probability estimates of a hypothesis: the role of contrasting support.

    PubMed

    Bonini, Nicolao; Gonzalez, Michel

    2005-01-01

    This paper studies consistency in the judged probability of a target hypothesis in lists of mutually exclusive nonexhaustive hypotheses. Specifically, it controls the role played by the support of displayed competing hypotheses and the relatedness between the target hypothesis and its alternatives. Three experiments are reported. In all experiments, groups of people were presented with a list of mutually exclusive nonexhaustive causes of a person's death. In the first two experiments, they were asked to judge the probability of each cause as that of the person's decease. In the third experiment, people were asked for a frequency estimation task. Target causes were presented in all lists. Several other alternative causes to the target ones differed across the lists. Findings show that the judged probability/frequency of a target cause changes as a function of the support of the displayed competing causes. Specifically, it is higher when its competing displayed causes have low rather than high support. Findings are consistent with the contrastive support hypothesis within the support theory. PMID:15779531

  13. Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage.

  14. Estimates of density, detection probability, and factors influencing detection of burrowing owls in the Mojave Desert

    USGS Publications Warehouse

    Crowe, D.E.; Longshore, K.M.

    2010-01-01

    We estimated relative abundance and density of Western Burrowing Owls (Athene cunicularia hypugaea) at two sites in the Mojave Desert (2003-2004). We made modifications to previously established Burrowing Owl survey techniques for use in desert shrublands and evaluated several factors that might influence the detection of owls. We tested the effectiveness of the call-broadcast technique for surveying this species, the efficiency of this technique at early and late breeding stages, and the effectiveness of various numbers of vocalization intervals during broadcasting sessions. Only 1 (3%) of 31 initial (new) owl responses was detected during passive-listening sessions. We found that surveying early in the nesting season was more likely to produce new owl detections compared to surveying later in the nesting season. New owls detected during each of the three vocalization intervals (each consisting of 30 sec of vocalizations followed by 30 sec of silence) of our broadcasting session were similar (37%, 40%, and 23%; n = 30). We used a combination of detection trials (sighting probability) and the double-observer method to estimate the components of detection probability, i.e., availability and perception. Availability for all sites and years, as determined by detection trials, ranged from 46.1 to 58.2%. Relative abundance, measured as frequency of occurrence and defined as the proportion of surveys with at least one owl, ranged from 19.2 to 32.0% for both sites and years. Density at our eastern Mojave Desert site was estimated at 0.09 ± 0.01 (SE) owl territories/km2 and 0.16 ± 0.02 (SE) owl territories/km2 during 2003 and 2004, respectively. In our southern Mojave Desert site, density estimates were 0.09 ± 0.02 (SE) owl territories/km2 and 0.08 ± 0.02 (SE) owl territories/km2 during 2004 and 2005, respectively. © 2010 The Raptor Research Foundation, Inc.

  15. Fast method for the estimation of impact probability of near-Earth objects

    NASA Astrophysics Data System (ADS)

    Vavilov, D.; Medvedev, Y.

    2014-07-01

    We propose a method to estimate the probability of collision of a celestial body with the Earth (or another major planet) at a given time moment t. Let there be a set of observations of a small body. At an initial time moment T_0, a nominal orbit is defined by the least squares method. In our method, a unique coordinate system is used. It is supposed that errors of observations are related to errors of coordinates and velocities linearly and that the distribution law of observation errors is normal. The unique frame is defined as follows. First of all, we fix an osculating ellipse of the body's orbit at the time moment t. The mean anomaly M in this osculating ellipse is one coordinate of the introduced system. The spatial coordinate ξ is perpendicular to the plane which contains the fixed ellipse. η is a spatial coordinate too, and the axes satisfy the right-hand rule. The origin of ξ and η corresponds to the given M point on the ellipse. The velocity components are the corresponding time derivatives of ξ, η, and M. To calculate the probability of collision, we numerically integrate the equations of the asteroid's motion taking into account perturbations and calculate a normal matrix N. The probability is determined as follows: P = (|det N|^(1/2) / (2π)^3) ∫_Ω exp(-(1/2) x^T N x) dx, where x denotes a six-dimensional vector of coordinates and velocities, Ω is the region which is occupied by the Earth, and the superscript T denotes the matrix transpose operation. To take into account the gravitational attraction of the Earth, the radius of the Earth is increased by a factor of √(1 + v_s²/v_rel²), where v_s is the escape velocity and v_rel is the small body's velocity relative to the Earth. The 6-dimensional integral is analytically integrated over the velocity components on (-∞, +∞). After that we have the 3×3 matrix N_1, and the 6-dimensional integral becomes a 3-dimensional integral. To calculate it quickly we do the following. We introduce
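
    A Monte Carlo analogue of the integral above, under the assumption of Gaussian state uncertainty with normal matrix N (covariance equal to the inverse of N): draw samples from that Gaussian and count the fraction falling inside the gravitationally enlarged Earth. The numbers and the spherical target geometry are illustrative assumptions, not the paper's curvilinear construction.

import numpy as np

rng = np.random.default_rng(7)

# Toy normal matrix N (inverse covariance) of the 3-D position uncertainty, in km^-2
N = np.diag([1.0 / 20000.0**2, 1.0 / 15000.0**2, 1.0 / 30000.0**2])
cov = np.linalg.inv(N)

earth_center = np.array([1000.0, -500.0, 2000.0])      # Earth's position relative to the nominal orbit (km)
R_earth, v_esc, v_rel = 6371.0, 11.2, 15.0             # km, km/s, km/s
R_eff = R_earth * np.sqrt(1.0 + (v_esc / v_rel) ** 2)  # gravitational focusing enlargement

samples = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=1_000_000)
inside = np.linalg.norm(samples - earth_center, axis=1) <= R_eff
print("estimated impact probability ~", inside.mean())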

  16. Probability Density Estimation Using Isocontours and Isosurfaces: Application to Information-Theoretic Image Registration

    PubMed Central

    Rajwade, Ajit; Banerjee, Arunava; Rangarajan, Anand

    2010-01-01

    We present a new geometric approach for determining the probability density of the intensity values in an image. We drop the notion of an image as a set of discrete pixels and assume a piecewise-continuous representation. The probability density can then be regarded as being proportional to the area between two nearby isocontours of the image surface. Our paper extends this idea to joint densities of image pairs. We demonstrate the application of our method to affine registration between two or more images using information-theoretic measures such as mutual information. We show cases where our method outperforms existing methods such as simple histograms, histograms with partial volume interpolation, Parzen windows, etc., under fine intensity quantization for affine image registration under significant image noise. Furthermore, we demonstrate results on simultaneous registration of multiple images, as well as for pairs of volume data sets, and show some theoretical properties of our density estimator. Our approach requires the selection of only an image interpolant. The method neither requires any kind of kernel functions (as in Parzen windows), which are unrelated to the structure of the image in itself, nor does it rely on any form of sampling for density estimation. PMID:19147876

  17. Probability estimation with machine learning methods for dichotomous and multicategory outcome: applications.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Diener, Hans-Christian; Holste, Theresa; Weimar, Christian; König, Inke R; Ziegler, Andreas

    2014-07-01

    Machine learning methods are applied to three different large datasets, all dealing with probability estimation problems for dichotomous or multicategory data. Specifically, we investigate k-nearest neighbors, bagged nearest neighbors, random forests for probability estimation trees, and support vector machines with the kernels of Bessel, linear, Laplacian, and radial basis type. Comparisons are made with logistic regression. The dataset from the German Stroke Study Collaboration with dichotomous and three-category outcome variables allows, in particular, for temporal and external validation. The other two datasets are freely available from the UCI learning repository and provide dichotomous outcome variables. One of them, the Cleveland Clinic Foundation Heart Disease dataset, uses data from one clinic for training and from three clinics for external validation, while the other, the thyroid disease dataset, allows for temporal validation by separating data into training and test data by date of recruitment into study. For dichotomous outcome variables, we use receiver operating characteristics, areas under the curve values with bootstrapped 95% confidence intervals, and Hosmer-Lemeshow-type figures as comparison criteria. For dichotomous and multicategory outcomes, we calculated bootstrap Brier scores with 95% confidence intervals and also compared them through bootstrapping. In a supplement, we provide R code for performing the analyses and for random forest analyses in Random Jungle, version 2.1.0. The learning machines show promising performance over all constructed models. They are simple to apply and serve as an alternative approach to logistic or multinomial logistic regression analysis. PMID:24989843

  18. Estimation of drought transition probabilities in Sicily making use of exogenous variables

    NASA Astrophysics Data System (ADS)

    Bonaccorso, Brunella; di Mauro, Giuseppe; Cancelliere, Antonino; Rossi, Giuseppe

    2010-05-01

    Drought monitoring and forecasting play a very important role in effective drought management. Timely monitoring of drought features and/or forecasting of an incoming drought make possible an effective mitigation of its impacts, more so than in the case of other natural disasters (e.g. floods, earthquakes, hurricanes, etc.). An accurate selection of indices, able to monitor the main characteristics of droughts, is essential to help decision makers implement appropriate preparedness and mitigation measures. Among the several indices proposed for drought monitoring, the Standardized Precipitation Index (SPI) has found widespread use to monitor dry and wet periods of precipitation aggregated at different time scales. Recently, some efforts have been made to analyze the role of SPI for drought forecasting, as well as to estimate transition probabilities between drought classes. In the present work, a model able to estimate transition probabilities from a current SPI drought class, or from a current SPI value, to future classes corresponding to droughts of different severities is presented and extended in order to include the information provided by an exogenous variable, such as a large-scale climatic index like the North Atlantic Oscillation (NAO) index. The model has been preliminarily applied and tested with reference to SPI series computed on average areal precipitation over the island of Sicily, Italy, making use of NAO as the exogenous variable. Results seem to indicate that winter drought transition probabilities in Sicily are generally affected by the NAO index. Furthermore, the statistical significance of this influence has been tested by means of a Monte Carlo analysis, which indicates that the effect of NAO on drought transitions in Sicily should be considered significant.
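
    A sketch of how such transition probabilities can be estimated by counting class-to-class transitions, optionally stratified by the sign of an exogenous index such as NAO. The SPI class boundaries and the synthetic series below are assumptions for illustration.

import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
spi = rng.normal(size=600)   # placeholder monthly SPI series
nao = rng.normal(size=600)   # placeholder monthly NAO index

bins = [-np.inf, -2.0, -1.5, -1.0, 1.0, np.inf]
labels = ["extreme drought", "severe drought", "moderate drought", "near normal", "wet"]
classes = np.asarray(pd.cut(spi, bins=bins, labels=labels))

def transition_matrix(cls, mask=None):
    """Row-normalized counts of transitions cls[t] -> cls[t+1], optionally restricted to months t where mask is True."""
    idx = np.arange(len(cls) - 1)
    if mask is not None:
        idx = idx[mask[:-1]]
    counts = pd.crosstab(cls[idx], cls[idx + 1])
    return counts.div(counts.sum(axis=1), axis=0)

print("all months:\n", transition_matrix(classes))
print("\nNAO-negative months:\n", transition_matrix(classes, mask=(nao < 0)))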

  19. Estimating Probabilities of Peptide Database Identifications to LC-FTICR-MS Observations

    SciTech Connect

    Anderson, Kevin K.; Monroe, Matthew E.; Daly, Don S.

    2006-02-24

    One of the grand challenges in the post-genomic era is proteomics, the characterization of the proteins expressed in a cell under specific conditions. A promising technology for high-throughput proteomics is mass spectrometry, specifically liquid chromatography coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR-MS). The accuracy and certainty of the determinations of peptide identities and abundances provided by LC-FTICR-MS are an important and necessary component of systems biology research. Methods: After a tryptically digested protein mixture is analyzed by LC-FTICR-MS, the observed masses and normalized elution times of the detected features are statistically matched to the theoretical masses and elution times of known peptides listed in a large database. The probability of matching is estimated for each peptide in the reference database using statistical classification methods assuming bivariate Gaussian probability distributions on the uncertainties in the masses and the normalized elution times. A database of 69,220 features from 32 LC-FTICR-MS analyses of a tryptically digested bovine serum albumin (BSA) sample was matched to a database populated with 97% false positive peptides. The percentage of high confidence identifications was found to be consistent with other database search procedures. BSA database peptides were identified with high confidence on average in 14.1 of the 32 analyses. False positives were identified on average in just 2.7 analyses. Using a priori probabilities that contrast peptides from expected and unexpected proteins was shown to perform better in identifying target peptides than using equally likely a priori probabilities. This is because a large percentage of the target peptides were similar to unexpected peptides which were included to be false positives. The use of triplicate analyses with a ''2 out of 3'' reporting rule was shown to have excellent rejection of false positives.

  20. Fast and accurate probability density estimation in large high dimensional astronomical datasets

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2015-01-01

    Astronomical surveys will generate measurements of hundreds of attributes (e.g. color, size, shape) on hundreds of millions of sources. Analyzing these large, high dimensional data sets will require efficient algorithms for data analysis. An example of this is probability density estimation, which is at the heart of many classification problems such as the separation of stars and quasars based on their colors. Popular density estimation techniques use binning or kernel density estimation. Kernel density estimation has a small memory footprint but often requires large computational resources. Binning has small computational requirements, but binning is usually implemented with multi-dimensional arrays, which leads to memory requirements that scale exponentially with the number of dimensions. Hence neither technique scales well to large data sets in high dimensions. We present an alternative approach of binning implemented with hash tables (BASH tables). This approach uses the sparseness of data in the high dimensional space to ensure that the memory requirements are small. However, hashing requires some extra computation, so a priori it is not clear if the reduction in memory requirements will lead to increased computational requirements. Through an implementation of BASH tables in C++ we show that the additional computational requirements of hashing are negligible. Hence this approach has small memory and computational requirements. We apply our density estimation technique to photometric selection of quasars using non-parametric Bayesian classification and show that the accuracy of the classification is the same as the accuracy of earlier approaches. Since the BASH table approach is one to three orders of magnitude faster than the earlier approaches, it may be useful in various other applications of density estimation in astrostatistics.
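
    The hash-table binning idea can be sketched in a few lines (here in Python rather than the C++ used by the authors): bins are keyed by tuples of integer bin indices, so memory grows with the number of occupied bins rather than with the bin count raised to the power of the dimension. The data and bin width are assumptions for illustration.

from collections import defaultdict
import numpy as np

def hashed_histogram(data, bin_width):
    """Count points per occupied bin; keys are tuples of integer bin indices."""
    counts = defaultdict(int)
    indices = np.floor(data / bin_width).astype(np.int64)
    for key in map(tuple, indices):
        counts[key] += 1
    return counts

def density_at(point, counts, bin_width, n_total, dim):
    """Histogram density estimate at a query point."""
    key = tuple(np.floor(np.asarray(point) / bin_width).astype(np.int64))
    return counts.get(key, 0) / (n_total * bin_width ** dim)

rng = np.random.default_rng(5)
data = rng.normal(size=(100_000, 6))             # 6-dimensional sample
counts = hashed_histogram(data, bin_width=0.5)
print("occupied bins:", len(counts))
print("density estimate at the origin ~", density_at(np.zeros(6), counts, 0.5, len(data), 6))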

  1. Accretion of Fine Particles: Sticking Probability Estimated by Optical Sizing of Fractal Aggregates

    NASA Astrophysics Data System (ADS)

    Sugiura, N.; Higuchi, Y.

    1993-07-01

    Sticking probability of fine particles is an important parameter that determines (1) the settling of fine particles to the equatorial plane of the solar nebula and hence the formation of planetesimals, and (2) the thermal structure of the nebula, which is dependent on the particle size through opacity. It is generally agreed that the sticking probability is 1 for submicrometer particles, but at sizes larger than 1 micrometer, there exist almost no data on the sticking probability. A recent study [1] showed that aggregates (with radius from 0.2 to 2 mm) did not stick when collided at a speed of 0.15 to 4 m/s. Therefore, somewhere between 1 micrometer and 200 micrometers, sticking probabilities of fine particles change from nearly 1 to nearly 0. We have been studying [2,3] sticking probabilities of dust aggregates in this size range using an optical sizing method. The optical sizing method has been well established for spherical particles. This method utilizes the fact that the smaller the size, the larger the angle of the scattered light. For spheres with various sizes, the size distribution is determined by solving Y(i) = M(i,j)X(j), where Y(i) is the scattered light intensity at angle i, X(j) is the number density of spheres with size j, and M(i,j) is the scattering matrix, which is determined by Mie theory. Dust aggregates, which we expect to be present in the early solar nebula, are not solid spheres, but probably have a porous fractal structure. For such aggregates the scattering matrix M(i,j) must be determined by taking account of all the interaction among constituent particles (discrete dipole approximation). Such calculation is possible only for very small aggregates, and for larger aggregates we estimate the scattering matrix by extrapolation, assuming that the fractal nature of the aggregates allows such extrapolation. In the experiments using magnesium oxide fine particles floating in a chamber at ambient pressure, the size distribution (determined by
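
    The inversion step Y(i) = M(i,j)X(j) amounts to solving a linear system for a non-negative size distribution. The sketch below does this with non-negative least squares on a made-up scattering matrix; a real application would build M from Mie theory or the discrete dipole approximation as described above.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(11)
n_angles, n_sizes = 30, 10

# Placeholder scattering matrix: smaller particles scatter relatively more light to larger angles
angles = np.linspace(1.0, 30.0, n_angles)[:, None]   # scattering angles (degrees)
sizes = np.linspace(1.0, 10.0, n_sizes)[None, :]     # size bins (arbitrary units)
M = np.exp(-(angles * sizes) / 60.0)

x_true = rng.uniform(0.0, 1.0, n_sizes)              # "true" number densities per size bin
y = M @ x_true + rng.normal(0.0, 0.01, n_angles)     # noisy multi-angle intensities

x_hat, _ = nnls(M, y)                                # non-negative least-squares inversion
print("recovered size distribution:", np.round(x_hat, 3))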

  2. SAR amplitude probability density function estimation based on a generalized Gaussian model.

    PubMed

    Moser, Gabriele; Zerubia, Josiane; Serpico, Sebastiano B

    2006-06-01

    In the context of remotely sensed data analysis, an important problem is the development of accurate models for the statistics of the pixel intensities. Focusing on synthetic aperture radar (SAR) data, this modeling process turns out to be a crucial task, for instance, for classification or for denoising purposes. In this paper, an innovative parametric estimation methodology for SAR amplitude data is proposed that adopts a generalized Gaussian (GG) model for the complex SAR backscattered signal. A closed-form expression for the corresponding amplitude probability density function (PDF) is derived and a specific parameter estimation algorithm is developed in order to deal with the proposed model. Specifically, the recently proposed "method-of-log-cumulants" (MoLC) is applied, which stems from the adoption of the Mellin transform (instead of the usual Fourier transform) in the computation of characteristic functions and from the corresponding generalization of the concepts of moment and cumulant. For the developed GG-based amplitude model, the resulting MoLC estimates turn out to be numerically feasible and are also analytically proved to be consistent. The proposed parametric approach was validated by using several real ERS-1, XSAR, E-SAR, and NASA/JPL airborne SAR images, and the experimental results prove that the method models the amplitude PDF better than several previously proposed parametric models for backscattering phenomena. PMID:16764268

  3. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  4. Comparison of disjunctive kriging to generalized probability kriging in application to the estimation of simulated and real data

    SciTech Connect

    Carr, J.R. (Dept. of Geological Sciences); Mao, Nai-hsien

    1992-01-01

    Disjunctive kriging has been compared previously to multigaussian kriging and indicator cokriging for estimation of cumulative distribution functions; it has yet to be compared extensively to probability kriging. Herein, disjunctive kriging and generalized probability kriging are applied to one real and one simulated data set and compared for estimation of the cumulative distribution functions. Generalized probability kriging is an extension, based on generalized cokriging theory, of simple probability kriging for the estimation of the indicator and uniform transforms at each cutoff, Z_k. Disjunctive kriging and generalized probability kriging give similar results for the simulated data, which have a normal distribution, but differ considerably for the real data set, which has a non-normal distribution.

  5. Estimating species occurrence, abundance, and detection probability using zero-inflated distributions

    USGS Publications Warehouse

    Wenger, S.J.; Freeman, Mary C.

    2008-01-01

    Researchers have developed methods to account for imperfect detection of species with either occupancy (presence-absence) or count data using replicated sampling. We show how these approaches can be combined to simultaneously estimate occurrence, abundance, and detection probability by specifying a zero-inflated distribution for abundance. This approach may be particularly appropriate when patterns of occurrence and abundance arise from distinct processes operating at differing spatial or temporal scales. We apply the model to two data sets: (1) previously published data for a species of duck, Anas platyrhynchos, and (2) data for a stream fish species, Etheostoma scotti. We show that in these cases, an incomplete-detection zero-inflated modeling approach yields a superior fit to the data than other models. We propose that zero-inflated abundance models accounting for incomplete detection be considered when replicate count data are available.
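
    A generic sketch of the combined occurrence/abundance/detection idea: replicated counts at each site are modeled with a zero-inflated Poisson abundance and binomial detection, and the latent abundance is marginalized up to a cap. This is an illustration of the model class under assumed toy data, not the authors' exact parameterization.

import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

def neg_log_lik(params, site_counts, n_max=100):
    psi = 1.0 / (1.0 + np.exp(-params[0]))   # probability the site is occupied
    lam = np.exp(params[1])                  # mean abundance given occupancy
    p = 1.0 / (1.0 + np.exp(-params[2]))     # per-visit detection probability
    n_grid = np.arange(n_max + 1)
    prior_n = psi * poisson.pmf(n_grid, lam)
    prior_n[0] += 1.0 - psi                  # zero inflation: extra mass at N = 0
    ll = 0.0
    for y in site_counts:                    # y = repeated counts at one site
        lik_n = np.prod(binom.pmf(np.asarray(y)[:, None], n_grid[None, :], p), axis=0)
        ll += np.log(np.sum(prior_n * lik_n))
    return -ll

site_counts = [(0, 0, 0), (2, 1, 3), (0, 1, 0), (5, 4, 6)]   # toy data: 3 visits at 4 sites
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(site_counts,), method="Nelder-Mead")
psi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
lam_hat, p_hat = np.exp(fit.x[1]), 1.0 / (1.0 + np.exp(-fit.x[2]))
print(f"occupancy ~ {psi_hat:.2f}, mean abundance ~ {lam_hat:.2f}, detection ~ {p_hat:.2f}")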

  6. EROS --- automated software system for ephemeris calculation and estimation of probability domain (Abstract)

    NASA Astrophysics Data System (ADS)

    Skripnichenko, P.; Galushina, T.; Loginova, M.

    2015-08-01

    This work is devoted to the description of the software EROS (Ephemeris Research and Observation Services), which is being developed jointly by the astronomy department of Ural Federal University and Tomsk State University. The software provides ephemeris support for positional observations. Its most interesting feature is the automation of the entire preparation process for observations, from determining the duration of the night to calculating ephemerides and forming an observation schedule. The accuracy of an ephemeris depends mostly on the precision of the initial data, which is set by the errors of the observations used to determine the orbital elements. If an object has only a small number of observations spread over a short arc of its orbit, it is necessary to compute not only the nominal orbit but also the probability domain. In this paper, a review ephemeris is understood to be a region on the celestial sphere calculated from the probability domain. EROS provides the functionality required to estimate such review ephemerides. This work contains a description of the software system and results of its use.

  7. Estimating the ground-state probability of a quantum simulation with product-state measurements

    NASA Astrophysics Data System (ADS)

    Yoshimura, Bryce; Freericks, James

    2015-10-01

    One of the goals in quantum simulation is to adiabatically generate the ground state of a complicated Hamiltonian by starting with the ground state of a simple Hamiltonian and slowly evolving the system to the complicated one. If the evolution is adiabatic and the initial and final ground states are connected due to having the same symmetry, then the simulation will be successful. But in most experiments, adiabatic simulation is not possible because it would take too long, and the system has some level of diabatic excitation. In this work, we quantify the extent of the diabatic excitation even if we do not know a priori what the complicated ground state is. Since many quantum simulator platforms, like trapped ions, can measure the probabilities to be in a product state, we describe techniques that can employ these simple measurements to estimate the probability of being in the ground state of the system after the diabatic evolution. These techniques do not require one to know any properties about the Hamiltonian itself, nor to calculate its eigenstate properties. All the information is derived by analyzing the product-state measurements as functions of time.

  8. Estimating Landholders’ Probability of Participating in a Stewardship Program, and the Implications for Spatial Conservation Priorities

    PubMed Central

    Adams, Vanessa M.; Pressey, Robert L.; Stoeckl, Natalie

    2014-01-01

    The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements - conservation covenants and management agreements - based on payment level and proportion of properties required to be managed. We then spatially predicted landholders’ probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation. PMID:24892520

  9. Estimating landholders' probability of participating in a stewardship program, and the implications for spatial conservation priorities.

    PubMed

    Adams, Vanessa M; Pressey, Robert L; Stoeckl, Natalie

    2014-01-01

    The need to integrate social and economic factors into conservation planning has become a focus of academic discussions and has important practical implications for the implementation of conservation areas, both private and public. We conducted a survey in the Daly Catchment, Northern Territory, to inform the design and implementation of a stewardship payment program. We used a choice model to estimate the likely level of participation in two legal arrangements--conservation covenants and management agreements--based on payment level and proportion of properties required to be managed. We then spatially predicted landholders' probability of participating at the resolution of individual properties and incorporated these predictions into conservation planning software to examine the potential for the stewardship program to meet conservation objectives. We found that the properties that were least costly, per unit area, to manage were also the least likely to participate. This highlights a tension between planning for a cost-effective program and planning for a program that targets properties with the highest probability of participation. PMID:24892520

  10. Detection probabilities and site occupancy estimates for amphibians at Okefenokee National Wildlife Refuge

    USGS Publications Warehouse

    Smith, L.L.; Barichivich, W.J.; Staiger, J.S.; Smith, Kimberly G.; Dodd, C.K., Jr.

    2006-01-01

    We conducted an amphibian inventory at Okefenokee National Wildlife Refuge from August 2000 to June 2002 as part of the U.S. Department of the Interior's national Amphibian Research and Monitoring Initiative. Nineteen species of amphibians (15 anurans and 4 caudates) were documented within the Refuge, including one protected species, the Gopher Frog Rana capito. We also collected 1 y of monitoring data for amphibian populations and incorporated the results into the inventory. Detection probabilities and site occupancy estimates for four species, the Pinewoods Treefrog (Hyla femoralis), Pig Frog (Rana grylio), Southern Leopard Frog (R. sphenocephala) and Carpenter Frog (R. virgatipes) are presented here. Detection probabilities observed in this study indicate that spring and summer surveys offer the best opportunity to detect these species in the Refuge. Results of the inventory suggest that substantial changes may have occurred in the amphibian fauna within and adjacent to the swamp. However, monitoring the amphibian community of Okefenokee Swamp will prove difficult because of the logistical challenges associated with a rigorous statistical assessment of status and trends.
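
    Detection probability and site occupancy of the kind reported above are typically estimated jointly from repeat-survey detection histories. The sketch below writes down the standard single-season occupancy likelihood and maximizes it for toy detection histories; the data and the constant-detection, constant-occupancy structure are assumptions for illustration.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, histories):
    psi = 1.0 / (1.0 + np.exp(-params[0]))   # site occupancy probability
    p = 1.0 / (1.0 + np.exp(-params[1]))     # per-survey detection probability
    ll = 0.0
    for h in histories:
        h = np.asarray(h)
        pr_history = np.prod(p**h * (1.0 - p)**(1 - h))        # P(history | occupied)
        site_lik = psi * pr_history + (1.0 - psi) * (h.sum() == 0)
        ll += np.log(site_lik)
    return -ll

# toy detection histories (1 = species detected) for 6 sites surveyed 4 times each
histories = [(1, 0, 1, 0), (0, 0, 0, 0), (1, 1, 0, 1), (0, 0, 1, 0), (0, 0, 0, 0), (1, 0, 0, 0)]
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(histories,), method="Nelder-Mead")
psi_hat, p_hat = 1.0 / (1.0 + np.exp(-fit.x[0])), 1.0 / (1.0 + np.exp(-fit.x[1]))
print(f"estimated occupancy ~ {psi_hat:.2f}, detection probability ~ {p_hat:.2f}")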

  11. Measuring and Modeling Fault Density for Plume-Fault Encounter Probability Estimation

    SciTech Connect

    Jordan, P.D.; Oldenburg, C.M.; Nicot, J.-P.

    2011-05-15

    Emission of carbon dioxide from fossil-fueled power generation stations contributes to global climate change. Storage of this carbon dioxide within the pores of geologic strata (geologic carbon storage) is one approach to mitigating the climate change that would otherwise occur. The large storage volume needed for this mitigation requires injection into brine-filled pore space in reservoir strata overlain by cap rocks. One of the main concerns of storage in such rocks is leakage via faults. In the early stages of site selection, site-specific fault coverages are often not available. This necessitates a method for using available fault data to develop an estimate of the likelihood of injected carbon dioxide encountering and migrating up a fault, primarily due to buoyancy. Fault population statistics provide one of the main inputs to calculate the encounter probability. Previous fault population statistics work is shown to be applicable to areal fault density statistics. This result is applied to a case study in the southern portion of the San Joaquin Basin, with the result that a carbon dioxide plume from a previously planned injection had a 3% chance of encountering a fully seal-offsetting fault.

  12. Estimation of the failure probability during EGS stimulation based on borehole data

    NASA Astrophysics Data System (ADS)

    Meller, C.; Kohl, Th.; Gaucher, E.

    2012-04-01

    In recent times the search for alternative sources of energy has been fostered by the scarcity of fossil fuels. With its ability to permanently provide electricity or heat with little emission of CO2, geothermal energy will have an important share in the energy mix of the future. Within Europe, scientists have identified many locations with conditions suitable for Enhanced Geothermal System (EGS) projects. In order to provide sufficiently high reservoir permeability, EGS require borehole stimulations prior to installation of power plants (Gérard et al, 2006). Induced seismicity during water injection into EGS reservoirs is a factor that currently can be neither predicted nor controlled. Often, people living near EGS projects are frightened by smaller earthquakes occurring during stimulation or injection. As this fear can lead to widespread disapproval of geothermal power plants, it is desirable to find a way to estimate the probability that fractures will shear when water is injected into a geothermal reservoir at a given pressure. This knowledge makes it possible to predict the mechanical behavior of a reservoir in response to a change in pore pressure conditions. In the present study, an approach is proposed for estimating the shearing probability based on statistical analyses of fracture distribution, orientation, and clustering, together with their geological properties. Based on geophysical logs of five wells in Soultz-sous-Forêts, France, and with the help of statistical tools, the Mohr criterion, and the geological and mineralogical properties of the host rock and the fracture fillings, correlations between the wells are analyzed. This is achieved with the purpose-written MATLAB code Fracdens, which enables statistical analysis of the log files in different ways. With the application of a pore pressure change, the evolution of the critical pressure on the fractures can be determined. A special focus is on the clay fillings of the fractures and how they reduce

  13. Estimating Effect Sizes and Expected Replication Probabilities from GWAS Summary Statistics

    PubMed Central

    Holland, Dominic; Wang, Yunpeng; Thompson, Wesley K.; Schork, Andrew; Chen, Chi-Hua; Lo, Min-Tzu; Witoelar, Aree; Werge, Thomas; O'Donovan, Michael; Andreassen, Ole A.; Dale, Anders M.

    2016-01-01

    Genome-wide Association Studies (GWAS) result in millions of summary statistics (“z-scores”) for single nucleotide polymorphism (SNP) associations with phenotypes. These rich datasets afford deep insights into the nature and extent of genetic contributions to complex phenotypes such as psychiatric disorders, which are understood to have substantial genetic components that arise from very large numbers of SNPs. The complexity of the datasets, however, poses a significant challenge to maximizing their utility. This is reflected in a need for better understanding the landscape of z-scores, as such knowledge would enhance causal SNP and gene discovery, help elucidate mechanistic pathways, and inform future study design. Here we present a parsimonious methodology for modeling effect sizes and replication probabilities, relying only on summary statistics from GWAS substudies, and a scheme allowing for direct empirical validation. We show that modeling z-scores as a mixture of Gaussians is conceptually appropriate, in particular taking into account ubiquitous non-null effects that are likely in the datasets due to weak linkage disequilibrium with causal SNPs. The four-parameter model allows for estimating the degree of polygenicity of the phenotype and predicting the proportion of chip heritability explainable by genome-wide significant SNPs in future studies with larger sample sizes. We apply the model to recent GWAS of schizophrenia (N = 82,315) and putamen volume (N = 12,596), with approximately 9.3 million SNP z-scores in both cases. We show that, over a broad range of z-scores and sample sizes, the model accurately predicts expectation estimates of true effect sizes and replication probabilities in multistage GWAS designs. We assess the degree to which effect sizes are over-estimated when based on linear-regression association coefficients. We estimate the polygenicity of schizophrenia to be 0.037 and the putamen to be 0.001, while the respective sample sizes
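
    A stripped-down version of the mixture idea, assuming just two Gaussian components (null and non-null) rather than the paper's full four-parameter model, can be fit to synthetic z-scores by maximum likelihood:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    # Synthetic z-scores: a small fraction of SNPs carry real (non-null) effects.
    z = np.concatenate([rng.normal(0.0, 1.0, 95000),             # null SNPs
                        rng.normal(0.0, np.sqrt(1 + 9), 5000)])  # non-null, inflated variance

    def neg_log_lik(params):
        pi1, sigma_b2 = params   # mixture weight (polygenicity) and effect-size variance
        f = (1 - pi1) * norm.pdf(z, 0, 1) + pi1 * norm.pdf(z, 0, np.sqrt(1 + sigma_b2))
        return -np.sum(np.log(f))

    res = minimize(neg_log_lik, x0=[0.01, 5.0], bounds=[(1e-6, 0.5), (1e-3, 100)])
    pi1_hat, sigma_b2_hat = res.x
    print(f"estimated polygenicity ~ {pi1_hat:.4f}, effect-size variance ~ {sigma_b2_hat:.2f}")
    ```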

  14. Statistical Surrogate Models for Estimating Probability of High-Consequence Climate Change

    NASA Astrophysics Data System (ADS)

    Field, R.; Constantine, P.; Boslough, M.

    2011-12-01

    We have posed the climate change problem in a framework similar to that used in safety engineering, by acknowledging that probabilistic risk assessments focused on low-probability, high-consequence climate events are perhaps more appropriate than studies focused simply on best estimates. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We have developed specialized statistical surrogate models (SSMs) that can be used to make predictions about the tails of the associated probability distributions. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field, that is, a random variable for every fixed location in the atmosphere at all times. The SSM can be calibrated to available spatial and temporal data from existing climate databases, or to a collection of outputs from general circulation models. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework was also developed to provide quantitative measures of confidence, via Bayesian credible intervals, to assess these risks. To illustrate the use of the SSM, we considered two collections of NCAR CCSM 3.0 output data. The first collection corresponds to average December surface temperature for years 1990-1999 based on a collection of 8 different model runs obtained from the Program for Climate Model Diagnosis and Intercomparison (PCMDI). We calibrated the surrogate model to the available model data and made various point predictions. We also analyzed average precipitation rate in June, July, and August over a 54-year period assuming a cyclic Y2K ocean model. We
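
    The cheap-sampling argument can be illustrated at a single grid cell: calibrate a trivially simple surrogate (here just a Gaussian, not the space/time random field used in the study) to a handful of ensemble values and then draw enough realizations to resolve the upper tail. The temperatures and threshold below are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    # Hypothetical December-mean surface temperatures at one grid cell from 8 runs (degC).
    runs = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.4, 1.0, 1.3])

    # Calibrate the simple surrogate to the ensemble, then draw many cheap realizations.
    mu, sigma = runs.mean(), runs.std(ddof=1)
    samples = rng.normal(mu, sigma, 1_000_000)
    threshold = 2.0
    p_tail = np.mean(samples > threshold)
    print(f"P(T > {threshold} degC) ~ {p_tail:.4f}")
    ```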

  15. Estimated probabilities and volumes of postwildfire debris flows, a prewildfire evaluation for the upper Blue River watershed, Summit County, Colorado

    USGS Publications Warehouse

    Elliott, John G.; Flynn, Jennifer L.; Bossong, Clifford R.; Char, Stephen J.

    2011-01-01

    The subwatersheds with the greatest potential postwildfire and postprecipitation hazards are those with both high probabilities of debris-flow occurrence and large estimated volumes of debris-flow material. The high probabilities of postwildfire debris flows, the associated large estimated debris-flow volumes, and the densely populated areas along the creeks and near the outlets of the primary watersheds indicate that Indiana, Pennsylvania, and Spruce Creeks are associated with a relatively high combined debris-flow hazard.

  16. Estimating site occupancy rates for aquatic plants using spatial sub-sampling designs when detection probabilities are less than one

    USGS Publications Warehouse

    Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.

    2011-01-01

    Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
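
    A minimal sketch of the constant-occupancy, constant-detection model described above, fit to synthetic detection histories: sites with no detections contribute a mixture of "occupied but missed" and "truly unoccupied" terms, which is what separates the occupancy estimate from the naive proportion of sites with detections.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Synthetic detection histories: n_sites x K visits (1 = detected, 0 = not detected).
    rng = np.random.default_rng(3)
    n_sites, K, psi_true, p_true = 200, 4, 0.6, 0.4
    occupied = rng.random(n_sites) < psi_true
    y = ((rng.random((n_sites, K)) < p_true) & occupied[:, None]).astype(int)

    def neg_log_lik(params):
        psi, p = 1 / (1 + np.exp(-np.asarray(params)))   # logit scale keeps (0, 1) bounds
        d = y.sum(axis=1)                                # detections per site
        lik = np.where(d > 0,
                       psi * p**d * (1 - p)**(K - d),            # detected at least once
                       psi * (1 - p)**K + (1 - psi))             # never detected
        return -np.sum(np.log(lik))

    res = minimize(neg_log_lik, x0=[0.0, 0.0])
    psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
    naive = np.mean(y.sum(axis=1) > 0)
    print(f"naive occupancy {naive:.2f} vs occupancy-model estimate {psi_hat:.2f} (p = {p_hat:.2f})")
    ```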

  17. A logistic regression equation for estimating the probability of a stream flowing perennially in Massachusetts

    USGS Publications Warehouse

    Bent, Gardner C.; Archfield, Stacey A.

    2002-01-01

    A logistic regression equation was developed for estimating the probability of a stream flowing perennially at a specific site in Massachusetts. The equation provides city and town conservation commissions and the Massachusetts Department of Environmental Protection with an additional method for assessing whether streams are perennial or intermittent at a specific site in Massachusetts. This information is needed to assist these environmental agencies, who administer the Commonwealth of Massachusetts Rivers Protection Act of 1996, which establishes a 200-foot-wide protected riverfront area extending along the length of each side of the stream from the mean annual high-water line along each side of perennial streams, with exceptions in some urban areas. The equation was developed by relating the verified perennial or intermittent status of a stream site to selected basin characteristics of naturally flowing streams (no regulation by dams, surface-water withdrawals, ground-water withdrawals, diversion, waste-water discharge, and so forth) in Massachusetts. Stream sites used in the analysis were identified as perennial or intermittent on the basis of review of measured streamflow at sites throughout Massachusetts and on visual observation at sites in the South Coastal Basin, southeastern Massachusetts. Measured or observed zero flow(s) during months of extended drought as defined by the 310 Code of Massachusetts Regulations (CMR) 10.58(2)(a) were not considered when designating the perennial or intermittent status of a stream site. The database used to develop the equation included a total of 305 stream sites (84 intermittent- and 89 perennial-stream sites in the State, and 50 intermittent- and 82 perennial-stream sites in the South Coastal Basin). Stream sites included in the database had drainage areas that ranged from 0.14 to 8.94 square miles in the State and from 0.02 to 7.00 square miles in the South Coastal Basin. Results of the logistic regression analysis
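
    As a rough illustration of the approach (not the published equation or its coefficients), a logistic regression can be fit to verified perennial/intermittent labels against a basin characteristic such as log drainage area and then used to return a probability for a new site; the data below are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic training data standing in for the 305 verified sites:
    # predictor = log10 drainage area (sq mi), response = 1 if perennial.
    rng = np.random.default_rng(7)
    log_area = rng.uniform(np.log10(0.02), np.log10(8.94), 305)
    p_true = 1 / (1 + np.exp(-(2.5 * log_area + 1.0)))
    perennial = (rng.random(305) < p_true).astype(int)

    model = LogisticRegression().fit(log_area.reshape(-1, 1), perennial)

    # Probability that a stream draining 0.5 square miles flows perennially.
    p_hat = model.predict_proba([[np.log10(0.5)]])[0, 1]
    print(f"P(perennial | 0.5 sq mi) = {p_hat:.2f}")
    ```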

  18. A methodology to estimate probability of occurrence of floods using principal component analysis

    NASA Astrophysics Data System (ADS)

    Castro Heredia, L. M.; Gironas, J. A.

    2014-12-01

    Flood events and debris flows are characterized by a very rapid response of basins to precipitation, often resulting in loss of life and property damage. Complex topography with steep slopes and narrow valleys increases the likelihood of these events. An early warning system (EWS) is a tool that allows a hazardous event to be anticipated, which in turn provides time for an early response to reduce negative impacts. These EWSs can rely on very powerful and computationally demanding models to predict flow discharges and inundation zones, which require data that are typically unavailable. Instead, simpler EWSs based on a statistical analysis of observed hydro-meteorological data could be a good alternative. In this work we propose a methodology for estimating the probability of exceedance of maximum flow discharges using principal components analysis (PCA). In the method we first perform a spatio-temporal cross-correlation analysis between extreme flow data and daily meteorological records for the last 15 days prior to the day of the flood event. We then use PCA to create synthetic variables which are representative of the meteorological variables associated with the flood event (i.e. cumulative rainfall and minimum temperature). Finally, we developed a model to explain the probability of exceedance using the principal components. The methodology was applied to a basin in the foothill area of Santiago, Chile, for which all the extreme events between 1970 and 2013 were analyzed. Results show that elevation rather than distance or location within the contributing basin is what mainly explains the statistical correlation between meteorological records and flood events. Two principal components were found that explain more than 90% of the total variance of the accumulated rainfalls and minimum temperatures. One component was formed with cumulative rainfall from 3 to 15 days prior to the event, whereas the other one was formed with the minimum temperatures for the last 2 days preceding
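
    The two-stage idea, PCA on the antecedent meteorological variables followed by a model for the exceedance probability, can be sketched with placeholder data; the predictors and outcomes below are random stand-ins, not the Santiago records.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(11)
    n_events = 200
    # Hypothetical standardized predictors per event: cumulative rainfall over the
    # 3..15 antecedent days and minimum temperature over the last 2 days (columns).
    X = rng.normal(size=(n_events, 15))
    exceeded = (rng.random(n_events) < 0.2).astype(int)   # 1 if a flow threshold was exceeded

    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)                          # two synthetic variables
    print("variance explained:", pca.explained_variance_ratio_.sum())

    clf = LogisticRegression().fit(scores, exceeded)
    p_exceed = clf.predict_proba(scores[:1])[0, 1]
    print(f"P(exceedance) for the first event = {p_exceed:.2f}")
    ```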

  19. Assessing Categorization Performance at the Individual Level: A Comparison of Monte Carlo Simulation and Probability Estimate Model Procedures

    PubMed Central

    Arterberry, Martha E.; Bornstein, Marc H.; Haynes, O. Maurice

    2012-01-01

    Two analytical procedures for identifying young children as categorizers, the Monte Carlo Simulation and the Probability Estimate Model, were compared. Using a sequential touching method, children age 12, 18, 24, and 30 months were given seven object sets representing different levels of categorical classification. From their touching performance, the probability that children were categorizing was then determined independently using Monte Carlo Simulation and the Probability Estimate Model. The two analytical procedures resulted in different percentages of children being classified as categorizers. Results using the Monte Carlo Simulation were more consistent with group-level analyses than results using the Probability Estimate Model. These findings recommend using the Monte Carlo Simulation for determining individual categorizer classification. PMID:21402410
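
    The Monte Carlo side of the comparison can be sketched as a permutation test on a single child's touch sequence: the observed mean run length of same-category touches is compared with the distribution obtained by shuffling the same touches. The sequence and statistic below are illustrative and do not reproduce the authors' exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def mean_run_length(seq):
        """Average length of runs of consecutive touches to the same category."""
        runs, current = [], 1
        for a, b in zip(seq[:-1], seq[1:]):
            if a == b:
                current += 1
            else:
                runs.append(current)
                current = 1
        runs.append(current)
        return np.mean(runs)

    # Hypothetical touch sequence for one child over a two-category object set.
    observed = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1]
    obs_stat = mean_run_length(observed)

    # Null distribution: shuffle the same touches many times.
    null = [mean_run_length(rng.permutation(observed).tolist()) for _ in range(10_000)]
    p_value = np.mean(np.array(null) >= obs_stat)
    print(f"observed mean run length {obs_stat:.2f}, Monte Carlo p = {p_value:.3f}")
    ```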

  20. Estimating a neutral reference for electroencephalographic recordings: the importance of using a high-density montage and a realistic head model

    NASA Astrophysics Data System (ADS)

    Liu, Quanying; Balsters, Joshua H.; Baechinger, Marc; van der Groen, Onno; Wenderoth, Nicole; Mantini, Dante

    2015-10-01

    Objective. In electroencephalography (EEG) measurements, the signal of each recording electrode is contrasted with a reference electrode or a combination of electrodes. The estimation of a neutral reference is a long-standing issue in EEG data analysis, which has motivated the proposal of different re-referencing methods, among which linked-mastoid re-referencing (LMR), average re-referencing (AR) and reference electrode standardization technique (REST). In this study we quantitatively assessed the extent to which the use of a high-density montage and a realistic head model can impact on the optimal estimation of a neutral reference for EEG recordings. Approach. Using simulated recordings generated by projecting specific source activity over the sensors, we assessed to what extent AR, REST and LMR may distort the scalp topography. We examined the impact electrode coverage has on AR and REST, and how accurate the REST reconstruction is for realistic and less realistic (three-layer and single-layer spherical) head models, and with possible uncertainty in the electrode positions. We assessed LMR, AR and REST also in the presence of typical EEG artifacts that are mixed in the recordings. Finally, we applied them to real EEG data collected in a target detection experiment to corroborate our findings on simulated data. Main results. Both AR and REST have relatively low reconstruction errors compared to LMR, and REST is less sensitive than AR and LMR to artifacts mixed in the EEG data. For both AR and REST, high electrode density yields low re-referencing reconstruction errors. A realistic head model is critical for REST, leading to a more accurate estimate of a neutral reference compared to spherical head models. With a low-density montage, REST shows a more reliable reconstruction than AR either with a realistic or a three-layer spherical head model. Conversely, with a high-density montage AR yields better results unless precise information on electrode positions
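
    Of the three re-referencing schemes compared, AR and LMR are simple linear operations that can be written directly; REST additionally requires a lead-field matrix from a head model and is only noted in a comment. The channel count and mastoid indices below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_samples = 64, 1000
    eeg = rng.normal(size=(n_channels, n_samples))     # raw recordings vs. original reference
    m1_idx, m2_idx = 62, 63                            # hypothetical mastoid channel indices

    # Average re-referencing (AR): subtract the instantaneous mean over all electrodes.
    eeg_ar = eeg - eeg.mean(axis=0, keepdims=True)

    # Linked-mastoid re-referencing (LMR): subtract the mean of the two mastoid channels.
    eeg_lmr = eeg - eeg[[m1_idx, m2_idx]].mean(axis=0, keepdims=True)

    # REST additionally needs a lead-field matrix from a (realistic) head model and is
    # therefore not reproduced in this sketch.
    print(eeg_ar.shape, eeg_lmr.shape)
    ```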

  1. Estimating a neutral reference for electroencephalographic recordings: The importance of using a high-density montage and a realistic head model

    PubMed Central

    Liu, Quanying; Balsters, Joshua H.; Baechinger, Marc; van der Groen, Onno; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Objective In electroencephalography (EEG) measurements, the signal of each recording electrode is contrasted with a reference electrode or a combination of electrodes. The estimation of a neutral reference is a long-standing issue in EEG data analysis, which has motivated the proposal of different re-referencing methods, among which linked-mastoid re-referencing (LMR), average re-referencing (AR) and reference electrode standardization technique (REST). In this study we quantitatively assessed the extent to which the use of a high-density montage and a realistic head model can impact on the optimal estimation of a neutral reference for EEG recordings. Approach Using simulated recordings generated by projecting specific source activity over the sensors, we assessed to what extent AR, REST and LMR may distort the scalp topography. We examined the impact electrode coverage has on AR and REST, and how accurate the REST reconstruction is for realistic and less realistic (three-layer and single-layer spherical) head models, and with possible uncertainty in the electrode positions. We assessed LMR, AR and REST also in the presence of typical EEG artifacts that are mixed in the recordings. Finally, we applied them to real EEG data collected in a target detection experiment to corroborate our findings on simulated data. Main results Both AR and REST have relatively low reconstruction errors compared to LMR, and REST is less sensitive than AR and LMR to artifacts mixed in the EEG data. For both AR and REST, high electrode density yields low re-referencing reconstruction errors. A realistic head model is critical for REST, leading to a more accurate estimate of a neutral reference compared to spherical head models. With a low-density montage, REST shows a more reliable reconstruction than AR either with a realistic or a three-layer spherical head model. Conversely, with a high-density montage AR yields better results unless precise information on electrode positions is

  2. Estimated probability of arsenic in groundwater from bedrock aquifers in New Hampshire, 2011

    USGS Publications Warehouse

    Ayotte, Joseph D.; Cahillane, Matthew; Hayes, Laura; Robinson, Keith W.

    2012-01-01

    Probabilities of arsenic occurrence in groundwater from bedrock aquifers at concentrations of 1, 5, and 10 micrograms per liter (µg/L) were estimated during 2011 using multivariate logistic regression. These estimates were developed for use by the New Hampshire Environmental Public Health Tracking Program. About 39 percent of New Hampshire bedrock groundwater was identified as having at least a 50 percent chance of containing an arsenic concentration greater than or equal to 1 µg/L. This compares to about 7 percent of New Hampshire bedrock groundwater having at least a 50 percent chance of containing an arsenic concentration equaling or exceeding 5 µg/L and about 5 percent of the State having at least a 50 percent chance for its bedrock groundwater to contain concentrations at or above 10 µg/L. The southeastern counties of Merrimack, Strafford, Hillsborough, and Rockingham have the greatest potential for having arsenic concentrations above 5 and 10 µg/L in bedrock groundwater. Significant predictors of arsenic in groundwater from bedrock aquifers for all three thresholds analyzed included geologic, geochemical, land use, hydrologic, topographic, and demographic factors. Among the three thresholds evaluated, there were some differences in explanatory variables, but many variables were the same. More than 250 individual predictor variables were assembled for this study and tested as potential predictor variables for the models. More than 1,700 individual measurements of arsenic concentration from a combination of public and private water-supply wells served as the dependent (or predicted) variable in the models. The statewide maps generated by the probability models are not designed to predict arsenic concentration in any single well, but they are expected to provide useful information in areas of the State that currently contain little to no data on arsenic concentration. They also may aid in resource decision making, in determining potential risk for private

  3. A probability model for evaluating the bias and precision of influenza vaccine effectiveness estimates from case-control studies.

    PubMed

    Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A

    2015-05-01

    As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care against ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs. PMID:25147970
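
    In a test-negative design, VE is estimated as one minus the odds ratio comparing vaccination odds in influenza-positive versus influenza-negative (test-negative) patients. The counts below are invented solely to show the arithmetic.

    ```python
    import numpy as np

    # Hypothetical test-negative study counts (vaccinated/unvaccinated x flu+/flu-).
    a = 120   # vaccinated, influenza-positive
    b = 480   # vaccinated, influenza-negative (non-influenza ARI)
    c = 300   # unvaccinated, influenza-positive
    d = 600   # unvaccinated, influenza-negative

    odds_ratio = (a / b) / (c / d)
    ve = 1 - odds_ratio
    se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    ci = 1 - np.exp(np.log(odds_ratio) + np.array([1.96, -1.96]) * se_log_or)
    print(f"VE = {ve:.2%}, 95% CI ({ci[0]:.2%}, {ci[1]:.2%})")
    ```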

  4. Estimation of peak discharge quantiles for selected annual exceedance probabilities in northeastern Illinois

    USGS Publications Warehouse

    Over, Thomas; Saito, Riki J.; Veilleux, Andrea; Sharpe, Jennifer B.; Soong, David; Ishii, Audrey

    2016-01-01

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively) for watersheds in Illinois based on annual maximum peak discharge data from 117 watersheds in and near northeastern Illinois. One set of equations was developed through a temporal analysis with a two-step least squares-quantile regression technique that measures the average effect of changes in the urbanization of the watersheds used in the study. The resulting equations can be used to adjust rural peak discharge quantiles for the effect of urbanization, and in this study the equations also were used to adjust the annual maximum peak discharges from the study watersheds to 2010 urbanization conditions. The other set of equations was developed by a spatial analysis. This analysis used generalized least-squares regression to fit the peak discharge quantiles computed from the urbanization-adjusted annual maximum peak discharges from the study watersheds to drainage-basin characteristics. The peak discharge quantiles were computed by using the Expected Moments Algorithm following the removal of potentially influential low floods defined by a multiple Grubbs-Beck test. To improve the quantile estimates, generalized skew coefficients were obtained from a newly developed regional skew model in which the skew increases with the urbanized land use fraction. The drainage-basin characteristics used as explanatory variables in the spatial analysis include drainage area, the fraction of developed land, the fraction of land with poorly drained soils or likely water, and the basin slope estimated as the ratio of the basin relief to basin perimeter. This report also provides the following: (1) examples to illustrate the use of the spatial and urbanization-adjustment equations for estimating peak discharge quantiles at

  5. Estimation of the Probable Maximum Flood for a Small Lowland River in Poland

    NASA Astrophysics Data System (ADS)

    Banasik, K.; Hejduk, L.

    2009-04-01

    The planning, design and use of hydrotechnical structures often requires the assessment of maximum flood potentials. The most common term applied to this upper limit of flooding is the probable maximum flood (PMF). The PMP/UH (probable maximum precipitation/unit hydrograph) method has been used in this study to predict the PMF of the small agricultural lowland river basin of Zagozdzonka (left tributary of the Vistula river) in Poland. The river basin, located about 100 km south of Warsaw, with an area - upstream of the gauge at Plachty - of 82 km2, has been investigated by the Department of Water Engineering and Environmental Restoration of Warsaw University of Life Sciences - SGGW since 1962. An over 40-year flow record was used in a previous investigation for predicting the T-year flood discharge (Banasik et al., 2003). The objective here was to estimate the PMF using the PMP/UH method and to compare the result with the 100-year flood. A new depth-duration relation of PMP for the local climatic conditions has been developed based on Polish maximum observed rainfall data (Ozga-Zielinska & Ozga-Zielinski, 2003). An exponential formula, with an exponent of 0.47, i.e. close to the exponent in the formula for world PMP and also in the formula of PMP for Great Britain (Wilson, 1993), gives a rainfall depth about 40% lower than Wilson's. The effective rainfall (runoff volume) has been estimated from the PMP of various durations using the CN method (USDA-SCS, 1986). The CN value, as well as the parameters of the IUH model (Nash, 1957), have been established from 27 rainfall-runoff events recorded in the river basin in the period 1980-2004. Variability of the parameter values with the size of the events is discussed in the paper. The results of the analysis show that the peak discharge of the PMF is 4.5 times larger than the 100-year flood, and the volume ratio of the respective direct hydrographs caused by rainfall events of critical duration is 4.0. References: 1. Banasik K
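
    The CN method cited above (USDA-SCS, 1986) converts a rainfall depth into direct runoff with a closed-form expression; a minimal sketch, with a curve number and rainfall depth that are illustrative rather than the values calibrated for the Zagozdzonka basin:

    ```python
    # NRCS curve-number runoff (USDA-SCS, 1986), as referenced in the abstract.
    def cn_runoff(p_mm: float, cn: float) -> float:
        """Direct runoff depth (mm) for rainfall depth p_mm and curve number cn."""
        s = 25400.0 / cn - 254.0          # potential maximum retention, mm
        ia = 0.2 * s                      # initial abstraction
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

    # Hypothetical values, not those calibrated for the study basin.
    print(f"runoff = {cn_runoff(p_mm=180.0, cn=75.0):.1f} mm")
    ```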

  7. Using of bayesian networks to estimate the probability of "NATECH" scenario occurrence

    NASA Astrophysics Data System (ADS)

    Dobes, Pavel; Dlabka, Jakub; Jelšovská, Katarína; Polorecká, Mária; Baudišová, Barbora; Danihelka, Pavel

    2015-04-01

    In the twentieth century, Bayesian statistics and probability were little used (perhaps not a preferred approach) in the area of natural and industrial risk analysis and management. Nor were they used in the analysis of so-called NATECH accidents (chemical accidents triggered by natural events such as earthquakes, floods, or lightning; ref. E. Krausmann, 2011, doi:10.5194/nhess-11-921-2011). From the beginning, the main role was played by so-called "classical" frequentist probability (ref. Neyman, 1937), which to this day relies mainly on the true/false outcomes of experiments and monitoring and does not allow expert beliefs, expectations and judgements to be taken into account (which, on the other hand, is one of the well-known pillars of the Bayesian approach to probability). In the last 20 or 30 years, publications and conferences have shown a renaissance of Bayesian statistics in many scientific disciplines, including various branches of the geosciences. Is the need for a certain level of trust in expert judgment within risk analysis back? After several decades of development in this field, the following hypothesis can be proposed (to be checked): probabilities of complex crisis situations and their top events (many NATECH events can be classified as crisis situations or emergencies) cannot be estimated by the classical frequentist approach alone, but also require a Bayesian approach (i.e., with the help of a pre-staged Bayesian network combining expert belief and expectation with classical frequentist inputs), because there is not always enough quantitative information from monitoring of historical emergencies, several dependent or independent variables may need to be considered, and, in general, every emergency situation unfolds a little differently. On this topic, the team of authors presents its proposal of a pre-staged, typified Bayesian network model for a specified NATECH scenario
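
    A toy pre-staged network in the spirit described above, with one natural-event node, one barrier node and one release node, shows how expert-style conditional probabilities combine into a scenario probability by summing over the unobserved states. All numbers are illustrative, not values from the paper.

    ```python
    # A toy two-layer network: natural event -> protective barrier -> hazmat release.
    p_flood = 0.02                               # P(flood at the site in a given year)
    p_barrier_fails = {True: 0.30, False: 0.01}  # P(barrier fails | flood), P(| no flood)
    p_release = {True: 0.50, False: 0.002}       # P(release | barrier fails), P(| barrier holds)

    p_total = 0.0
    for flood in (True, False):
        pf = p_flood if flood else 1 - p_flood
        for fails in (True, False):
            pb = p_barrier_fails[flood] if fails else 1 - p_barrier_fails[flood]
            p_total += pf * pb * p_release[fails]

    print(f"annual P(NATECH release) ~ {p_total:.4f}")
    ```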

  8. Heuristics can produce surprisingly rational probability estimates: Comment on Costello and Watts (2014).

    PubMed

    Nilsson, Håkan; Juslin, Peter; Winman, Anders

    2016-01-01

    Costello and Watts (2014) present a model assuming that people's knowledge of probabilities adheres to probability theory, but that their probability judgments are perturbed by a random noise in the retrieval from memory. Predictions for the relationships between probability judgments for constituent events and their disjunctions and conjunctions, as well as for sums of such judgments were derived from probability theory. Costello and Watts (2014) report behavioral data showing that subjective probability judgments accord with these predictions. Based on the finding that subjective probability judgments follow probability theory, Costello and Watts (2014) conclude that the results imply that people's probability judgments embody the rules of probability theory and thereby refute theories of heuristic processing. Here, we demonstrate the invalidity of this conclusion by showing that all of the tested predictions follow straightforwardly from an account assuming heuristic probability integration (Nilsson, Winman, Juslin, & Hansson, 2009). We end with a discussion of a number of previous findings that harmonize very poorly with the predictions by the model suggested by Costello and Watts (2014). PMID:26709414

  9. An Illustration of Inverse Probability Weighting to Estimate Policy-Relevant Causal Effects.

    PubMed

    Edwards, Jessie K; Cole, Stephen R; Lesko, Catherine R; Mathews, W Christopher; Moore, Richard D; Mugavero, Michael J; Westreich, Daniel

    2016-08-15

    Traditional epidemiologic approaches allow us to compare counterfactual outcomes under 2 exposure distributions, usually 100% exposed and 100% unexposed. However, to estimate the population health effect of a proposed intervention, one may wish to compare factual outcomes under the observed exposure distribution to counterfactual outcomes under the exposure distribution produced by an intervention. Here, we used inverse probability weights to compare the 5-year mortality risk under observed antiretroviral therapy treatment plans to the 5-year mortality risk that would have been observed under an intervention in which all patients initiated therapy immediately upon entry into care among patients positive for human immunodeficiency virus in the US Centers for AIDS Research Network of Integrated Clinical Systems multisite cohort study between 1998 and 2013. Therapy-naïve patients (n = 14,700) were followed from entry into care until death, loss to follow-up, or censoring at 5 years or on December 31, 2013. The 5-year cumulative incidence of mortality was 11.65% under observed treatment plans and 10.10% under the intervention, yielding a risk difference of -1.57% (95% confidence interval: -3.08, -0.06). Comparing outcomes under the intervention with outcomes under observed treatment plans provides meaningful information about the potential consequences of new US guidelines to treat all patients with human immunodeficiency virus regardless of CD4 cell count under actual clinical conditions. PMID:27469514
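
    A compact sketch of the inverse-probability-weighting step on synthetic data: a propensity model gives each patient's probability of the exposure actually received, and weighting by its inverse re-creates the outcome distribution that a "treat-all" intervention would have produced. The variable names and effect sizes are invented, not the cohort's.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    n = 5000
    cd4 = rng.normal(350, 150, n)                                      # baseline covariate
    treated = (rng.random(n) < 1 / (1 + np.exp((cd4 - 350) / 100))).astype(int)
    died = (rng.random(n) < 0.10 - 0.02 * treated + 0.0001 * (350 - cd4)).astype(int)

    # Weights: inverse of the probability of the exposure actually received, so the
    # weighted sample mimics everyone following the intervention of interest.
    ps = LogisticRegression().fit(cd4.reshape(-1, 1), treated).predict_proba(cd4.reshape(-1, 1))[:, 1]
    w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

    risk_observed = died.mean()
    risk_if_all_treated = np.average(died[treated == 1], weights=w[treated == 1])
    print(f"observed 5-year risk {risk_observed:.3f} vs under 'treat-all' {risk_if_all_treated:.3f}")
    ```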

  10. Moment-Based Probability Modeling and Extreme Response Estimation, The FITS Routine Version 1.2

    SciTech Connect

    MANUEL,LANCE; KASHEF,TINA; WINTERSTEIN,STEVEN R.

    1999-11-01

    This report documents the use of the FITS routine, which provides automated fits of various analytical, commonly used probability models from input data. It is intended to complement the previously distributed FITTING routine documented in RMS Report 14 (Winterstein et al., 1994), which implements relatively complex four-moment distribution models whose parameters are fit with numerical optimization routines. Although these four-moment fits can be quite useful and faithful to the observed data, their complexity can make them difficult to automate within standard fitting algorithms. In contrast, FITS provides more robust (lower moment) fits of simpler, more conventional distribution forms. For each database of interest, the routine estimates the distribution of annual maximum response based on the data values and the duration, T, over which they were recorded. To focus on the upper tails of interest, the user can also supply an arbitrary lower-bound threshold, χ_low, above which a shifted distribution model (exponential or Weibull) is fit.
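
    A rough analogue of the threshold-and-fit step, assuming exponentially distributed exceedances of a user-supplied threshold arriving as a Poisson process (a standard peaks-over-threshold argument, not necessarily the FITS internals): the annual-maximum distribution then follows in closed form.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    # Synthetic response record: hourly values over T = 2 years.
    T_years = 2.0
    x = rng.gumbel(loc=10.0, scale=2.0, size=int(T_years * 365.25 * 24))

    chi_low = np.quantile(x, 0.99)                 # user-supplied lower-bound threshold
    exceed = x[x > chi_low] - chi_low
    beta = exceed.mean()                           # MLE scale of a shifted exponential
    rate = exceed.size / T_years                   # exceedances per year (Poisson assumption)

    def annual_max_cdf(level):
        """P(annual maximum <= level) under exponential exceedances of a Poisson process."""
        return np.exp(-rate * np.exp(-(level - chi_low) / beta))

    level_100yr = chi_low + beta * np.log(rate * 100.0)   # ~1% annual exceedance level
    print(f"threshold {chi_low:.2f}, 100-year level ~ {level_100yr:.2f}")
    print(f"P(annual max <= 100-yr level) = {annual_max_cdf(level_100yr):.3f}")
    ```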

  11. Estimate of the probability of a lightning strike to the Galileo probe

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.

    1985-01-01

    Lightning strikes to aerospace vehicles occur mainly in or near clouds. As the Galileo entry probe will pass most of its operational life in the clouds of Jupiter, which is known to have lightning activity, the present study is concerned with the risk of a lightning strike to the probe. A strike to the probe could cause physical damage to the structure and/or damage to the electronic equipment aboard the probe. It is thought to be possible, for instance, that the instrument failures which occurred on all four Pioneer Venus entry probes at an altitude of 12 km were due to an external electric discharge. The probability of a lightning strike to the Galileo probe is evaluated. It is found that the estimate of a strike to the probe is only 0.001, which is about the same as the expected failure rate due to other design factors. In the case of entry probes to cloud-covered planets, a consideration of measures for protecting the vehicle and its payload from lightning appears to be appropriate.

  12. Estimate of the probability of a lightning strike to the Galileo probe

    NASA Astrophysics Data System (ADS)

    Borucki, W. J.

    1985-04-01

    Lightning strikes to aerospace vehicles occur mainly in or near clouds. As the Galileo entry probe will pass most of its operational life in the clouds of Jupiter, which is known to have lightning activity, the present study is concerned with the risk of a lightning strike to the probe. A strike to the probe could cause physical damage to the structure and/or damage to the electronic equipment aboard the probe. It is thought to be possible, for instance, that the instrument failures which occurred on all four Pioneer Venus entry probes at an altitude of 12 km were due to an external electric discharge. The probability of a lightning strike to the Galileo probe is evaluated. It is found that the estimate of a strike to the probe is only 0.001, which is about the same as the expected failure rate due to other design factors. In the case of entry probes to cloud-covered planets, a consideration of measures for protecting the vehicle and its payload from lightning appears to be appropriate.

  13. A New Approach to Estimating the Probability for β-delayed Neutron Emission

    SciTech Connect

    McCutchan, E.A.; Sonzogni, A.A.; Johnson, T.D.; Abriola, D.; Birch, M.; Singh, B.

    2014-06-15

    The probability for neutron emission following β decay, Pn, is a crucial property for a wide range of physics and applications including nuclear structure, r-process nucleosynthesis, the control of nuclear reactors, and the post-processing of nuclear fuel. Despite much experimental effort, knowledge of Pn values is still lacking in very neutron-rich nuclei, requiring predictions from either systematics or theoretical models. Traditionally, systematic predictions were made by investigating the Pn value as a function of the decay Q value and the neutron separation energy in the daughter nucleus. A new approach to Pn systematics is presented which incorporates the half-life of the decay and the Q value for β-delayed neutron emission. This prescription correlates the known data better, and thus improves the estimation of Pn values for neutron-rich nuclei. Such an approach can be applied to generate input values for r-process network calculations or in the modeling of advanced fuel cycles.

  14. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-06-01

    Dissolved dense nonaqueous-phase liquid plumes are persistent, widespread problems in the DOE complex. At the Idaho National Engineering and Environmental Laboratory, dissolved trichloroethylene (TCE) is disappearing from the Snake River Plain aquifer (SRPA) by natural attenuation, a finding that saves significant site restoration costs. Acceptance of monitored natural attenuation as a preferred treatment technology requires direct evidence of the processes and rates of the degradation. Our proposal aims to provide that evidence for one such site by testing two hypotheses. First, we believe that realistic values for in situ rates of TCE cometabolism can be obtained by sustaining the putative microorganisms at the low catabolic activities consistent with aquifer conditions. Second, the patterns of functional gene expression evident in these communities under starvation conditions while carrying out TCE cometabolism can be used to diagnose the cometabolic activity in the aquifer itself. Using the cometabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained at this location and validate the long-term stewardship of this plume. Realistic terms for cometabolism of TCE will provide marked improvements in DOE's ability to predict and monitor natural attenuation of chlorinated organics at other sites, increase the acceptability of this solution, and provide significant economic and health benefits through this noninvasive remediation strategy. Finally, this project aims to derive valuable genomic information about the functional attributes of subsurface microbial communities upon which DOE must depend to resolve some of its most difficult contamination issues.
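
    The plume-longevity prediction mentioned above reduces, for simple first-order decay, to a one-line calculation; the concentrations and half-life below are placeholders, not values for the Snake River Plain aquifer.

    ```python
    import numpy as np

    # Hypothetical first-order natural-attenuation calculation (not site-specific values).
    c0 = 250.0       # current TCE concentration, micrograms per liter
    c_target = 5.0   # background / regulatory target, micrograms per liter
    half_life = 8.0  # apparent first-order half-life from cometabolism rate data, years

    k = np.log(2) / half_life
    t_clean = np.log(c0 / c_target) / k
    print(f"time to reach {c_target} ug/L: {t_clean:.0f} years")
    ```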

  15. Nonparametric estimation of transition probabilities in the non-Markov illness-death model: A comparative study.

    PubMed

    de Uña-Álvarez, Jacobo; Meira-Machado, Luís

    2015-06-01

    Multi-state models are often used for modeling complex event history data. In these models the estimation of the transition probabilities is of particular interest, since they allow for long-term predictions of the process. These quantities have been traditionally estimated by the Aalen-Johansen estimator, which is consistent if the process is Markov. Several non-Markov estimators have been proposed in the recent literature, and their superiority with respect to the Aalen-Johansen estimator has been proved in situations in which the Markov condition is strongly violated. However, the existing estimators have the drawback of requiring that the support of the censoring distribution contains the support of the lifetime distribution, which is not often the case. In this article, we propose two new methods for estimating the transition probabilities in the progressive illness-death model. Some asymptotic results are derived. The proposed estimators are consistent regardless of the Markov condition and of the aforementioned assumption about the censoring support. We explore the finite sample behavior of the estimators through simulations. The main conclusion of this piece of research is that the proposed estimators are much more efficient than the existing non-Markov estimators in most cases. An application to a clinical trial on colon cancer is included. Extensions to progressive processes beyond the three-state illness-death model are discussed. PMID:25735883

  16. Small-Area Estimation of the Probability of Toxocariasis in New York City Based on Sociodemographic Neighborhood Composition

    PubMed Central

    Walsh, Michael G.; Haseeb, M. A.

    2014-01-01

    Toxocariasis is increasingly recognized as an important neglected infection of poverty (NIP) in developed countries, and may constitute the most important NIP in the United States (US) given its association with chronic sequelae such as asthma and poor cognitive development. Its potential public health burden notwithstanding, toxocariasis surveillance is minimal throughout the US and so the true burden of disease remains uncertain in many areas. The Third National Health and Nutrition Examination Survey conducted a representative serologic survey of toxocariasis to estimate the prevalence of infection in diverse US subpopulations across different regions of the country. Using the NHANES III surveillance data, the current study applied the predicted probabilities of toxocariasis to the sociodemographic composition of New York census tracts to estimate the local probability of infection across the city. The predicted probability of toxocariasis ranged from 6% among US-born Latino women with a university education to 57% among immigrant men with less than a high school education. The predicted probability of toxocariasis exhibited marked spatial variation across the city, with particularly high infection probabilities in large sections of Queens, and smaller, more concentrated areas of Brooklyn and northern Manhattan. This investigation is the first attempt at small-area estimation of the probability surface of toxocariasis in a major US city. While this study does not define toxocariasis risk directly, it does provide a much needed tool to aid the development of toxocariasis surveillance in New York City. PMID:24918785

  17. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F. S.; Crawford, R. L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers. Third, we have systematically considered the aquifer contaminants at different locations in plumes at other DOE sites in order to determine whether MNA is a broadly applicable remediation strategy for chlorinated hydrocarbons (North Wind Inc.). Realistic terms for co-metabolism of TCE will provide marked improvements in DOE’s ability to predict and

  18. Probability weighted moments compared with some traditional techniques in estimating Gumbel parameters and quantiles.

    USGS Publications Warehouse

    Landwehr, J.M.; Matalas, N.C.; Wallis, J.R.

    1979-01-01

    Results were derived from Monte Carlo experiments by using both independent and serially correlated Gumbel numbers. The method of probability weighted moments was seen to compare favourably with two other techniques. -Authors
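
    For the Gumbel distribution, the probability-weighted-moment estimators take a simple closed form: with b0 the sample mean and b1 the first-order PWM, the scale is (2*b1 - b0)/ln 2 and the location is b0 minus Euler's constant times the scale. A short sketch on synthetic annual maxima (the sample itself is made up):

    ```python
    import numpy as np

    def gumbel_pwm(sample):
        """Gumbel location/scale via probability weighted moments."""
        x = np.sort(np.asarray(sample))
        n = x.size
        b0 = x.mean()
        b1 = np.sum((np.arange(n) / (n - 1)) * x) / n    # weights (i-1)/(n-1), i = 1..n
        alpha = (2 * b1 - b0) / np.log(2)                # scale
        xi = b0 - 0.5772156649 * alpha                   # location (Euler-Mascheroni constant)
        return xi, alpha

    rng = np.random.default_rng(21)
    data = rng.gumbel(loc=100.0, scale=25.0, size=60)    # synthetic annual maxima
    xi_hat, alpha_hat = gumbel_pwm(data)

    # T-year quantile of the fitted Gumbel distribution:
    T = 100
    q_T = xi_hat - alpha_hat * np.log(-np.log(1 - 1 / T))
    print(f"location {xi_hat:.1f}, scale {alpha_hat:.1f}, {T}-year quantile {q_T:.1f}")
    ```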

  19. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Colwell, F.S.; Crawford, R.L.; Sorenson, K.

    2005-09-01

    Acceptance of monitored natural attenuation (MNA) as a preferred treatment technology saves significant site restoration costs for DOE. However, in order to be accepted MNA requires direct evidence of which processes are responsible for the contaminant loss and also the rates of the contaminant loss. Our proposal aims to: 1) provide evidence for one example of MNA, namely the disappearance of the dissolved trichloroethylene (TCE) from the Snake River Plain aquifer (SRPA) at the Idaho National Laboratory’s Test Area North (TAN) site, 2) determine the rates at which aquifer microbes can co-metabolize TCE, and 3) determine whether there are other examples of natural attenuation of chlorinated solvents occurring at DOE sites. To this end, our research has several objectives. First, we have conducted studies to characterize the microbial processes that are likely responsible for the co-metabolic destruction of TCE in the aquifer at TAN (University of Idaho and INL). Second, we are investigating realistic rates of TCE co-metabolism at the low catabolic activities typical of microorganisms existing under aquifer conditions (INL). Using the co-metabolism rate parameters derived in low-growth bioreactors, we will complete the models that predict the time until background levels of TCE are attained in the aquifer at TAN and validate the long-term stewardship of this plume. Coupled with the research on low catabolic activities of co-metabolic microbes we are determining the patterns of functional gene expression by these cells, patterns that may be used to diagnose the co-metabolic activity in the SRPA or other aquifers.

  20. Statistics of Natural Populations. II. Estimating an Allele Probability in Families Descended from Cryptic Mothers

    PubMed Central

    Arnold, Jonathan; Morrison, Melvin L.

    1985-01-01

    In population studies, adults are frequently difficult or inconvenient to identify for genotype, but a family profile of genotypes can be obtained from an unidentified female crossed with a single unidentified male. The problem is to estimate an allele frequency in the cryptic parental gene pool from the observed family profiles. For example, a worker may wish to estimate inversion frequencies in Drosophila; inversion karyotypes are cryptic in adults but visible in salivary gland squashes from larvae. A simple mixture model, which assumes the Hardy-Weinberg law, Mendelian laws and a single randomly chosen mate per female, provides the vehicle for studying three competing estimators of an allele frequency. A simple, heuristically appealing estimator called the Dobzhansky estimator is compared with the maximum likelihood estimator and a close relative called the grouped profiles estimator. The Dobzhansky estimator is computationally simple, consistent and highly efficient and is recommended in practice over its competitors. PMID:17246258

  1. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR

    SciTech Connect

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  2. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes. PMID:16121722
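
    The probabilistic reading of the network outputs rests on the fact that a softmax output layer trained with cross-entropy approximates the posterior class probabilities P(class | x). The sketch below trains a plain single-layer softmax classifier to make that point; it is not the GSP architecture or the PPMS pruning/splitting/merging algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def softmax(a):
        e = np.exp(a - a.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Synthetic two-class data; the softmax outputs can be read as P(class | x).
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    Y = np.eye(2)[y]

    W = np.zeros((4, 2))
    b = np.zeros(2)
    for _ in range(2000):                                # plain gradient descent on cross-entropy
        P = softmax(X @ W + b)
        W -= 0.5 * (X.T @ (P - Y)) / len(X)
        b -= 0.5 * (P - Y).mean(axis=0)

    print("estimated P(class | x) for the first samples:")
    print(softmax(X[:3] @ W + b).round(3))
    ```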

  3. Empirical estimation of the conditional probability of natech events within the United States.

    PubMed

    Santella, Nicholas; Steinberg, Laura J; Aguirra, Gloria Andrea

    2011-06-01

    Natural disasters are the cause of a sizeable number of hazmat releases, referred to as "natechs." An enhanced understanding of natech probability, allowing for predictions of natech occurrence, is an important step in determining how industry and government should mitigate natech risk. This study quantifies the conditional probabilities of natechs at TRI/RMP and SICS 1311 facilities given the occurrence of hurricanes, earthquakes, tornadoes, and floods. During hurricanes, a higher probability of releases was observed due to storm surge (7.3 releases per 100 TRI/RMP facilities exposed vs. 6.2 for SIC 1311) compared to category 1-2 hurricane winds (5.6 TRI, 2.6 SIC 1311). Logistic regression confirms the statistical significance of the greater propensity for releases at RMP/TRI facilities, and during some hurricanes, when controlling for hazard zone. The probability of natechs at TRI/RMP facilities during earthquakes increased from 0.1 releases per 100 facilities at MMI V to 21.4 at MMI IX. The probability of a natech at TRI/RMP facilities within 25 miles of a tornado was small (∼0.025 per 100 facilities), reflecting the limited area directly affected by tornadoes. Areas inundated during flood events had a probability of 1.1 releases per 100 facilities but demonstrated widely varying natech occurrence during individual events, indicating that factors not quantified in this study such as flood depth and speed are important for predicting flood natechs. These results can inform natech risk analysis, aid government agencies responsible for planning response and remediation after natural disasters, and should be useful in raising awareness of natech risk within industry. PMID:21231945

  4. Combining earthquakes and GPS data to estimate the probability of future earthquakes with magnitude Mw ≥ 6.0

    NASA Astrophysics Data System (ADS)

    Chen, K.-P.; Tsai, Y.-B.; Chang, W.-Y.

    2013-10-01

    The results of Wyss et al. (2000) indicate that future main earthquakes can be expected along zones characterized by low b values. In this study we combine Benioff strain with global positioning system (GPS) data to estimate the probability of future Mw ≥ 6.0 earthquakes for a grid covering Taiwan. An approach similar to the maximum likelihood method was used to estimate the Gutenberg-Richter parameters a and b. The two parameters were then used to estimate the probability of future earthquakes of Mw ≥ 6.0 for each of the 391 grid cells (grid interval = 0.1°) covering Taiwan. The method shows a high probability of earthquakes in western Taiwan along a zone that extends from Taichung southward to Nantou, Chiayi, Tainan and Kaohsiung. In eastern Taiwan, there also exists a high-probability zone from Ilan southward to Hualian and Taitung. These zones are characterized by high earthquake entropy, high maximum shear strain rates, and paths of low b values. A relation between entropy and maximum shear strain rate is also obtained, indicating that the maximum shear strain rate is about 4.0 times the entropy. The results of this study should be of interest to city planners, especially those concerned with earthquake preparedness, and to earthquake insurers when setting basic premiums.
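
    Once a and b are estimated for a grid cell, the step from the Gutenberg-Richter law to an occurrence probability is short if earthquakes are treated as a Poisson process; the parameter values and time window below are illustrative, not those estimated for Taiwan.

    ```python
    import numpy as np

    # Illustrative Gutenberg-Richter parameters for one grid cell (not values from the study).
    a, b = 4.5, 0.9          # log10 N(>= M) = a - b * M, with N in events per year for the cell
    m_target = 6.0
    t_years = 30.0

    rate = 10 ** (a - b * m_target)                  # expected Mw >= 6.0 events per year
    p_occurrence = 1 - np.exp(-rate * t_years)       # Poisson probability of at least one event
    print(f"annual rate {rate:.4f}, P(at least one Mw>=6.0 in {t_years:.0f} yr) = {p_occurrence:.2f}")
    ```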

  5. Coupling of Realistic Rate Estimates with Genomics for Assessing Contaminant Attenuation and Long-Term Plume Containment

    SciTech Connect

    Sorenson, Kent S. Jr.

    2003-06-01

    Environmental Technology Site, and the Savannah River Site. Detailed characterization data from the promising plumes is being entered into our database as it is received. The next step is to calculate natural attenuation half-life values for all of these plumes. We will next identify the plumes in which natural attenuation via aerobic degradation of TCE is fast enough that it may be relevant as a component of a remedy. We will then select at least one of these sites and either modify an existing groundwater transport model or, if necessary, create a new model, for this plume. This model will initially include first-order decay of TCE, and degradation will be parameterized using the half-life values determined from the field data. The models will be used to simulate the evolution of the TCE plume and to predict concentrations as a function of time at property lines or other artificial boundaries, and where potential receptors are located. Ultimately rate data from the laboratory studies being performed at INEEL will be incorporated into this model, as well as the model of the TAN site to provide a realistic prediction of degradation rates and plume longevity. Although identifying suitable TCE plumes and obtaining characterization data has taken longer than expected, this process has successfully identified the plumes needed for the detailed modeling activity without adversely impacting the project budget.

  6. How does new evidence change our estimates of probabilities? Carnap's formula revisited

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris

    1992-01-01

    The formula originally proposed by R. Carnap in his analysis of induction is reviewed and its natural generalization is presented. A situation is considered where the probability of a certain event is determined without using standard statistical methods due to the lack of observation.
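
    The paper builds on Carnap's classical formula for updating a category probability after new observations. The sketch below implements only that classical lambda-continuum form, (n_i + lambda/k) / (n + lambda), as a point of reference; the generalization developed in the paper is not reproduced here, and the choice of lambda is the analyst's.

        def carnap_estimate(counts, i, lam=2.0):
            """Carnap's lambda-continuum of inductive methods: probability that
            the next observation falls in category i, given observed counts.
            lam equal to the number of categories recovers Laplace's rule of
            succession; lam -> 0 approaches the raw relative frequency."""
            n = sum(counts)
            k = len(counts)
            return (counts[i] + lam / k) / (n + lam)

        # Two categories, 7 of 10 observations in category 0, lam = 2 -> (7 + 1) / 12
        print(carnap_estimate([7, 3], i=0, lam=2.0))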

  7. Accurate Estimation of the Entropy of Rotation-Translation Probability Distributions.

    PubMed

    Fogolari, Federico; Dongmo Foumthuim, Cedrix Jurgal; Fortuna, Sara; Soler, Miguel Angel; Corazza, Alessandra; Esposito, Gennaro

    2016-01-12

    The estimation of rotational and translational entropies in the context of ligand binding has been the subject of long-standing investigation. The high dimensionality (six) of the problem and the limited amount of sampling often prevent the histogram method from reaching the resolution required for accurate estimates. Recently, the nearest-neighbor distance method has been applied to the problem, but the solutions provided either address rotation and translation separately, therefore lacking correlations, or use a heuristic approach. Here we address rotational-translational entropy estimation in the context of nearest-neighbor-based entropy estimation, solve the problem numerically, and provide an exact and an approximate method to estimate the full rotational-translational entropy. PMID:26605696
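
    For readers unfamiliar with nearest-neighbor entropy estimation, the sketch below shows the basic Kozachenko-Leonenko estimator for points in a flat Euclidean space. It is only the generic building block: the paper's contribution is the treatment of the combined rotation-translation space and its metric, which this sketch does not attempt.

        import numpy as np
        from scipy.special import digamma, gammaln
        from scipy.spatial import cKDTree

        def knn_entropy(samples, k=1):
            """Kozachenko-Leonenko k-nearest-neighbor differential entropy
            estimate (in nats) for points in R^d, assuming a flat Euclidean
            metric (hedged sketch, not the paper's rotation-translation case)."""
            x = np.asarray(samples, dtype=float)
            n, d = x.shape
            tree = cKDTree(x)
            r = tree.query(x, k=k + 1)[0][:, k]          # distance to k-th neighbor
            log_cd = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)   # log unit-ball volume
            return digamma(n) - digamma(k) + log_cd + d * np.mean(np.log(r + 1e-300))

        # Sanity check: 3D standard normal, analytic entropy 1.5*ln(2*pi*e) ~ 4.26 nats
        rng = np.random.default_rng(1)
        print(knn_entropy(rng.standard_normal((5000, 3))))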

  8. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value obtain it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  9. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  10. Use of portable antennas to estimate abundance of PIT-tagged fish in small streams: Factors affecting detection probability

    USGS Publications Warehouse

    O'Donnell, Matthew J.; Horton, Gregg E.; Letcher, Benjamin H.

    2010-01-01

    Portable passive integrated transponder (PIT) tag antenna systems can be valuable in providing reliable estimates of the abundance of tagged Atlantic salmon Salmo salar in small streams under a wide range of conditions. We developed and employed PIT tag antenna wand techniques in two controlled experiments and an additional case study to examine the factors that influenced our ability to estimate population size. We used Pollock's robust-design capture–mark–recapture model to obtain estimates of the probability of first detection (p), the probability of redetection (c), and abundance (N) in the two controlled experiments. First, we conducted an experiment in which tags were hidden in fixed locations. Although p and c varied among the three observers and among the three passes that each observer conducted, the estimates of N were identical to the true values and did not vary among observers. In the second experiment using free-swimming tagged fish, p and c varied among passes and time of day. Additionally, estimates of N varied between day and night and among age-classes but were within 10% of the true population size. In the case study, we used the Cormack–Jolly–Seber model to examine the variation in p, and we compared counts of tagged fish found with the antenna wand with counts collected via electrofishing. In that study, we found that although p varied for age-classes, sample dates, and time of day, antenna and electrofishing estimates of N were similar, indicating that population size can be reliably estimated via PIT tag antenna wands. However, factors such as the observer, time of day, age of fish, and stream discharge can influence the initial and subsequent detection probabilities.

  11. Comparing Two Different Methods to Evaluate Covariance Matrix of Debris Orbit State in Collision Probability Estimation

    NASA Astrophysics Data System (ADS)

    Cheng, Haowen; Liu, Jing; Xu, Yang

    The evaluation of the covariance matrix is an inevitable step when estimating collision probability based on the theory. Generally, there are two different methods to compute the covariance matrix. One is the so-called Tracking-Delta-Fitting method, first introduced for estimating collision probability from TLE catalogue data, in which the covariance matrix is evaluated by fitting a series of differences between orbits propagated from earlier data and the updated orbit data. In the second method, the covariance matrix is evaluated in the process of orbit determination. Both methods have their difficulties when applied to collision probability estimation. In the first method, the covariance matrix is evaluated based only on historical orbit data, ignoring information from the latest orbit determination. As a result, the accuracy of the method strongly depends on the stability of the covariance matrix of the latest updated orbit. In the second method, the evaluation of the covariance matrix is acceptable when the determined orbit satisfies the weighted-least-squares estimation; this depends on the accuracy of the observation error covariance, which is hard to obtain in real applications and which we evaluated by analyzing the residuals of orbit determination. In this paper we provide numerical tests to compare these two methods. A simulation of cataloguing objects in LEO, MEO and GEO regions has been carried out for a time span of 3 months. The influence of orbit maneuvers has been included in the GEO cataloguing simulation, and for LEO objects the effect of atmospheric density variation has also been considered. At the end of the paper, the accuracies of the evaluated covariance matrices and the estimated collision probabilities are tested and compared.

  12. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    NASA Astrophysics Data System (ADS)

    de Gregorio, Sofia; Camarda, Marco

    2016-07-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, the eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger the eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input is generally given the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.

  13. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, the eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger the eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input is generally given the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  14. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system; nevertheless, the eruptive activity occurs intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger the eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input is generally given the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  15. Estimating the probability distribution of von Mises stress for structures undergoing random excitation. Part 1: Derivation

    SciTech Connect

    Segalman, D.; Reese, G.

    1998-09-01

    The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. For deterministic loads, both static and dynamic, the calculation of von Mises stress is straightforward, as is the resulting calculation of reliability. For loads modeled as random processes, the task is different; the response to such loads is itself a random process and its properties must be determined in terms of those of both the loads and the system. This has been done in the past by Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the problem. Here, the authors present a method that provides analytic expressions for the probability distributions of von Mises stress which can be evaluated efficiently and with good precision numerically. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution.

  16. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Onar, Arzu; Bayliss, Jon; Ludwig, Larry

    2009-01-01

    In this experiment, an empirical model to quantify the probability of occurrence of an electrical short circuit from tin whiskers as a function of voltage was developed. This empirical model can be used to improve existing risk simulation models. FIB and TEM images of a tin whisker confirm the rare polycrystalline structure on one of the three whiskers studied. FIB cross-section of the card guides verified that the tin finish was bright tin.

  17. INCLUDING TRANSITION PROBABILITIES IN NEST SURVIVAL ESTIMATION: A MAYFIELD MARKOV CHAIN

    EPA Science Inventory

    This manuscript is primarily an exploration of the statistical properties of nest-survival estimates for terrestrial songbirds. The Mayfield formulation described herein should allow researchers to test for complicated effects of stressors on daily survival and overall success, i...

  18. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the

  19. Realistic loss estimation due to the mirror surfaces in a 10 meters-long high finesse Fabry-Perot filter-cavity.

    PubMed

    Straniero, Nicolas; Degallaix, Jérôme; Flaminio, Raffaele; Pinard, Laurent; Cagnoli, Gianpietro

    2015-08-10

    In order to benefit over the entire frequency range from the injection of squeezed vacuum light at the output of laser gravitational wave detectors, a small bandwidth high finesse cavity is required. In this paper, we investigate the light losses due to the flatness and the roughness of realistic mirrors in a 10 meters-long Fabry-Perot filter cavity. Using measurements of commercial super-polished mirrors, we were able to estimate the cavity round trip losses separating the loss contribution from low and high spatial frequencies. By careful tuning of the cavity g-factor and the incident position of the light on the mirrors, round trip losses due to imperfect mirror surfaces as low as 3 ppm can be achieved in the simulations. PMID:26367993

  20. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    SciTech Connect

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    2009-03-05

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for faults in a class of discrete nonlinear systems, using output probability density estimation, is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighted mean value is defined as an integral of the square root PDF along the spatial direction, which yields a function of time only and can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is then investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  1. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2015-06-01

    Riverbank erosion affects river morphology and local habitat and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a combined deterministic and statistical methodology is proposed to predict the probability of presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the Logistic Regression methodology. It is developed in two forms, Logistic Regression and Locally Weighted Logistic Regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed methodology is easy to use, accurate and can be applied to any region and river.
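
    As a minimal illustration of the logistic regression step, the sketch below fits a presence/absence erosion model and returns the erosion probability for a new river section. The predictors (bank slope, vegetation cover, a shear stress proxy) and the synthetic data are stand-ins of my own choosing, not the local variables used in the paper, and the locally weighted variant is not shown.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Synthetic training data with hypothetical predictors
        rng = np.random.default_rng(2)
        n = 200
        slope = rng.uniform(10, 60, n)            # bank slope, degrees
        veg = rng.uniform(0, 1, n)                # vegetation cover fraction
        shear = rng.lognormal(0.0, 0.5, n)        # near-bank shear stress proxy
        logit = -4.0 + 0.06 * slope - 2.0 * veg + 1.2 * shear
        erosion = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 1 = erosion observed

        X = np.column_stack([slope, veg, shear])
        model = LogisticRegression().fit(X, erosion)

        # Probability of erosion at a new river section
        new_section = np.array([[45.0, 0.2, 1.5]])
        print(model.predict_proba(new_section)[0, 1])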

  2. Development of a statistical tool for the estimation of riverbank erosion probability

    NASA Astrophysics Data System (ADS)

    Varouchakis, E. A.; Giannakis, G. V.; Lilli, M. A.; Ioannidou, E.; Nikolaidis, N. P.; Karatzas, G. P.

    2016-01-01

    Riverbank erosion affects river morphology and local habitat, and results in riparian land loss, property and infrastructure damage, and ultimately flood defence weakening. An important issue concerning riverbank erosion is the identification of the vulnerable areas in order to predict river changes and assist stream management/restoration. An approach to predict areas vulnerable to erosion is to quantify the erosion probability by identifying the underlying relations between riverbank erosion and geomorphological or hydrological variables that prevent or stimulate erosion. In the present work, a statistical methodology is proposed to predict the probability of the presence or absence of erosion in a river section. A physically based model determines the locations vulnerable to erosion by quantifying the potential eroded area. The derived results are used to determine validation locations for the evaluation of the statistical tool performance. The statistical tool is based on a series of independent local variables and employs the logistic regression methodology. It is developed in two forms, logistic regression and locally weighted logistic regression, which both deliver useful and accurate results. The second form, though, provides the most accurate results as it validates the presence or absence of erosion at all validation locations. The proposed tool is easy to use and accurate and can be applied to any region and river.

  3. EVALUATING PROBABILITY SAMPLING STRATEGIES FOR ESTIMATING REDD COUNTS: AN EXAMPLE WITH CHINOOK SALMON (Oncorhynchus tshawytscha)

    EPA Science Inventory

    Precise, unbiased estimates of population size are an essential tool for fisheries management. For a wide variety of salmonid fishes, redd counts from a sample of reaches are commonly used to monitor annual trends in abundance. Using a 9-year time series of georeferenced censuses...

  4. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  5. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  6. Estimating present day extreme water level exceedance probabilities around the coastline of Australia: tropical cyclone-induced storm surges

    NASA Astrophysics Data System (ADS)

    Haigh, Ivan D.; MacPherson, Leigh R.; Mason, Matthew S.; Wijeratne, E. M. S.; Pattiaratchi, Charitha B.; Crompton, Ryan P.; George, Steve

    2014-01-01

    The incidence of major storm surges in the last decade has dramatically emphasized the immense destructive capabilities of extreme water level events, particularly when driven by severe tropical cyclones. Given this risk, it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood and erosion management, engineering design and future land-use planning, and to ensure that the risks of catastrophic structural failure due to under-design, or of expensive waste due to over-design, are minimised. Australia has a long history of coastal flooding from tropical cyclones. Using a novel integration of two modeling techniques, this paper provides the first estimates of present day extreme water level exceedance probabilities around the whole coastline of Australia, and the first estimates that combine the influence of astronomical tides, storm surges generated by both extra-tropical and tropical cyclones, and seasonal and inter-annual variations in mean sea level. Initially, an analysis of tide gauge records has been used to assess the characteristics of tropical cyclone-induced surges around Australia. However, given the dearth (temporal and spatial) of information around much of the coastline, and therefore the inability of these gauge records to adequately describe the regional climatology, an observationally based stochastic tropical cyclone model has been developed to synthetically extend the tropical cyclone record to 10,000 years. Wind and pressure fields derived for these synthetically generated events have then been used to drive a hydrodynamic model of the Australian continental shelf region, with annual maximum water levels extracted to estimate exceedance probabilities around the coastline. To validate this methodology, selected historic storm surge events have been simulated and the resultant storm surges compared with gauge records. Tropical cyclone induced exceedance probabilities have been combined with

  7. Estimated Probability of Traumatic Abdominal Injury During an International Space Station Mission

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth E.; Brooker, John E.; Weavr, Aaron S.; Myers, Jerry G., Jr.; McRae, Michael P.

    2013-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to spaceflight mission planners and medical system designers when assessing risks and optimizing medical systems. The IMM project maintains a database of medical conditions that could occur during a spaceflight. The IMM project is in the process of assigning an incidence rate, the associated functional impairment, and a best and a worst case end state for each condition. The purpose of this work was to develop the IMM Abdominal Injury Module (AIM). The AIM calculates an incidence rate of traumatic abdominal injury per person-year of spaceflight on the International Space Station (ISS). The AIM was built so that the probability of traumatic abdominal injury during one year on ISS could be predicted. This result will be incorporated into the IMM Abdominal Injury Clinical Finding Form and used within the parent IMM model.

  8. Estimation of the probability of exposure to metalworking fluids in a population-based case-control study

    PubMed Central

    Park, Dong-Uk; Colt, Joanne S.; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R.; Armenti, Karla R.; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe here an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally, 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately, 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and the US production levels by decade found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. PMID:25256317

  9. Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias

    PubMed Central

    Howe, Chanelle J.; Cole, Stephen R.; Chmiel, Joan S.; Muñoz, Alvaro

    2011-01-01

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984–2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed. PMID:21289029

  10. A physically-based earthquake recurrence model for estimation of long-term earthquake probabilities

    USGS Publications Warehouse

    Ellsworth, William L.; Matthews, Mark V.; Nadeau, Robert M.; Nishenko, Stuart P.; Reasenberg, Paul A.; Simpson, Robert W.

    1999-01-01

    A physically-motivated model for earthquake recurrence based on the Brownian relaxation oscillator is introduced. The renewal process defining this point process model can be described by the steady rise of a state variable from the ground state to failure threshold as modulated by Brownian motion. Failure times in this model follow the Brownian passage time (BPT) distribution, which is specified by the mean time to failure, μ, and the aperiodicity of the mean, α (equivalent to the familiar coefficient of variation). Analysis of 37 series of recurrent earthquakes, M -0.7 to 9.2, suggests a provisional generic value of α = 0.5. For this value of α, the hazard function (instantaneous failure rate of survivors) exceeds the mean rate for times > μ/2, and is approximately 2/μ for all times > μ. Application of this model to the next M 6 earthquake on the San Andreas fault at Parkfield, California suggests that the annual probability of the earthquake is between 1:10 and 1:13.
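
    The BPT distribution is an inverse Gaussian with mean μ and shape parameter μ/α², so time-dependent probabilities can be computed with standard libraries. The sketch below evaluates the conditional probability of failure within a forecast window given the time already elapsed since the last event; the numerical inputs are illustrative only and are not the Parkfield values used in the paper.

        import numpy as np
        from scipy.stats import invgauss

        def bpt_conditional_probability(mu, alpha, elapsed, window):
            """Probability that a BPT(mu, alpha) renewal process fails within
            `window` years, given `elapsed` years since the last event.
            BPT(mu, alpha) equals an inverse Gaussian with mean mu and shape
            lam = mu / alpha**2; scipy's invgauss(mu/lam, scale=lam) matches it."""
            lam = mu / alpha ** 2
            cdf = invgauss(mu / lam, scale=lam).cdf
            return (cdf(elapsed + window) - cdf(elapsed)) / (1.0 - cdf(elapsed))

        # Illustrative numbers: mean recurrence 25 yr, aperiodicity 0.5,
        # 20 yr elapsed since the last event, 30-yr forecast window.
        print(bpt_conditional_probability(mu=25.0, alpha=0.5, elapsed=20.0, window=30.0))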

  11. Student Estimates of Probability and Uncertainty in Advanced Laboratory and Statistical Physics Courses

    NASA Astrophysics Data System (ADS)

    Mountcastle, Donald B.; Bucy, Brandon R.; Thompson, John R.

    2007-11-01

    Equilibrium properties of macroscopic systems are highly predictable as n, the number of particles, approaches and exceeds Avogadro's number; theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity (ω) (where S = k ln(ω)) include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our statistical mechanics students usually gave reasonable answers about the probabilities, but not the relative uncertainties, of the predicted outcomes of such events. However, they reliably predicted that the uncertainty in a measured continuous quantity (e.g., the amount of rainfall) does decrease as the number of measurements increases. Typical textbook presentations assume that students understand that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. This is at odds with our findings, even though most of our students had previously completed mathematics courses in statistics, as well as an advanced electronics laboratory course that included statistical analysis of distributions of dart scores as n increased.

  12. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    PubMed Central

    He, Bin; Du, Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-01-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT∕CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of

  13. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake

    SciTech Connect

    He Bin; Du Yong; Segars, W. Paul; Wahl, Richard L.; Sgouros, George; Jacene, Heather; Frey, Eric C.

    2009-02-15

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the

  14. Evaluation of quantitative imaging methods for organ activity and residence time estimation using a population of phantoms having realistic variations in anatomy and uptake.

    PubMed

    He, Bin; Du, Yong; Segars, W Paul; Wahl, Richard L; Sgouros, George; Jacene, Heather; Frey, Eric C

    2009-02-01

    Estimating organ residence times is an essential part of patient-specific dosimetry for radioimmunotherapy (RIT). Quantitative imaging methods for RIT are often evaluated using a single physical or simulated phantom but are intended to be applied clinically where there is variability in patient anatomy, biodistribution, and biokinetics. To provide a more relevant evaluation, the authors have thus developed a population of phantoms with realistic variations in these factors and applied it to the evaluation of quantitative imaging methods both to find the best method and to demonstrate the effects of these variations. Using whole body scans and SPECT/CT images, organ shapes and time-activity curves of 111In ibritumomab tiuxetan were measured in dosimetrically important organs in seven patients undergoing a high dose therapy regimen. Based on these measurements, we created a 3D NURBS-based cardiac-torso (NCAT)-based phantom population. SPECT and planar data at realistic count levels were then simulated using previously validated Monte Carlo simulation tools. The projections from the population were used to evaluate the accuracy and variation in accuracy of residence time estimation methods that used a time series of SPECT and planar scans. Quantitative SPECT (QSPECT) reconstruction methods were used that compensated for attenuation, scatter, and the collimator-detector response. Planar images were processed with a conventional (CPlanar) method that used geometric mean attenuation and triple-energy window scatter compensation and a quantitative planar (QPlanar) processing method that used model-based compensation for image degrading effects. Residence times were estimated from activity estimates made at each of five time points. The authors also evaluated hybrid methods that used CPlanar or QPlanar time-activity curves rescaled to the activity estimated from a single QSPECT image. The methods were evaluated in terms of mean relative error and standard deviation of the

  15. An Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers-Part I

    NASA Technical Reports Server (NTRS)

    Courey, Karim; Wright, Clara; Asfour, Shihab; Bayliss, Jon; Ludwig, Larry

    2008-01-01

    Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that has a currently unknown probability associated with it. Due to contact resistance, electrical shorts may not occur at lower voltage levels. In this experiment, we study the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From this data, we can estimate the probability of an electrical short, as a function of voltage, given that a free tin whisker has bridged two adjacent exposed electrical conductors. In addition, three tin whiskers grown from the same Space Shuttle Orbiter card guide used in the aforementioned experiment were cross sectioned and studied using a focused ion beam (FIB).

  16. Estimated probability of postwildfire debris flows in the 2012 Whitewater-Baldy Fire burn area, southwestern New Mexico

    USGS Publications Warehouse

    Tillery, Anne C.; Matherne, Anne Marie; Verdin, Kristine L.

    2012-01-01

    In May and June 2012, the Whitewater-Baldy Fire burned approximately 1,200 square kilometers (300,000 acres) of the Gila National Forest, in southwestern New Mexico. The burned landscape is now at risk of damage from postwildfire erosion, such as that caused by debris flows and flash floods. This report presents a preliminary hazard assessment of the debris-flow potential from 128 basins burned by the Whitewater-Baldy Fire. A pair of empirical hazard-assessment models developed by using data from recently burned basins throughout the intermountain Western United States was used to estimate the probability of debris-flow occurrence and volume of debris flows along the burned area drainage network and for selected drainage basins within the burned area. The models incorporate measures of areal burned extent and severity, topography, soils, and storm rainfall intensity to estimate the probability and volume of debris flows following the fire. In response to the 2-year-recurrence, 30-minute-duration rainfall, modeling indicated that four basins have high probabilities of debris-flow occurrence (greater than or equal to 80 percent). For the 10-year-recurrence, 30-minute-duration rainfall, an additional 14 basins are included, and for the 25-year-recurrence, 30-minute-duration rainfall, an additional eight basins, 20 percent of the total, have high probabilities of debris-flow occurrence. In addition, probability analysis along the stream segments can identify specific reaches of greatest concern for debris flows within a basin. Basins with a high probability of debris-flow occurrence were concentrated in the west and central parts of the burned area, including tributaries to Whitewater Creek, Mineral Creek, and Willow Creek. Estimated debris-flow volumes ranged from about 3,000-4,000 cubic meters (m3) to greater than 500,000 m3 for all design storms modeled. Drainage basins with estimated volumes greater than 500,000 m3 included tributaries to Whitewater Creek, Willow

  17. A hierarchical model combining distance sampling and time removal to estimate detection probability during avian point counts

    USGS Publications Warehouse

    Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.

    2014-01-01

    Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point

  18. Developing an Empirical Model for Estimating the Probability of Electrical Short Circuits from Tin Whiskers. Part 2

    NASA Technical Reports Server (NTRS)

    Courey, Karim J.; Asfour, Shihab S.; Onar, Arzu; Bayliss, Jon A.; Ludwig, Larry L.; Wright, Maria C.

    2009-01-01

    To comply with lead-free legislation, many manufacturers have converted from tin-lead to pure tin finishes of electronic components. However, pure tin finishes have a greater propensity to grow tin whiskers than tin-lead finishes. Since tin whiskers present an electrical short circuit hazard in electronic components, simulations have been developed to quantify the risk of said short circuits occurring. Existing risk simulations make the assumption that when a free tin whisker has bridged two adjacent exposed electrical conductors, the result is an electrical short circuit. This conservative assumption is made because shorting is a random event that had an unknown probability associated with it. Note however that due to contact resistance electrical shorts may not occur at lower voltage levels. In our first article we developed an empirical probability model for tin whisker shorting. In this paper, we develop a more comprehensive empirical model using a refined experiment with a larger sample size, in which we studied the effect of varying voltage on the breakdown of the contact resistance which leads to a short circuit. From the resulting data we estimated the probability distribution of an electrical short, as a function of voltage. In addition, the unexpected polycrystalline structure seen in the focused ion beam (FIB) cross section in the first experiment was confirmed in this experiment using transmission electron microscopy (TEM). The FIB was also used to cross section two card guides to facilitate the measurement of the grain size of each card guide's tin plating to determine its finish.

  19. Spinodal Decomposition for the Cahn-Hilliard Equation in Higher Dimensions. Part I: Probability and Wavelength Estimate

    NASA Astrophysics Data System (ADS)

    Maier-Paape, Stanislaus; Wanner, Thomas

    This paper is the first in a series of two papers addressing the phenomenon of spinodal decomposition for the Cahn-Hilliard equation on a bounded domain Ω ⊂ R^n with sufficiently smooth boundary, where f is cubic-like, for example f(u) = u - u³. We will present the main ideas of our approach and explain in what way our method differs from known results in one space dimension due to Grant [26]. Furthermore, we derive certain probability and wavelength estimates. The probability estimate is needed to understand why, in a neighborhood of a homogeneous equilibrium u0 ≡ μ of the Cahn-Hilliard equation with mass μ in the spinodal region, a strongly unstable manifold has dominating effects. This is demonstrated for the linearized equation, but will be essential for the nonlinear setting in the second paper [37] as well. Moreover, we introduce the notion of a characteristic wavelength for the strongly unstable directions.

  20. Estimation of reliability and dynamic property for polymeric material at high strain rate using SHPB technique and probability theory

    NASA Astrophysics Data System (ADS)

    Kim, Dong Hyeok; Lee, Ouk Sub; Kim, Hong Min; Choi, Hye Bin

    2008-11-01

    A modified split Hopkinson pressure bar (SHPB) technique with aluminum pressure bars and a pulse shaper was used to achieve a closer impedance match between the pressure bars and specimen materials such as high-temperature degraded POM (polyoxymethylene) and PP (polypropylene). More distinguishable experimental signals were obtained, allowing a more accurate evaluation of the dynamic deformation behavior of the materials under high strain rate loading. The pulse shaping technique reduces non-equilibrium effects in the dynamic material response by modulating the incident wave during the short test period; this increases the rise time of the incident pulse in the SHPB experiment. For the dynamic stress-strain curve obtained from the SHPB experiment, the Johnson-Cook model is applied as a constitutive equation. The applicability of this constitutive equation is verified using a probabilistic reliability estimation method. Two reliability methodologies, the FORM and the SORM, have been employed. The limit state function (LSF) includes the Johnson-Cook model and the applied stresses, and allows more statistical flexibility in the yield stress than previously published work. It is found that the failure probability estimated by the SORM is more reliable than that of the FORM, and that the failure probability increases as the applied stress increases. According to the sensitivity analysis, the Johnson-Cook parameters A and n and the applied stress affect the failure probability more severely than the other random variables.
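
    For readers unfamiliar with FORM, the sketch below shows the method in its simplest closed-form case: a linear limit state g = R - S with independent normal resistance and load, where the reliability index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2) and the failure probability is Phi(-beta). The numbers are illustrative; the paper's limit state function embeds the Johnson-Cook model and in general requires an iterative FORM/SORM search rather than this closed form.

        import numpy as np
        from scipy.stats import norm

        def form_failure_probability(mu_r, sigma_r, mu_s, sigma_s):
            """FORM for the linear limit state g = R - S with independent normal
            resistance R and load S (hedged sketch of the simplest case only)."""
            beta = (mu_r - mu_s) / np.sqrt(sigma_r ** 2 + sigma_s ** 2)
            return beta, norm.cdf(-beta)     # reliability index and failure probability

        # Illustrative values: yield stress 90 +/- 9 MPa vs applied stress 60 +/- 12 MPa
        print(form_failure_probability(90.0, 9.0, 60.0, 12.0))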

  1. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional case. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
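
    The sketch below illustrates the effect the paper warns about, using a toy one-dimensional example: the p-value attached to a "3 sigma" matched-filter peak is far smaller when the signal position is assumed known than when the maximum of the filter output over all trial positions is what is actually being thresholded. It uses a brute-force Monte Carlo over white Gaussian noise rather than the analytic peak-PDF approach developed in the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n, width = 1024, 5.0
        x = np.arange(n)
        template = np.exp(-0.5 * ((x - n // 2) / width) ** 2)
        template /= np.linalg.norm(template)            # unit-norm matched filter

        def mf_max(noise):
            # Matched-filter statistic maximized over trial positions (coarse grid)
            return max(np.dot(np.roll(template, k - n // 2), noise) for k in range(0, n, 8))

        peak_stat = 3.0                                 # a "3 sigma" candidate detection
        p_known_position = norm.sf(peak_stat)           # standard MF p-value, position known
        trials = 500
        p_unknown_position = np.mean(
            [mf_max(rng.standard_normal(n)) >= peak_stat for _ in range(trials)]
        )
        print(p_known_position, p_unknown_position)     # the second is much larger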

  2. Bayesian Estimates of Transition Probabilities in Seven Small Lithophytic Orchid Populations: Maximizing Data Availability from Many Small Samples

    PubMed Central

    Tremblay, Raymond L.; McCarthy, Michael A.

    2014-01-01

    Predicting population dynamics for rare species is of paramount importance in order to evaluate the likelihood of extinction and to plan conservation strategies. However, evaluating and predicting population viability can be hindered by a lack of data. Rare species frequently have small populations, so estimates of vital rates are often very uncertain. We evaluated the vital rates of seven small populations of a common epiphytic orchid from two watersheds with varying light environments, using Bayesian methods of parameter estimation. From the Lefkovitch matrices we predicted the deterministic population growth rates, elasticities, stable stage distributions and the credible intervals of these statistics. Populations were surveyed on a monthly basis for between 18 and 34 months. In some of the populations few or no transitions in some of the vital rates were observed throughout the sampling period; however, we were able to predict the most likely vital rates using a Bayesian model that incorporated the transition rates from the other populations. Asymptotic population growth rate varied among the seven orchid populations. There was little difference in population growth rate among watersheds, even though differences were expected because of the differing canopy cover and watershed width. Elasticity analyses of Lepanthes rupestris suggest that growth rate is most sensitive to survival, followed by growth, shrinking and the reproductive rates. The Bayesian approach helped to estimate transition probabilities that were uncommon or variable in some populations. Moreover, it increased the precision of the parameter estimates as compared to traditional approaches. PMID:25068598
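
    The sketch below shows, in a deliberately simplified form, how pooling information across populations stabilizes a transition probability that a single small population cannot estimate on its own: the pooled frequency from the other populations is turned into a Beta prior and updated with the focal population's counts. This is an empirical-Bayes shortcut with an assumed prior weight, not the full hierarchical Bayesian model fitted in the paper, and the counts are invented for illustration.

        import numpy as np
        from scipy.stats import beta

        def pooled_transition_estimate(focal_trans, focal_total, other_counts, prior_weight=10.0):
            """Empirical-Bayes sketch: build a Beta prior from the pooled transition
            frequency of the other populations (with an assumed effective sample
            size `prior_weight`), then update with the focal population's counts."""
            pooled = sum(t for t, _ in other_counts) / sum(n for _, n in other_counts)
            a0, b0 = prior_weight * pooled, prior_weight * (1.0 - pooled)
            posterior = beta(a0 + focal_trans, b0 + focal_total - focal_trans)
            return posterior.mean(), posterior.interval(0.95)   # estimate and 95% credible interval

        # Focal population: 0 observed transitions out of 12 plants; five other
        # populations (transitions, plants) inform the prior.
        others = [(3, 40), (5, 55), (2, 30), (4, 48), (1, 25)]
        print(pooled_transition_estimate(0, 12, others))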

  3. Estimation of the probability of exposure to machining fluids in a population-based case-control study.

    PubMed

    Park, Dong-Uk; Colt, Joanne S; Baris, Dalsu; Schwenn, Molly; Karagas, Margaret R; Armenti, Karla R; Johnson, Alison; Silverman, Debra T; Stewart, Patricia A

    2014-01-01

    We describe an approach for estimating the probability that study subjects were exposed to metalworking fluids (MWFs) in a population-based case-control study of bladder cancer. Study subject reports on the frequency of machining and use of specific MWFs (straight, soluble, and synthetic/semi-synthetic) were used to estimate exposure probability when available. Those reports also were used to develop estimates for job groups, which were then applied to jobs without MWF reports. Estimates using both cases and controls and controls only were developed. The prevalence of machining varied substantially across job groups (10-90%), with the greatest percentage of jobs that machined being reported by machinists and tool and die workers. Reports of straight and soluble MWF use were fairly consistent across job groups (generally 50-70%). Synthetic MWF use was lower (13-45%). There was little difference in reports by cases and controls vs. controls only. Approximately 1% of the entire study population was assessed as definitely exposed to straight or soluble fluids in contrast to 0.2% definitely exposed to synthetic/semi-synthetics. A comparison between the reported use of the MWFs and U.S. production levels found high correlations (r generally >0.7). Overall, the method described here is likely to have provided a systematic and reliable ranking that better reflects the variability of exposure to three types of MWFs than approaches applied in the past. [Supplementary materials are available for this article. Go to the publisher's online edition of Journal of Occupational and Environmental Hygiene for the following free supplemental resources: a list of keywords in the occupational histories that were used to link study subjects to the metalworking fluids (MWFs) modules; recommendations from the literature on selection of MWFs based on type of machining operation, the metal being machined and decade; popular additives to MWFs; the number and proportion of controls who

  4. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
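
    As a rough illustration of the distribution fitting behind such estimates, the sketch below fits a log-Pearson Type III distribution to hypothetical annual peak discharges by simple method of moments and evaluates the listed annual exceedance probabilities; it does not implement the expected moments algorithm, the regional skew weighting, or the Grubbs-Beck screening used in the study.

```python
# Sketch: a method-of-moments log-Pearson Type III fit to annual peak
# discharges and the resulting annual exceedance-probability quantiles.
# Discharge values are hypothetical.
import numpy as np
from scipy import stats

peaks = np.array([4200., 3100., 6800., 2500., 5100., 7600.,
                  2900., 3900., 8800., 4400., 3300., 6100.])  # hypothetical, cfs

logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)          # station skew only (no regional weighting)

aep = np.array([0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002])
quantiles = 10 ** stats.pearson3.ppf(1 - aep, skew, loc=mean, scale=std)

for p, q in zip(aep, quantiles):
    print(f"{p * 100:5.1f}% AEP  ->  {q:9.0f} cfs")
```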

  5. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC provides bitrate savings of 0.7% to 4.5% compared to the original CABAC algorithm. The proposed algorithm only marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction in decoding time while still maintaining high data-compression efficiency.
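
    The probability-estimation building block inside context-tree weighting is the Krichevsky-Trofimov (KT) estimator, which each tree node uses to turn symbol counts into a conditional probability. The stand-alone sketch below shows only that building block on a made-up bit stream; it is not the CABAC engine or the authors' full CTW integration.

```python
# Sketch: the Krichevsky-Trofimov (KT, "add one half") estimator that underlies
# context-tree weighting, shown as a stand-alone adaptive estimate of a binary
# symbol probability.
def kt_probability(zeros: int, ones: int, next_bit: int) -> float:
    """P(next_bit | counts) under the KT rule."""
    count = ones if next_bit == 1 else zeros
    return (count + 0.5) / (zeros + ones + 1.0)

# Feed a made-up bit stream and watch the estimate adapt.
stream = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
zeros = ones = 0
for bit in stream:
    p = kt_probability(zeros, ones, bit)
    print(f"bit={bit}  estimated P={p:.3f}")
    if bit == 1:
        ones += 1
    else:
        zeros += 1
```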

  6. Effects of river reach discretization on the estimation of the probability of levee failure owing to piping

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Maurizio; Brandimarte, Luigia; Barontini, Stefano; Ranzi, Roberto

    2014-05-01

    Over the centuries many societies have chosen to settle near floodplains to take advantage of the favorable environmental conditions. Due to changing hydro-meteorological conditions, levee systems along rivers have been raised over time to protect urbanized areas and reduce the impact of floods. As expressed by the so-called "levee paradox", many societies tend to trust these levee protection systems because of an induced sense of safety and, as a consequence, invest even more in urban development in levee-protected flood-prone areas. As a result, and given the world's growing population, the number of people living in floodplains is increasing. However, human settlements in floodplains are not totally safe and have been continuously endangered by the risk of flooding. In fact, failures of levee systems during flood events have produced some of the most devastating disasters of the last two centuries because of the exposure of developed flood-prone areas to risk. In those cases, property damage is certain, but loss of life can vary dramatically with the extent of the inundation area, the size of the population at risk, and the amount of warning time available. The aim of this study is to propose an innovative methodology to estimate the reliability of a general river levee system in case of piping, considering different sources of uncertainty, and to analyze the influence of different discretizations of the river reach into sub-reaches on the evaluation of the probability of failure. The reliability analysis, expressed in terms of a fragility curve, was performed by evaluating the probability of failure conditioned on a given hydraulic load for a certain levee failure mechanism, using Monte Carlo and First Order Reliability Methods. Knowing the fragility curve for each discrete levee reach, different fragility indexes were introduced. Using this information it was then possible to classify the river into sub
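
    A minimal sketch of the Monte Carlo fragility-curve idea: for each river stage, draw random resistance parameters, evaluate a piping limit state, and record the fraction of failures. The Bligh-type creep rule and all parameter values below are placeholders, not the failure model or data of the study, and the First Order Reliability Method variant is not shown.

```python
# Sketch: Monte Carlo fragility curve for a highly simplified piping limit
# state. Resistance is a critical head difference H_c = L / C_creep with random
# seepage-path length L and creep coefficient C_creep (hypothetical values).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def failure_probability(water_level_m: float) -> float:
    seepage_length = rng.normal(60.0, 10.0, n)           # m, hypothetical
    creep_coeff = rng.lognormal(np.log(15.0), 0.2, n)    # hypothetical Bligh coefficient
    h_critical = seepage_length / creep_coeff            # random resistance
    h_load = water_level_m - 2.0                         # head difference; 2 m landside level
    return np.mean(h_load > h_critical)                  # conditional probability of failure

for stage in np.arange(3.0, 9.5, 0.5):
    print(f"stage {stage:4.1f} m  ->  P(failure | load) = {failure_probability(stage):.4f}")
```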

  7. CFD modelling of most probable bubble nucleation rate from binary mixture with estimation of components' mole fraction in critical cluster

    NASA Astrophysics Data System (ADS)

    Hong, Ban Zhen; Keong, Lau Kok; Shariff, Azmi Mohd

    2016-05-01

    Separate mathematical models are needed to address the bubble nucleation rates of water vapour and of dissolved air molecules, because the physics by which they form bubble nuclei is different. The available methods for calculating the bubble nucleation rate in a binary mixture, such as density functional theory, are complicated to couple with a computational fluid dynamics (CFD) approach. In addition, the effect of dissolved gas concentration was neglected in most studies predicting bubble nucleation rates. In the current work, the most probable bubble nucleation rate for the water vapour and dissolved air mixture in a 2D quasi-stable flow across a cavitating nozzle was estimated via the statistical mean of all possible bubble nucleation rates of the mixture (different mole fractions of water vapour and dissolved air) and the corresponding number of molecules in the critical cluster. Theoretically, the bubble nucleation rate depends strongly on the components' mole fractions in a critical cluster; hence, the effect of dissolved gas concentration was included in the current work. In addition, the possible bubble nucleation rates were predicted based on the calculated number of molecules required to form a critical cluster. The components' mole fractions in the critical cluster for the water vapour and dissolved air mixture were estimated by coupling the enhanced classical nucleation theory with the CFD approach. Finally, the distribution of bubble nuclei of the water vapour and dissolved air mixture could be predicted via a population balance model.

  8. Potential confounds in estimating trial-to-trial correlations between neuronal response and behavior using choice probabilities

    PubMed Central

    Maunsell, John H. R.

    2012-01-01

    Correlations between trial-to-trial fluctuations in the responses of individual sensory neurons and perceptual reports, commonly quantified with choice probability (CP), have been widely used as an important tool for assessing the contributions of neurons to behavior. These correlations are usually weak and often require a large number of trials for a reliable estimate. Therefore, working with measures such as CP warrants care in data analysis as well as rigorous controls during data collection. Here we identify potential confounds that can arise in data analysis and lead to biased estimates of CP, and suggest methods to avoid the bias. In particular, we show that the common practice of combining neuronal responses across different stimulus conditions with z-score normalization can result in an underestimation of CP when the ratio of the numbers of trials for the two behavioral response categories differs across the stimulus conditions. We also discuss the effects of using variable time intervals for quantifying neuronal response on CP measurements. Finally, we demonstrate that serious artifacts can arise in reaction time tasks that use varying measurement intervals if the mean neuronal response and mean behavioral performance vary over time within trials. To emphasize the importance of addressing these concerns in neurophysiological data, we present a set of data collected from V1 cells in macaque monkeys while the animals performed a detection task. PMID:22993262
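
    For a single stimulus condition, choice probability reduces to the area under the ROC comparing spike-count distributions grouped by the animal's choice, which equals a scaled Mann-Whitney U statistic. The sketch below uses synthetic spike counts and works within one condition, thereby avoiding the cross-condition z-scoring bias described above.

```python
# Sketch: choice probability (CP) for one stimulus condition, computed as the
# ROC area between spike-count distributions sorted by choice. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def choice_probability(counts_choice_pref, counts_choice_null) -> float:
    # CP equals the Mann-Whitney U statistic scaled by the product of sample sizes.
    u, _ = stats.mannwhitneyu(counts_choice_pref, counts_choice_null,
                              alternative="two-sided")
    return u / (len(counts_choice_pref) * len(counts_choice_null))

counts_pref = rng.poisson(12, size=40)   # trials ending in the "preferred" choice
counts_null = rng.poisson(10, size=25)   # trials ending in the other choice
print(f"CP = {choice_probability(counts_pref, counts_null):.3f}")
```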

  9. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  10. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-01-01

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization

  11. Probability Distribution Estimated From the Minimum, Maximum, and Most Likely Values: Applied to Turbine Inlet Temperature Uncertainty

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.

    2004-01-01

    Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal
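
    A sketch of the conventional PERT-style fit referred to above, in which the mean is taken as (min + 4*mode + max)/6, the standard deviation as one sixth of the range, and the beta shape parameters follow by moment matching. The turbine inlet temperature values are made up, the one-sixth convention is only the most common choice of "fraction of the range," and the NASA Glenn in-house method is not reproduced.

```python
# Sketch: PERT-style beta distribution from minimum, most likely, and maximum
# values, assuming the standard deviation is one sixth of the range.
import numpy as np
from scipy import stats

a, m, b = 1400.0, 1500.0, 1650.0     # hypothetical turbine inlet temperatures, K

mean = (a + 4.0 * m + b) / 6.0       # PERT mean
std = (b - a) / 6.0                  # std assumed to be 1/6 of the range

# Moment matching for the beta shape parameters alpha, beta on [a, b].
mu = (mean - a) / (b - a)            # mean rescaled to [0, 1]
var = (std / (b - a)) ** 2
common = mu * (1.0 - mu) / var - 1.0
alpha, beta = mu * common, (1.0 - mu) * common

dist = stats.beta(alpha, beta, loc=a, scale=b - a)
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
print(f"mean = {dist.mean():.1f} K, std = {dist.std():.1f} K")
print(f"95th percentile = {dist.ppf(0.95):.1f} K")
```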

  12. Model Assembly for Estimating Cell Surviving Fraction for Both Targeted and Nontargeted Effects Based on Microdosimetric Probability Densities

    PubMed Central

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    Here we propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of the anti-apoptotic protein Bcl-2, known to occur frequently in human cancer, was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeams or broadbeams of energetic heavy ions, as well as WI-38 normal human fibroblasts irradiated with an X-ray microbeam. The model assembly reproduced the experimentally determined surviving fraction very well over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth incorporating into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given the critical roles of frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  13. Model assembly for estimating cell surviving fraction for both targeted and nontargeted effects based on microdosimetric probability densities.

    PubMed

    Sato, Tatsuhiko; Hamada, Nobuyuki

    2014-01-01

    Here we propose a new model assembly for estimating the surviving fraction of cells irradiated with various types of ionizing radiation, considering both targeted and nontargeted effects in the same framework. The probability densities of specific energies in two scales, which are the cell nucleus and its substructure called a domain, were employed as the physical index for characterizing the radiation fields. In the model assembly, our previously established double stochastic microdosimetric kinetic (DSMK) model was used to express the targeted effect, whereas a newly developed model was used to express the nontargeted effect. The radioresistance caused by overexpression of the anti-apoptotic protein Bcl-2, known to occur frequently in human cancer, was also considered by introducing the concept of the adaptive response in the DSMK model. The accuracy of the model assembly was examined by comparing the computationally and experimentally determined surviving fraction of Bcl-2 cells (Bcl-2 overexpressing HeLa cells) and Neo cells (neomycin resistant gene-expressing HeLa cells) irradiated with microbeams or broadbeams of energetic heavy ions, as well as WI-38 normal human fibroblasts irradiated with an X-ray microbeam. The model assembly reproduced the experimentally determined surviving fraction very well over a wide range of dose and linear energy transfer (LET) values. Our newly established model assembly will be worth incorporating into treatment planning systems for heavy-ion therapy, brachytherapy, and boron neutron capture therapy, given the critical roles of frequent Bcl-2 overexpression and the nontargeted effect in estimating therapeutic outcomes and harmful effects of such advanced therapeutic modalities. PMID:25426641

  14. Maximum likelihood estimation of label imperfection probabilities and its use in the identification of mislabeled patterns. [with application to Landsat MSS data processing

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1980-01-01

    The estimation of label imperfections and the use of these estimates in the identification of mislabeled patterns are discussed. Expressions are presented for the asymptotic variances of the probability of correct classification and proportion, and for the maximum likelihood estimates of classification errors and a priori probabilities. Models are developed for imperfections in the labels and classification errors, and expressions are derived for the probability of imperfect label identification schemes resulting in wrong decisions. The expressions are used in computing thresholds, and the techniques are given practical applications. The imperfect label identification scheme in the multiclass case is found to amount to establishing a region around each decision surface, and the decisions of the label correction scheme are found to be in close agreement with the analyst-interpreter interpretations of the imagery films. As an example, the application of the maximum likelihood estimation to the processing of Landsat MSS data is discussed.

  15. Estimating debris-flow probability using fan stratigraphy, historic records, and drainage-basin morphology, Interstate 70 highway corridor, central Colorado, U.S.A

    USGS Publications Warehouse

    Coe, J.A.; Godt, J.W.; Parise, M.; Moscariello, A.

    2003-01-01

    We have used stratigraphic and historic records of debris flows to estimate mean recurrence intervals of past debris-flow events on 19 fans along the Interstate 70 highway corridor in the Front Range of Colorado. Estimated mean recurrence intervals were used in the Poisson probability model to estimate the probability of future debris-flow events on the fans. Mean recurrence intervals range from 7 to about 2900 years. Annual probabilities range from less than 0.1% to about 13%. A regression analysis of mean recurrence interval data and drainage-basin morphometry yields a regression model that may be suitable for estimating mean recurrence intervals on fans with no stratigraphic or historic records. Additional work is needed to verify this model. © 2003 Millpress.
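
    The Poisson step is simple enough to show directly: for a mean recurrence interval T, the probability of at least one debris flow in a horizon of t years is 1 - exp(-t/T). The recurrence intervals below are placeholders spanning the reported 7 to roughly 2900 year range, not values for specific fans.

```python
# Sketch: Poisson probability of at least one debris flow given a mean
# recurrence interval. Recurrence intervals are illustrative placeholders.
import math

def poisson_event_probability(mean_recurrence_yr: float, horizon_yr: float = 1.0) -> float:
    """Probability of at least one event in `horizon_yr` years."""
    return 1.0 - math.exp(-horizon_yr / mean_recurrence_yr)

for T in (7.0, 50.0, 300.0, 2900.0):
    p1 = poisson_event_probability(T, 1.0)
    p50 = poisson_event_probability(T, 50.0)
    print(f"T = {T:6.0f} yr   annual P = {p1:7.3%}   50-yr P = {p50:7.3%}")
```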

  16. Developing a Methodology for Eliciting Subjective Probability Estimates During Expert Evaluations of Safety Interventions: Application for Bayesian Belief Networks

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.

    2005-01-01

    The NASA Aviation Safety Program (AvSP) has defined several products that will potentially modify airline and/or ATC operations, enhance aircraft systems, and improve the identification of potentially hazardous situations within the National Airspace System (NAS). Consequently, there is a need to develop methods for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested, including methods for eliciting the expert judgments used to develop Bayesian Belief Networks (BBNs) that model the potential impact that specific interventions may have. Specifically, the present report summarizes methodologies for improving the elicitation of probability estimates during expert evaluations of AvSP products for use in BBNs. The work involved joint efforts between Professor James Luxhoj from Rutgers University and researchers at the University of Illinois. The Rutgers project to develop BBNs was funded by NASA under the title "Probabilistic Decision Support for Evaluating Technology Insertion and Assessing Aviation Safety System Risk." The proposed project was funded separately but supported the existing Rutgers program.

  17. Aftershocks hazard in Italy Part I: Estimation of time-magnitude distribution model parameters and computation of probabilities of occurrence

    NASA Astrophysics Data System (ADS)

    Lolli, Barbara; Gasperini, Paolo

    We analyzed the available instrumental data on Italian earthquakes from 1960 to 1996 to compute the parameters of the time-magnitude distribution model proposed by Reasenberg and Jones (1989) and currently used to make aftershock forecasts in California. From 1981 to 1996 we used the recently released Catalogo Strumentale dei Terremoti Italiani (CSTI) (Instrumental Catalog Working Group, 2001), joining the data of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and of the major Italian local seismic networks, with magnitudes revalued according to Gasperini (2001). From 1960 to 1980 we used instead the Progetto Finalizzato Geodinamica (PFG) catalog (Postpischl, 1985), with magnitudes corrected to be homogeneous with the following period. About 40 sequences were detected using two different algorithms, and the results of the modeling for the corresponding sequences are compared. The average values of the distribution parameters (p = 0.93±0.21, log10(c) = -1.53±0.54, b = 0.96±0.18 and a = -1.66±0.72) are in fair agreement with similar computations performed in other regions of the world. We also analyzed the spatial variation of the model parameters, which can be used to predict sequence behavior in the first days of future Italian seismic crises, before a reliable model of the ongoing sequence is available. Moreover, some nomograms to expeditiously estimate probabilities and rates of aftershocks in Italy are also computed.
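
    A sketch of the Reasenberg and Jones (1989) rate model using the average Italian parameter values quoted above; the mainshock magnitude, magnitude threshold, and forecast window are hypothetical, and the spatial parameter variation and nomograms of the paper are not reproduced.

```python
# Sketch: Reasenberg-Jones aftershock rate lambda(t, M >= m) = 10**(a + b*(Mm - m)) * (t + c)**(-p)
# with the average parameters from the abstract; mainshock and window are made up.
import math

p, c, b, a = 0.93, 10 ** -1.53, 0.96, -1.66   # average Italian values from the abstract

def rate(t_days: float, mag_main: float, mag_min: float) -> float:
    """Aftershock rate (events/day) with M >= mag_min at time t after the mainshock."""
    return 10 ** (a + b * (mag_main - mag_min)) * (t_days + c) ** (-p)

def expected_count(t1: float, t2: float, mag_main: float, mag_min: float, n: int = 10000) -> float:
    # Midpoint-rule integration of the rate over the forecast window [t1, t2].
    dt = (t2 - t1) / n
    return sum(rate(t1 + (i + 0.5) * dt, mag_main, mag_min) for i in range(n)) * dt

mag_main, mag_min = 6.0, 4.0                  # hypothetical mainshock and threshold
mu = expected_count(0.0, 7.0, mag_main, mag_min)
prob = 1.0 - math.exp(-mu)                    # Poisson probability of >= 1 aftershock
print(f"expected M>={mag_min} aftershocks in 7 days: {mu:.2f}, P(>=1) = {prob:.2f}")
```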

  18. Model approach to estimate the probability of accepting a lot of heterogeneously contaminated powdered food using different sampling strategies.

    PubMed

    Valero, Antonio; Pasquali, Frédérique; De Cesare, Alessandra; Manfreda, Gerardo

    2014-08-01

    Current sampling plans assume a random distribution of microorganisms in food. However, food-borne pathogens are estimated to be heterogeneously distributed in powdered foods. This spatial distribution, together with very low levels of contamination, raises concern about the efficiency of current sampling plans for the detection of food-borne pathogens such as Cronobacter and Salmonella in powdered foods such as powdered infant formula or powdered eggs. An alternative approach based on a Poisson distribution of the contaminated part of the lot (Habraken approach) was used to evaluate the probability of falsely accepting a contaminated lot of powdered food when different sampling strategies were simulated, considering variables such as lot size, sample size, microbial concentration in the contaminated part of the lot and the proportion of the lot that is contaminated. The simulated results suggest that a sample size of 100 g or more requires the lowest number of samples to be tested, in comparison with sample sizes of 10 g or 1 g. Moreover, the number of samples to be tested decreases greatly if the microbial concentration is 1 CFU/g instead of 0.1 CFU/g, or if the proportion of contamination is 0.05 instead of 0.01. Mean contamination levels higher than 1 CFU/g or proportions higher than 0.05 did not affect the number of samples. The Habraken approach represents a useful tool for risk management in designing a fit-for-purpose sampling plan for the detection of low levels of food-borne pathogens in heterogeneously contaminated powdered food. However, it must be noted that, although effective in detecting pathogens, these sampling plans are difficult to apply because of the huge number of samples that need to be tested. Sampling does not seem to be an effective measure to control pathogens in powdered food. PMID:24462218
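
    A simplified acceptance-probability calculation in the spirit of the Habraken approach described above: cell numbers in the contaminated fraction of the lot are treated as Poisson, and the lot is accepted only if every sample tests negative. The exact formulation in the paper may differ, and the lot fractions, concentrations, and sample counts below are illustrative only.

```python
# Sketch: probability of (falsely) accepting a heterogeneously contaminated lot
# when acceptance requires all n samples to test negative. Simplified model:
# a sample is positive if it comes from the contaminated fraction AND contains
# at least one cell (Poisson with mean = concentration * sample mass).
import math

def prob_accept(n_samples: int, sample_g: float,
                conc_cfu_per_g: float, contaminated_fraction: float) -> float:
    p_positive = contaminated_fraction * (1.0 - math.exp(-conc_cfu_per_g * sample_g))
    return (1.0 - p_positive) ** n_samples

for sample_g in (1.0, 10.0, 100.0):
    for conc in (0.1, 1.0):
        pa = prob_accept(n_samples=30, sample_g=sample_g,
                         conc_cfu_per_g=conc, contaminated_fraction=0.01)
        print(f"sample = {sample_g:5.0f} g  conc = {conc:3.1f} CFU/g  P(accept) = {pa:.3f}")
```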

  19. Converged three-dimensional quantum mechanical reaction probabilities for the F + H2 reaction on a potential energy surface with realistic entrance and exit channels and comparisons to results for three other surfaces

    NASA Technical Reports Server (NTRS)

    Lynch, Gillian C.; Halvick, Philippe; Zhao, Meishan; Truhlar, Donald G.; Yu, Chin-Hui; Kouri, Donald J.; Schwenke, David W.

    1991-01-01

    Accurate three-dimensional quantum mechanical reaction probabilities are presented for the reaction F + H2 yields HF + H on the new global potential energy surface 5SEC for total angular momentum J = 0 over a range of translational energies from 0.15 to 4.6 kcal/mol. It is found that the v-prime = 3 HF vibrational product state has a threshold as low as for v-prime = 2.

  20. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach < probability obtained with the gradient stochastic approach ≤ probability predicted by Davis and Stoll < probability predicted by Martin et al. The differences are explained by the positive bias of the Martin equation and the lower average resolution observed for the isocratic simulations compared to the gradient simulations with the same peak capacity. When the stochastic results are applied to conventional HPLC and sequential elution liquid chromatography (SE-LC), the latter is shown to provide much greater probabilities of success for moderately complex samples (e.g., P(HPLC) = 31.2% versus P(SE-LC) = 69.1% for 12 components and the same analysis time). For a given number of components, the density of probability data provided over the range of peak capacities is sufficient to allow accurate interpolation of probabilities for peak capacities not reported, <1.5% error for saturation factors <0.20. Additional applications for the stochastic approach include isothermal and programmed-temperature gas chromatography. PMID:27286646
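
    A sketch of the constant-peak-width ("gradient") stochastic experiment in its simplest form: component retention times are drawn uniformly at random over the separation window, and a run counts as a success only if every adjacent pair is separated by at least one peak width (the spacing implied by the peak capacity at unit resolution). The component counts, peak capacities, and trial numbers are illustrative, and edge effects and the isocratic (growing peak width) case are ignored.

```python
# Sketch: Monte Carlo probability that m randomly placed components are all
# resolved, under constant peak width set by the peak capacity.
import numpy as np

rng = np.random.default_rng(0)

def prob_success(m_components: int, peak_capacity: int, n_trials: int = 20000) -> float:
    width = 1.0 / peak_capacity                 # spacing needed for R_s = 1
    times = rng.random((n_trials, m_components))
    times.sort(axis=1)
    gaps = np.diff(times, axis=1)
    success = (gaps >= width).all(axis=1)       # every adjacent pair resolved
    return success.mean()

print(f"P(success), 12 components, n_c = 100: {prob_success(12, 100):.3f}")
print(f"P(success), 12 components, n_c = 300: {prob_success(12, 300):.3f}")
```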

  1. Three-dimensional heart dose reconstruction to estimate normal tissue complication probability after breast irradiation using portal dosimetry

    SciTech Connect

    Louwe, R. J. W.; Wendling, M.; Herk, M. B. van; Mijnheer, B. J.

    2007-04-15

    Irradiation of the heart is one of the major concerns during radiotherapy of breast cancer. Three-dimensional (3D) treatment planning would therefore be useful but cannot always be performed for left-sided breast treatments, because CT data may not be available. However, even if 3D dose calculations are available and an estimate of the normal tissue damage can be made, uncertainties in patient positioning may significantly influence the heart dose during treatment. Therefore, 3D reconstruction of the actual heart dose during breast cancer treatment using electronic portal imaging device (EPID) dosimetry has been investigated. A previously described method to reconstruct the dose in the patient from treatment portal images at the radiological midsurface was used in combination with a simple geometrical model of the irradiated heart volume to enable calculation of dose-volume histograms (DVHs), to independently verify this aspect of the treatment without using 3D data from a planning CT scan. To investigate the accuracy of our method, the DVHs obtained with full 3D treatment planning system (TPS) calculations and those obtained after resampling the TPS dose in the radiological midsurface were compared for fifteen breast cancer patients for whom CT data were available. In addition, EPID dosimetry as well as 3D dose calculations using our TPS, film dosimetry, and ionization chamber measurements were performed in an anthropomorphic phantom. It was found that the dose reconstructed using EPID dosimetry and the dose calculated with the TPS agreed within 1.5% in the lung/heart region. The dose-volume histograms obtained with EPID dosimetry were used to estimate the normal tissue complication probability (NTCP) for late excess cardiac mortality. Although the accuracy of these NTCP calculations might be limited due to the uncertainty in the NTCP model, in combination with our portal dosimetry approach it allows incorporation of the actual heart dose. For the anthropomorphic

  2. Estimating present day extreme water level exceedance probabilities around the coastline of Australia: tides, extra-tropical storm surges and mean sea level

    NASA Astrophysics Data System (ADS)

    Haigh, Ivan D.; Wijeratne, E. M. S.; MacPherson, Leigh R.; Pattiaratchi, Charitha B.; Mason, Matthew S.; Crompton, Ryan P.; George, Steve

    2014-01-01

    The occurrence of extreme water levels along low-lying, highly populated and/or developed coastlines can lead to considerable loss of life and billions of dollars of damage to coastal infrastructure. Therefore it is vitally important that the exceedance probabilities of extreme water levels are accurately evaluated to inform risk-based flood management, engineering and future land-use planning. This ensures that the risk of catastrophic structural failure due to under-design, and of expensive waste due to over-design, is minimised. This paper estimates for the first time present-day extreme water level exceedance probabilities around the whole coastline of Australia. A high-resolution depth-averaged hydrodynamic model has been configured for the Australian continental shelf region and has been forced with tidal levels from a global tidal model and meteorological fields from a global reanalysis to generate a 61-year hindcast of water levels. Output from this model has been successfully validated against measurements from 30 tide gauge sites. At each numeric coastal grid point, extreme value distributions have been fitted to the derived time series of annual maxima and to the several largest water levels each year to estimate exceedance probabilities. This provides a reliable estimate of water level probabilities around southern Australia, a region mainly impacted by extra-tropical cyclones. However, as the meteorological forcing only weakly includes the effects of tropical cyclones, extreme water level probabilities are underestimated around the western, northern and north-eastern Australian coastline. In a companion paper we build on the work presented here to more accurately include tropical cyclone-induced surges in the estimation of extreme water levels. The multi-decadal hindcast generated here has been used primarily to estimate extreme water level exceedance probabilities but could be used more widely in the future for a variety of other research and practical
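
    A generic sketch of the annual-maxima step: fit a generalized extreme value (GEV) distribution to a synthetic 61-year series of annual maximum water levels and read off return levels for chosen annual exceedance probabilities. The hydrodynamic hindcast, the r-largest fitting, and the tide-gauge validation of the study are not reproduced.

```python
# Sketch: GEV fit to annual-maximum water levels and the resulting return levels.
# The 61 annual maxima are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_maxima = stats.gumbel_r.rvs(loc=1.2, scale=0.15, size=61, random_state=rng)  # m, synthetic

shape, loc, scale = stats.genextreme.fit(annual_maxima)

for return_period in (10, 50, 100, 500):
    aep = 1.0 / return_period                       # annual exceedance probability
    level = stats.genextreme.ppf(1.0 - aep, shape, loc=loc, scale=scale)
    print(f"{return_period:4d}-yr return level ({aep:.1%} AEP): {level:.2f} m")
```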

  3. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
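
    A Monte Carlo stand-in for the integration described above: sample stroke locations from the bivariate Gaussian implied by the location error ellipse and count the fraction that falls within the key radius of the facility. The covariance matrix, coordinates, and radius are hypothetical, and the operational method integrates the density directly rather than sampling.

```python
# Sketch: probability that a lightning stroke was within a given radius of a
# facility, estimated by sampling the bivariate Gaussian location uncertainty.
import numpy as np

rng = np.random.default_rng(7)

def prob_within_radius(stroke_xy, cov, point_xy, radius_km, n=200_000) -> float:
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)
    d = np.linalg.norm(samples - np.asarray(point_xy), axis=1)
    return float(np.mean(d <= radius_km))

stroke = (1.0, 0.5)                  # reported stroke location, km (hypothetical)
cov = np.array([[0.40, 0.10],        # location-error covariance, km^2 (hypothetical)
                [0.10, 0.25]])
facility = (0.0, 0.0)                # point of interest

p = prob_within_radius(stroke, cov, facility, radius_km=1.5)
print(f"P(stroke within 1.5 km of facility) = {p:.3f}")
```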

  4. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud to ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even with the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  5. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.

  6. Hate Crimes and Stigma-Related Experiences among Sexual Minority Adults in the United States: Prevalence Estimates from a National Probability Sample

    ERIC Educational Resources Information Center

    Herek, Gregory M.

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or…

  7. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies.

    PubMed

    Austin, Peter C; Stuart, Elizabeth A

    2015-12-10

    The propensity score is defined as a subject's probability of treatment selection, conditional on observed baseline covariates. Weighting subjects by the inverse probability of treatment received creates a synthetic sample in which treatment assignment is independent of measured baseline covariates. Inverse probability of treatment weighting (IPTW) using the propensity score allows one to obtain unbiased estimates of average treatment effects. However, these estimates are only valid if there are no residual systematic differences in observed baseline characteristics between treated and control subjects in the sample weighted by the estimated inverse probability of treatment. We report on a systematic literature review, in which we found that the use of IPTW has increased rapidly in recent years, but that in the most recent year, a majority of studies did not formally examine whether weighting balanced measured covariates between treatment groups. We then proceed to describe a suite of quantitative and qualitative methods that allow one to assess whether measured baseline covariates are balanced between treatment groups in the weighted sample. The quantitative methods use the weighted standardized difference to compare means, prevalences, higher-order moments, and interactions. The qualitative methods employ graphical methods to compare the distribution of continuous baseline covariates between treated and control subjects in the weighted sample. Finally, we illustrate the application of these methods in an empirical case study. We propose a formal set of balance diagnostics that contribute towards an evolving concept of 'best practice' when using IPTW to estimate causal treatment effects using observational data. PMID:26238958
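
    A minimal sketch of the workflow on synthetic data: estimate the propensity score with logistic regression, form inverse-probability-of-treatment weights, and check balance with the weighted standardized difference. This follows the general recipe summarized above, not the authors' full suite of diagnostics, and it assumes scikit-learn is available for the propensity model.

```python
# Sketch: IPTW weights from an estimated propensity score and the weighted
# standardized difference as a balance diagnostic, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=(n, 2))                                   # baseline covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.6 * x[:, 0] - 0.4 * x[:, 1]))))

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))          # ATE weights

def weighted_std_diff(cov, treat, w):
    m1 = np.average(cov[treat == 1], weights=w[treat == 1])
    m0 = np.average(cov[treat == 0], weights=w[treat == 0])
    v1 = np.average((cov[treat == 1] - m1) ** 2, weights=w[treat == 1])
    v0 = np.average((cov[treat == 0] - m0) ** 2, weights=w[treat == 0])
    return (m1 - m0) / np.sqrt((v1 + v0) / 2.0)

for j in range(x.shape[1]):
    d = weighted_std_diff(x[:, j], treat, w)
    print(f"covariate {j}: weighted standardized difference = {d:+.3f}")
```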

  8. Estimates of mean consequences and confidence bounds on the mean associated with low-probability seismic events in total system performance assessments

    SciTech Connect

    Pensado, Osvaldo; Mancillas, James

    2007-07-01

    An approach is described to estimate mean consequences and confidence bounds on the mean of seismic events with low probability of breaching components of the engineered barrier system. The approach is aimed at complementing total system performance assessment models used to understand consequences of scenarios leading to radionuclide releases in geologic nuclear waste repository systems. The objective is to develop an efficient approach to estimate mean consequences associated with seismic events of low probability, employing data from a performance assessment model with a modest number of Monte Carlo realizations. The derived equations and formulas were tested with results from a specific performance assessment model. The derived equations appear to be one method to estimate mean consequences without having to use a large number of realizations. (authors)

  9. A comparison of conventional capture versus PIT reader techniques for estimating survival and capture probabilities of big brown bats (Eptesicus fuscus)

    USGS Publications Warehouse

    Ellison, L.E.; O'Shea, T.J.; Neubaum, D.J.; Neubaum, M.A.; Pearce, R.D.; Bowen, R.A.

    2007-01-01

    We compared conventional capture (primarily mist nets and harp traps) and passive integrated transponder (PIT) tagging techniques for estimating capture and survival probabilities of big brown bats (Eptesicus fuscus) roosting in buildings in Fort Collins, Colorado. A total of 987 female adult and juvenile bats were captured and marked by subdermal injection of PIT tags during the summers of 2001-2005 at five maternity colonies in buildings. Openings to roosts were equipped with PIT hoop-style readers, and exit and entry of bats were passively monitored on a daily basis throughout the summers of 2002-2005. PIT readers 'recaptured' adult and juvenile females more often than conventional capture events at each roost. Estimates of annual capture probabilities for all five colonies were on average twice as high when estimated from PIT reader data (P = 0.93-1.00) as when derived from conventional techniques (P = 0.26-0.66), and as a consequence annual survival was estimated more precisely when using PIT reader encounters. Short-term, daily capture estimates were also higher using PIT readers than conventional captures. We discuss the advantages and limitations of using PIT tags and passive encounters with hoop readers vs. conventional capture techniques for estimating these vital parameters in big brown bats. © Museum and Institute of Zoology PAS.

  10. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Part 2, Human Error Probability (HEP) estimates: Data manual

    SciTech Connect

    Gertman, D.I.; Gilbert, B.G.; Gilmore, W.E.; Galyean, W.J.

    1988-06-01

    This volume of a five-volume series summarizes the data currently resident in the first releases of the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) data base. The raw human error probability (HEP) data contained herein are accompanied by a glossary of terms and by the HEP and hardware taxonomies used to structure the data. Instructions are presented on how the user may navigate through the NUCLARR data management system to find anchor values to assist in solving risk-related problems.

  11. Double-ended break probability estimate for the 304 stainless steel main circulation piping of a production reactor

    SciTech Connect

    Mehta, H.S.; Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.

    1991-12-31

    The large break frequency resulting from intergranular stress corrosion cracking in the main circulation piping of the Savannah River Site (SRS) production reactors has been estimated. Four factors are developed to describe the likelihood that a crack exists that is not identified by ultrasonic inspection, and that grows to instability prior to growing through-wall and being detected by the ensuing leakage. The estimated large break frequency is 3.4 x 10^-8 per reactor-year.

  12. Double-ended break probability estimate for the 304 stainless steel main circulation piping of a production reactor

    SciTech Connect

    Mehta, H.S.; Daugherty, W.L.; Awadalla, N.G.; Sindelar, R.L.

    1991-01-01

    The large break frequency resulting from intergranular stress corrosion cracking in the main circulation piping of the Savannah River Site (SRS) production reactors has been estimated. Four factors are developed to describe the likelihood that a crack exists that is not identified by ultrasonic inspection, and that grows to instability prior to growing through-wall and being detected by the ensuing leakage. The estimated large break frequency is 3.4 x 10^-8 per reactor-year.

  13. Estimation of the age-specific per-contact probability of Ebola virus transmission in Liberia using agent-based simulations

    NASA Astrophysics Data System (ADS)

    Siettos, Constantinos I.; Anastassopoulou, Cleo; Russo, Lucia; Grigoras, Christos; Mylonakis, Eleftherios

    2016-06-01

    Based on multiscale agent-based computations, we estimated the age-specific per-contact transmission probability of the Ebola virus disease (EVD) that swept through Liberia from May 2014 to March 2015. To approximate the epidemic dynamics we developed a detailed agent-based model with small-world interactions between individuals categorized by age. For the estimation of the structure of the evolving contact network, as well as the per-contact transmission probabilities by age group, we exploited the so-called Equation-Free framework. Model parameters were fitted to official case counts reported by the World Health Organization (WHO) as well as to recently published data on key epidemiological variables, such as the mean times to death and recovery and the case fatality rate.

  14. Workshop on models to estimate military system probability of effect (P_E) due to incident radiofrequency energy: Volume 3, Written inputs from reviewers and other participants

    SciTech Connect

    Not Available

    1988-01-01

    The workshop on Models to Estimate Military System P_E (probability of effect) due to Incident Radio Frequency (RF) Energy was convened by Dr. John M. MacCallum, OUSDA (R&AT/EST), to assess the current state of the art and to evaluate the adequacy of ongoing effects-assessment efforts to estimate P_E. Approximately fifty people from government, industry, and academia attended the meeting. Specifically, the workshop addressed the following: (1) the current status of operations research models for assessing probability of effect (P_E) for red and blue mission analyses; (2) the main overall approaches for evaluating P_E's; (3) sources of uncertainty and ways P_E's could be credibly derived from the existing database; and (4) the adequacy of the present framework of a national HPM assessment methodology for evaluating the credibility of P_E's for future systems. 9 figs.

  15. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  16. Efficiency of using correlation function for estimation of probability of substance detection on the base of THz spectral dynamics

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Peskov, Nikolay V.; Kirillov, Dmitry A.

    2012-10-01

    One of the problems arising in time-domain THz spectroscopy for security applications is developing criteria for assessing the probability of detection and identification of explosives and drugs. We analyze the efficiency of using the correlation function and another functional (more precisely, a spectral norm) for this purpose. These criteria are applied to the dynamics of spectral lines. To increase the reliability of the assessment, we subtract the average value of the THz signal over the analysis window, which removes the constant component from that part of the signal and thereby increases the contrast of the assessment. For extracting the spectral line dynamics, we compare the Fourier-Gabor transform with an unbounded (for example, Gaussian) window that slides along the signal against the Fourier transform in short time intervals (FTST), in which the Fourier transform is applied to successive parts of the signal. The two methods are similar but differ in the series of frequencies they use. It is important in practice that the optimal window shape depends on the method chosen for obtaining the spectral dynamics. The detection probability increases if we can find a train of pulses with different frequencies that follow sequentially. We show that it is possible to obtain clean spectral line dynamics even when the spectrum of the substance's response to the THz pulse is distorted.

  17. Estimation of the detection probability for Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) with a passive acoustic method.

    PubMed

    Akamatsu, T; Wang, D; Wang, K; Li, S; Dong, S; Zhao, X; Barlow, J; Stewart, B S; Richlen, M

    2008-06-01

    Yangtze finless porpoises were surveyed using simultaneous visual and acoustic methods from 6 November to 13 December 2006. Two research vessels towed stereo acoustic data loggers, which were used to store the intensity and sound source direction of the high-frequency sonar signals produced by finless porpoises at detection ranges up to 300 m on each side of the vessel. Simple stereo beam forming allowed the separation of distinct biosonar sound sources, which enabled us to count the number of vocalizing porpoises. Acoustically, 204 porpoises were detected from one vessel and 199 from the other vessel in the same section of the Yangtze River. Visually, 163 and 162 porpoises were detected from the two vessels within 300 m of the vessel track. The calculated detection probability using the acoustic method was approximately twice that of visual detection for each vessel. The difference in detection probabilities between the two methods was caused by the large number of single individuals that were missed by visual observers. However, the sizes of large groups were underestimated by the acoustic method. Acoustic and visual observations complemented each other in the accurate detection of porpoises. The use of simple, relatively inexpensive acoustic monitoring systems should enhance population surveys of free-ranging, echolocating odontocetes. PMID:18537391

  18. Realistic collective nuclear Hamiltonian

    SciTech Connect

    Dufour, M.; Zuker, A.P.

    1996-10-01

    The residual part of the realistic forces, obtained after extracting the monopole terms responsible for bulk properties, is strongly dominated by pairing and quadrupole interactions, with important στ·στ, octupole, and hexadecapole contributions. Their forms retain the simplicity of the traditional pairing plus multipole models, while eliminating their flaws through a normalization mechanism dictated by a universal A^(-1/3) scaling. Coupling strengths and effective charges are calculated and shown to agree with empirical values. Comparisons between different realistic interactions confirm the claim that they are very similar. © 1996 The American Physical Society.

  19. Estimated probabilities, volumes, and inundation-area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for the 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for

  20. Realistic and Schematic Visuals.

    ERIC Educational Resources Information Center

    Heuvelman, Ard

    1996-01-01

    A study examined three different visual formats (studio presenter only, realistic visuals, or schematic visuals) of an educational television program. Recognition and recall of the abstract subject matter were measured in 101 adult viewers directly after the program and 3 months later. The schematic version yielded better recall of the program,…

  1. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
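
    The core computation described here, integrating a bivariate Gaussian location-error density over a circle around an arbitrary point, can be sketched by Monte Carlo. The following minimal Python example is not from the report; the ellipse geometry, distances, and radius are hypothetical placeholders.

        import numpy as np

        def prob_within_radius(stroke_xy, smaj, smin, theta_deg, point_xy, radius,
                               n=200_000, seed=1):
            """Monte Carlo estimate of P(true stroke location within `radius` of
            `point_xy`), treating the reported location `stroke_xy` as the mean of
            a bivariate Gaussian whose 1-sigma error ellipse has semi-axes
            `smaj`/`smin` oriented `theta_deg` degrees from the x axis."""
            rng = np.random.default_rng(seed)
            t = np.radians(theta_deg)
            rot = np.array([[np.cos(t), -np.sin(t)],
                            [np.sin(t),  np.cos(t)]])
            cov = rot @ np.diag([smaj**2, smin**2]) @ rot.T
            pts = rng.multivariate_normal(stroke_xy, cov, size=n)
            dist = np.linalg.norm(pts - np.asarray(point_xy), axis=1)
            return float(np.mean(dist <= radius))

        # Hypothetical geometry (km): reported stroke 1.2 km east and 0.8 km north
        # of the point of interest, 0.5 x 0.3 km error ellipse rotated 40 degrees,
        # and a 1.0 km radius of concern.
        print(prob_within_radius((1.2, 0.8), 0.5, 0.3, 40.0, (0.0, 0.0), 1.0))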

  2. A Review of Mycotoxins in Food and Feed Products in Portugal and Estimation of Probable Daily Intakes.

    PubMed

    Abrunhosa, Luís; Morales, Héctor; Soares, Célia; Calado, Thalita; Vila-Chã, Ana Sofia; Pereira, Martinha; Venâncio, Armando

    2016-01-01

    Mycotoxins are toxic secondary metabolites produced by filamentous fungi that occur naturally in agricultural commodities worldwide. Aflatoxins, ochratoxin A, patulin, fumonisins, zearalenone, trichothecenes, and ergot alkaloids are presently the most important for food and feed safety. These compounds are produced by several species that belong to the Aspergillus, Penicillium, Fusarium, and Claviceps genera and can be carcinogenic, mutagenic, teratogenic, cytotoxic, neurotoxic, nephrotoxic, estrogenic, and immunosuppressant. Human and animal exposure to mycotoxins is generally assessed by taking into account data on the occurrence of mycotoxins in food and feed as well as data on the consumption patterns of the concerned population. This evaluation is crucial to support measures to reduce consumer exposure to mycotoxins. This work reviews the occurrence and levels of mycotoxins in Portuguese food and feed to provide a global overview of this issue in Portugal. With the information collected, the exposure of the Portuguese population to those mycotoxins is assessed, and the estimated dietary intakes are presented. PMID:24987806
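
    As a minimal illustration of how a probable daily intake is assembled from occurrence and consumption data, the sketch below uses purely hypothetical concentrations, consumption amounts, and body weight, not values from the review.

        # Probable daily intake (PDI) of one mycotoxin:
        #   PDI (ng/kg bw/day) = sum_i concentration_i * consumption_i / body weight
        # All values below are hypothetical placeholders, not figures from the review.
        foods = {
            # food group: (mean concentration, ng/kg food; daily consumption, kg/day)
            "maize products": (2500.0, 0.050),
            "wine":           (900.0,  0.100),
            "coffee":         (1200.0, 0.015),
        }
        body_weight_kg = 70.0
        pdi = sum(conc * cons for conc, cons in foods.values()) / body_weight_kg
        print(f"PDI = {pdi:.1f} ng/kg bw/day")   # compare against a tolerable daily intake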

  3. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-Down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has hung up. That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up". The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-ups case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e., not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  4. Common Cause Case Study: An Estimated Probability of Four Solid Rocket Booster Hold-down Post Stud Hang-ups

    NASA Technical Reports Server (NTRS)

    Cross, Robert

    2005-01-01

    Until Solid Rocket Motor ignition, the Space Shuttle is mated to the Mobile Launch Platform in part via eight (8) Solid Rocket Booster (SRB) hold-down bolts. The bolts are fractured using redundant pyrotechnics, and are designed to drop through a hold-down post on the Mobile Launch Platform before the Space Shuttle begins movement. The Space Shuttle program has experienced numerous failures where a bolt has "hung up." That is, it did not clear the hold-down post before liftoff and was caught by the SRBs. This places an additional structural load on the vehicle that was not included in the original certification requirements. The Space Shuttle is currently being certified to withstand the loads induced by up to three (3) of eight (8) SRB hold-down post studs experiencing a "hang-up." The results of loads analyses performed for four (4) stud hang-ups indicate that the internal vehicle loads exceed current structural certification limits at several locations. To determine the risk to the vehicle from four (4) stud hang-ups, the likelihood of the scenario occurring must first be evaluated. Prior to the analysis discussed in this paper, the likelihood of occurrence had been estimated assuming that the stud hang-ups were completely independent events. That is, it was assumed that no common causes or factors existed between the individual stud hang-up events. A review of the data associated with the hang-up events showed that a common factor (timing skew) was present. This paper summarizes a revised likelihood evaluation performed for the four (4) stud hang-ups case considering that there are common factors associated with the stud hang-ups. The results show that explicitly (i.e., not using standard common cause methodologies such as beta factor or Multiple Greek Letter modeling) taking into account the common factor of timing skew results in an increase in the estimated likelihood of four (4) stud hang-ups of an order of magnitude over the independent failure case.

  5. Nonparametric Estimation of the Probability of Detection of Flaws in an Industrial Component, from Destructive and Nondestructive Testing Data, Using Approximate Bayesian Computation.

    PubMed

    Keller, Merlin; Popelin, Anne-Laure; Bousquet, Nicolas; Remy, Emmanuel

    2015-09-01

    We consider the problem of estimating the probability of detection (POD) of flaws in an industrial steel component. Modeled as an increasing function of the flaw height, the POD characterizes the detection process; it is also involved in the estimation of the flaw size distribution, a key input parameter of physical models describing the behavior of the steel component when submitted to extreme thermodynamic loads. Such models are used to assess the resistance of highly reliable systems whose failures are seldom observed in practice. We develop a Bayesian method to estimate the flaw size distribution and the POD function, using flaw height measures from periodic in-service inspections conducted with an ultrasonic detection device, together with measures from destructive lab experiments. Our approach, based on approximate Bayesian computation (ABC) techniques, is applied to a real data set and compared to maximum likelihood estimation (MLE) and a more classical approach based on Markov Chain Monte Carlo (MCMC) techniques. In particular, we show that the parametric model describing the POD as the cumulative distribution function (cdf) of a log-normal distribution, though often used in this context, can be invalidated by the data at hand. We propose an alternative nonparametric model, which assumes no predefined shape, and extend the ABC framework to this setting. Experimental results demonstrate the ability of this method to provide a flexible estimation of the POD function and describe its uncertainty accurately. PMID:26414699
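
    The ABC idea in this record can be sketched with a toy rejection sampler. The example below uses a simple logistic POD curve and synthetic inspection data as stand-ins (the paper's nonparametric model and real data are not reproduced); the priors, summary statistic, and tolerance are assumptions of this sketch.

        import numpy as np

        rng = np.random.default_rng(0)

        def pod(h, a, b):
            # Illustrative parametric POD curve: logistic in flaw height h (mm).
            return 1.0 / (1.0 + np.exp(-(h - a) / b))

        # Synthetic "observed" inspection outcomes (hypothetical, not the paper's data).
        heights = rng.uniform(0.5, 10.0, size=200)
        observed = rng.random(heights.size) < pod(heights, a=4.0, b=1.2)

        # Summary statistic: detection fraction in five height bins.
        edges = np.linspace(0.5, 10.0, 6)
        bin_id = np.digitize(heights, edges[1:-1])
        def summary(det):
            return np.array([det[bin_id == k].mean() for k in range(5)])
        s_obs = summary(observed)

        # ABC rejection: draw (a, b) from flat priors, simulate inspections, and keep
        # draws whose summary statistic is within the tolerance.
        accepted = []
        for _ in range(20_000):
            a, b = rng.uniform(0.0, 10.0), rng.uniform(0.1, 5.0)
            sim = rng.random(heights.size) < pod(heights, a, b)
            if np.mean(np.abs(summary(sim) - s_obs)) < 0.10:
                accepted.append((a, b))

        accepted = np.array(accepted)
        print("accepted draws:", len(accepted))
        if len(accepted):
            print("posterior mean of (a, b):", accepted.mean(axis=0))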

  6. Occurrence probability of slopes on the lunar surface: Estimate by the shaded area percentage in the LROC NAC images

    NASA Astrophysics Data System (ADS)

    Abdrakhimov, A. M.; Basilevsky, A. T.; Ivanov, M. A.; Kokhanov, A. A.; Karachevtseva, I. P.; Head, J. W.

    2015-09-01

    The paper describes a method of estimating the distribution of slopes from the portion of shaded area measured in images acquired at different Sun elevations. The measurements were performed for the benefit of the Russian Luna-Glob mission. The western ellipse for the spacecraft landing in the crater Boguslawsky in the southern polar region of the Moon was investigated. The percentage of shaded area was measured in images acquired with the LROC NAC camera at a resolution of ~0.5 m. Because of the close vicinity of the pole, it is difficult to build digital terrain models (DTMs) for this region from the LROC NAC images, which motivated the method described here. For the landing ellipse investigated, 52 LROC NAC images obtained at Sun elevations from 4° to 19° were used. In these images the shaded portions of the area were measured, and these portions were converted to the occurrence of slopes (in this case, at a 3.5-m baseline) using a calibration based on the surface characteristics of the Lunokhod-1 study area, for which a digital terrain model of ~0.5-m resolution and 13 LROC NAC images obtained at different Sun elevations are available. From the measurements and the corresponding calibration, it was found that, in the studied landing ellipse, the occurrence of slopes gentler than 10° at a baseline of 3.5 m is 90%, while it is 9.6, 5.7, and 3.9% for slopes steeper than 10°, 15°, and 20°, respectively. This method can be recommended for application when no DTM of the required granularity exists for the regions of interest but high-resolution images taken at different Sun elevations are available.

  7. Estimating the probability distribution of the incubation period for rabies using data from the 1948-1954 rabies epidemic in Tokyo.

    PubMed

    Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K

    2016-01-01

    Data of 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days), and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution with mean value of 27.30 (95% CI: 23.46-31.55) days and standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs. PMID:26688561
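
    A minimal sketch of the fitting-and-comparison step, using scipy and synthetic incubation periods in place of the historical case data (all numbers below are placeholders):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # Hypothetical incubation periods (days); the 1948-1954 case data are not reproduced here.
        days = rng.lognormal(mean=np.log(24.0), sigma=0.65, size=98)

        candidates = {"lognormal": stats.lognorm,
                      "gamma": stats.gamma,
                      "weibull": stats.weibull_min}

        for name, dist in candidates.items():
            params = dist.fit(days, floc=0)              # fix the location at zero
            loglik = np.sum(dist.logpdf(days, *params))
            k = len(params) - 1                          # loc was not estimated
            aic = 2 * k - 2 * loglik
            print(f"{name:9s}  mean = {dist.mean(*params):6.2f} d   AIC = {aic:8.2f}")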

  8. On Modeling Human Leukocyte Antigen-Identical Sibling Match Probability for Allogeneic Hematopoietic Cell Transplantation: Estimating the Need for an Unrelated Donor Source.

    PubMed

    Besse, Kelsey; Maiers, Martin; Confer, Dennis; Albrecht, Mark

    2016-03-01

    Prior studies of allogeneic hematopoietic cell transplantation (HCT) therapy for the treatment of malignant or nonmalignant blood disorders assume a 30% likelihood that a patient will find a match among siblings and, therefore, a 70% likelihood of needing an unrelated donor source. This study utilizes birth data and statistical modeling to assess the adequacy of these estimates to describe the probability among US population cohorts segmented by race/ethnicity and age, including ages of greatest HCT utilization. Considerable variation in the likelihood of an HLA-identical sibling was found, ranging from 13% to 51%, depending upon patient age and race/ethnicity. Low sibling match probability, compounded with increased genetic diversity and lower availability among unrelated donors, put the youngest minority patients at the greatest risk for not finding a suitable related or unrelated HCT donor. Furthermore, the present 40-year decline in birth rates is expected to lead to 1.5-fold decrease in access to a matched sibling for today's young adults (18 to 44 years of age) when they reach peak HCT utilization years (near age 61 years) versus their contemporary adult counterparts (44 to 64 years). Understanding the sibling match probability by race/ethnicity and age cohort leads to forecasting the demand for unrelated HCT sources. PMID:26403513
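
    The underlying probability argument can be illustrated in a few lines: a full sibling has a 1/4 chance of being HLA-identical, so the chance of at least one match among n siblings is 1 - (3/4)^n, and a cohort-level figure follows by averaging over a sibship-size distribution. The distribution below is a hypothetical placeholder, not the birth data used in the study.

        # Each full sibling has a 1/4 chance of inheriting the same two parental
        # HLA haplotypes, so P(at least one match among n siblings) = 1 - (3/4)**n,
        # assuming independent inheritance and ignoring donor availability.
        def match_prob(n_siblings):
            return 1.0 - 0.75 ** n_siblings

        # Cohort-level estimate: average over an assumed sibship-size distribution.
        sibling_dist = {0: 0.25, 1: 0.40, 2: 0.25, 3: 0.10}
        p = sum(w * match_prob(n) for n, w in sibling_dist.items())
        print(f"expected HLA-identical sibling match probability: {p:.2f}")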

  9. Multi-scale occupancy approach to estimate Toxoplasma gondii prevalence and detection probability in tissues: an application and guide for field sampling.

    PubMed

    Elmore, Stacey A; Huyvaert, Kathryn P; Bailey, Larissa L; Iqbal, Asma; Su, Chunlei; Dixon, Brent R; Alisauskas, Ray T; Gajadhar, Alvin A; Jenkins, Emily J

    2016-08-01

    Increasingly, birds are recognised as important hosts for the ubiquitous parasite Toxoplasma gondii, although little experimental evidence exists to determine which tissues should be tested to maximise the detection probability of T. gondii. Also, Arctic-nesting geese are suspected to be important sources of T. gondii in terrestrial Arctic ecosystems, but the parasite has not previously been reported in the tissues of these geese. Using a domestic goose model, we applied a multi-scale occupancy framework to demonstrate that the probability of detection of T. gondii was highest in the brain (0.689, 95% confidence interval=0.486, 0.839) and the heart (0.809, 95% confidence interval=0.693, 0.888). Inoculated geese had an estimated T. gondii infection probability of 0.849, (95% confidence interval=0.643, 0.946), highlighting uncertainty in the system, even under experimental conditions. Guided by these results, we tested the brains and hearts of wild Ross's Geese (Chen rossii, n=50) and Lesser Snow Geese (Chen caerulescens, n=50) from Karrak Lake, Nunavut, Canada. We detected 51 suspected positive tissue samples from 33 wild geese using real-time PCR with melt-curve analysis. The wild goose prevalence estimates generated by our multi-scale occupancy analysis were higher than the naïve estimates of prevalence, indicating that multiple PCR repetitions on the same organs and testing more than one organ could improve T. gondii detection. Genetic characterisation revealed Type III T. gondii alleles in six wild geese and Sarcocystis spp. in 25 samples. Our study demonstrates that Arctic nesting geese are capable of harbouring T. gondii in their tissues and could transport the parasite from their southern overwintering grounds into the Arctic region. We demonstrate how a multi-scale occupancy framework can be used in a domestic animal model to guide resource-limited sample collection and tissue analysis in wildlife. Secondly, we confirm the value of traditional occupancy in

  10. Estimating reach-specific fish movement probabilities in rivers with a Bayesian state-space model: application to sea lamprey passage and capture at dams

    USGS Publications Warehouse

    Holbrook, Christopher M.; Johnson, Nicholas S.; Steibel, Juan P.; Twohey, Michael B.; Binder, Thomas R.; Krueger, Charles C.; Jones, Michael L.

    2014-01-01

    Improved methods are needed to evaluate barriers and traps for control and assessment of invasive sea lamprey (Petromyzon marinus) in the Great Lakes. A Bayesian state-space model provided reach-specific probabilities of movement, including trap capture and dam passage, for 148 acoustic tagged invasive sea lamprey in the lower Cheboygan River, Michigan, a tributary to Lake Huron. Reach-specific movement probabilities were combined to obtain estimates of spatial distribution and abundance needed to evaluate a barrier and trap complex for sea lamprey control and assessment. Of an estimated 21 828 – 29 300 adult sea lampreys in the river, 0%–2%, or 0–514 untagged lampreys, could have passed upstream of the dam, and 46%–61% were caught in the trap. Although no tagged lampreys passed above the dam (0/148), our sample size was not sufficient to consider the lock and dam a complete barrier to sea lamprey. Results also showed that existing traps are in good locations because 83%–96% of the population was vulnerable to existing traps. However, only 52%–69% of lampreys vulnerable to traps were caught, suggesting that traps can be improved. The approach used in this study was a novel use of Bayesian state-space models that may have broader applications, including evaluation of barriers for other invasive species (e.g., Asian carp (Hypophthalmichthys spp.)) and fish passage structures for other diadromous fishes.

  11. A technique for estimating the probability of radiation-stimulated failures of integrated microcircuits in low-intensity radiation fields: Application to the Spektr-R spacecraft

    NASA Astrophysics Data System (ADS)

    Popov, V. D.; Khamidullina, N. M.

    2006-10-01

    In developing the radio-electronic devices (RED) of spacecraft operating in the ionizing-radiation fields of space, one of the most important problems is the correct estimation of their radiation tolerance. The “weakest link” in the element base of onboard microelectronic devices under radiation exposure is the integrated microcircuit (IMC), especially those of large-scale (LSI) and very-large-scale (VLSI) integration. The main characteristic of an IMC that is taken into account when deciding whether to use a particular type of IMC in the onboard RED is the probability of non-failure operation (NFO) at the end of the spacecraft’s lifetime. It should be noted that, until now, the NFO has been calculated only from reliability characteristics, disregarding the radiation effect. This paper presents a so-called “reliability” approach to determining the radiation tolerance of IMCs, which allows one to estimate the probability of non-failure operation of various types of IMC with due account of radiation-stimulated dose failures. The described technique is applied to the RED onboard the Spektr-R spacecraft to be launched in 2007.
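
    A minimal sketch of combining a conventional reliability term with a total-dose term into a single NFO probability, assuming independent failure modes, a constant failure rate, and a lognormal dose-to-failure threshold. These model forms and all numbers are assumptions of this illustration, not the paper's technique.

        import math

        def nfo_probability(lambda_per_hour, mission_hours,
                            median_fail_dose_krad, sigma_ln, accumulated_dose_krad):
            """Probability of non-failure operation combining (1) an exponential
            reliability model and (2) a lognormal total-dose failure threshold."""
            p_reliability = math.exp(-lambda_per_hour * mission_hours)
            # P(dose-to-failure threshold > accumulated mission dose)
            z = (math.log(accumulated_dose_krad) - math.log(median_fail_dose_krad)) / sigma_ln
            p_radiation = 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
            return p_reliability * p_radiation

        # Hypothetical 5-year mission with a 10 krad accumulated dose.
        print(round(nfo_probability(1e-7, 5 * 8760,
                                    median_fail_dose_krad=30.0, sigma_ln=0.5,
                                    accumulated_dose_krad=10.0), 4))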

  12. Workshop on models to estimate military system probability of effect (P_E) due to incident radiofrequency energy: Volume I, Findings and recommendations

    SciTech Connect

    Cabayan, H.S.

    1988-01-01

    The workshop on Models to Estimate Military System P_E (probability of effect) due to Incident Radio Frequency (RF) Energy was convened by Dr. John M. MacCallum, OUSDA (R and AT/EST), to assess the current state of the art and to evaluate the adequacy of ongoing effects assessment efforts to estimate P_E. Approximately fifty people from government, industry, and academia attended the meeting. Specifically, the workshop addressed the following: current status of operations research models for assessing probability of effect (P_E) for red and blue mission analyses; the main overall approaches for evaluating P_E's; sources of uncertainty and ways P_E's could be credibly derived from the existing data base; and the adequacy of the present framework of a national HPM assessment methodology for evaluation of P_E's credibility for future systems. Military operations research (MOR) analyses need to support current and future high power RF device development and operational employment evaluations. USDA (R and AT/EST) sponsored this workshop in an effort to assess MOR's current capability and its maturity. Participants included service, OSD, national laboratory, contractor, and academic experts and practitioners in this emerging technology area. Following is a summary of major findings and recommendations.

  13. Using probability-based spatial estimation of the river pollution index to assess urban water recreational quality in the Tamsui River watershed.

    PubMed

    Jang, Cheng-Shin

    2016-01-01

    The Tamsui River watershed situated in Northern Taiwan provides a variety of water recreational opportunities such as riverbank park activities, fishing, cruising, rowing, sailing, and swimming. However, river water quality strongly affects water recreational quality. Moreover, the health of recreationists who are partially or fully exposed to polluted river water may be jeopardized. A river pollution index (RPI) composed of dissolved oxygen, biochemical oxygen demand, suspended solids, and ammonia nitrogen is typically used to gauge the river water quality and regulate the water body use in Taiwan. The purpose of this study was to probabilistically determine the RPI categories in the Tamsui River watershed and to assess the urban water recreational quality on the basis of the estimated RPI categories. First, according to various RPI categories, one-dimensional indicator kriging (IK) was adopted to estimate the occurrence probabilities of the RPI categories. The maximum occurrence probability among the categories was then employed to determine the most suitable RPI category. Finally, the most serious categories and seasonal variations of RPI were adopted to evaluate the quality of current water recreational opportunities in the Tamsui River watershed. The results revealed that the midstream and downstream sections of the Tamsui River and its tributaries with poor river water quality afford low water recreational quality, and water recreationists should avoid full or limited exposure to these bodies of water. However, the upstream sections of the Tamsui River watershed with high river water quality are suitable for all water recreational activities. PMID:26676412

  14. Estimating the Transitional Probabilities of Smoking Stages with Cross-sectional Data and 10-Year Projection for Smoking Behavior in Iranian Adolescents

    PubMed Central

    Khosravi, Ahmad; Mansournia, Mohammad Ali; Mahmoodi, Mahmood; Pouyan, Ali Akbar; Holakouie-Naieni, Kourosh

    2016-01-01

    Background: Cigarette smoking is one of the most important health-related risk factors in terms of morbidity and mortality. In this study, we introduced a new method for deriving the transitional probabilities of smoking stages from a cross-sectional study and simulated long-term smoking behavior for adolescents. Methods: In this study in 2010, a total of 4853 high school students were randomly selected and completed a self-administered questionnaire about cigarette smoking. We used smoothed age- and sex-specific prevalence of smoking stages in a probabilistic discrete event system for estimating the transitional probabilities. A nonhomogeneous discrete time Markov chain analysis was used to model the progression of smoking over the following 10 years in the same population. The mean age of the students was 15.69 ± 0.73 years (range: 14–19). Results: The smoothed prevalence proportion of current smoking varies between 3.58 and 26.14%. The age-adjusted odds of initiation in boys is 8.9 (95% confidence interval [CI]: 7.9–10.0) times the odds of initiation of smoking in girls. Our study predicted that the prevalence proportion of current smokers would increase from 7.55% in 2010 to 20.31% (95% CI: 19.44–21.37) by 2019. Conclusions: The present study showed a moderate but concerning prevalence of current smoking in Iranian adolescents and introduced a novel method for estimating transitional probabilities from a cross-sectional study. The increasing trend of cigarette use among adolescents indicates the necessity of paying more attention to this group. PMID:27625766
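
    The projection step, propagating a stage distribution through year-specific transition matrices, can be sketched as follows; the states, matrices, and baseline distribution are illustrative placeholders, not the study's estimates.

        import numpy as np

        states = ["never", "experimenter", "current", "quit"]   # illustrative stages

        def transition_matrix(year):
            """Hypothetical year-dependent (nonhomogeneous) transition probabilities;
            each row sums to 1. Placeholders, not the study's estimates."""
            p_init = 0.05 + 0.002 * year            # initiation drifts slowly upward
            return np.array([
                [1 - p_init, p_init, 0.00, 0.00],
                [0.00,       0.70,   0.25, 0.05],
                [0.00,       0.00,   0.90, 0.10],
                [0.00,       0.00,   0.20, 0.80],
            ])

        dist = np.array([0.80, 0.12, 0.08, 0.00])   # assumed baseline stage distribution
        for year in range(10):                      # project 10 years ahead
            dist = dist @ transition_matrix(year)
        print({s: round(p, 3) for s, p in zip(states, dist)})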

  15. A computer procedure to analyze seismic data to estimate outcome probabilities in oil exploration, with an initial application in the Tabasco region of southeastern Mexico

    NASA Astrophysics Data System (ADS)

    Berlanga, Juan M.; Harbaugh, John W.

    the basis of frequency distributions of trend-surface residuals obtained by fitting and subtracting polynomial trend surfaces from the machine-contoured reflection time maps. We found that there is a strong preferential relationship between the occurrence of petroleum (i.e. its presence versus absence) and particular ranges of trend-surface residual values. An estimate of the probability of oil occurring at any particular geographic point can be calculated on the basis of the estimated trend-surface residual value. This estimate, however, must be tempered by the probable error in the estimate of the residual value provided by the error function. The result, we believe, is a simple but effective procedure for estimating exploration outcome probabilities where seismic data provide the principal form of information in advance of drilling. Implicit in this approach is the comparison between a maturely explored area, for which both seismic and production data are available, and which serves as a statistical "training area", with the "target" area which is undergoing exploration and for which probability forecasts are to be calculated.
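
    A minimal sketch of the trend-surface step, fitting a low-order polynomial surface to mapped reflection times by least squares and working with the residuals, using synthetic data. In the procedure described above, the link from residual ranges to outcome probabilities would then come from frequencies observed in the maturely explored training area.

        import numpy as np

        def trend_surface_residuals(x, y, z, degree=2):
            """Fit a polynomial trend surface z ~ f(x, y) by least squares and
            return the residuals (observed minus regional trend)."""
            terms = [x**i * y**j for i in range(degree + 1)
                                 for j in range(degree + 1 - i)]
            design = np.column_stack(terms)
            coef, *_ = np.linalg.lstsq(design, z, rcond=None)
            return z - design @ coef

        # Synthetic reflection-time map: a smooth regional trend plus local anomalies.
        rng = np.random.default_rng(3)
        x, y = rng.uniform(0, 10, 500), rng.uniform(0, 10, 500)
        z = 2.0 + 0.30 * x - 0.10 * y + 0.02 * x * y + rng.normal(0.0, 0.05, 500)
        residuals = trend_surface_residuals(x, y, z, degree=2)
        print("residual standard deviation:", round(residuals.std(), 4))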

  16. A realistic lattice example

    SciTech Connect

    Courant, E.D.; Garren, A.A.

    1985-10-01

    A realistic, distributed interaction region (IR) lattice has been designed that includes new components discussed in the June 1985 lattice workshop. Unlike the test lattices, the lattice presented here includes utility straights and the mechanism for crossing the beams in the experimental straights. Moreover, both the phase trombones and the dispersion suppressors contain the same bending as the normal cells. Vertically separated beams and 6 Tesla, 1-in-1 magnets are assumed. Since the cells are 200 meters long, and have 60 degree phase advance, this lattice has been named RLD1, in analogy with the corresponding test lattice, TLD1. The quadrupole gradient is 136 tesla/meter in the cells, and has similar values in other quadrupoles except in those in the IRs, where the maximum gradient is 245 tesla/meter. RLD1 has distributed IRs; however, clustered realistic lattices can easily be assembled from the same components, as was recently done in a version that utilizes the same type of experimental and utility straights as those of RLD1.

  17. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
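
    The drainage-area adjustment mentioned at the end of the abstract is commonly written as Q_u = Q_g (A_u / A_g)^b. The sketch below is a generic illustration with a placeholder exponent and hypothetical discharges and areas, not the report's regression coefficients.

        def adjust_discharge(q_gage_cfs, area_gage_mi2, area_ungaged_mi2, exponent=0.7):
            """Drainage-area adjustment of an AEP discharge from a streamgage to an
            ungaged site on the same stream: Q_u = Q_g * (A_u / A_g) ** b. The
            exponent b = 0.7 is a placeholder; in practice it comes from the
            regional regression for the AEP of interest."""
            return q_gage_cfs * (area_ungaged_mi2 / area_gage_mi2) ** exponent

        # Hypothetical example: a 1-percent AEP discharge of 5,300 ft3/s at a gage
        # draining 120 mi2, transferred to an ungaged site draining 95 mi2.
        print(round(adjust_discharge(5300.0, 120.0, 95.0)))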

  18. Estimating the probability of occurrence of earthquakes (M>6) in the Western part of the Corinth rift using fault-based and classical seismotectonic approaches.

    NASA Astrophysics Data System (ADS)

    Boiselet, Aurelien; Scotti, Oona; Lyon-Caen, Hélène

    2014-05-01

    -SISCOR Working Group. On the basis of this consensual logic tree, median probability of occurrences of M>=6 events were computed for the region of study. Time-dependent models (Brownian Passage time and Weibull probability distributions) were also explored. The probability of a M>=6.0 event is found to be greater in the western region compared to the eastern part of the Corinth rift, whether a fault-based or a classical seismotectonic approach is used. Percentile probability estimates are also provided to represent the range of uncertainties in the results. The percentile results show that, in general, probability estimates following the classical approach (based on the definition of seismotectonic source zones), cover the median values estimated following the fault-based approach. On the contrary, the fault-based approach in this region is still affected by a high degree of uncertainty, because of the poor constraints on the 3D geometries of the faults and the high uncertainties in their slip rates.
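
    For the time-dependent models mentioned here, the quantity of interest is the conditional probability of an event in a forecast window given the elapsed time since the last event. The sketch below illustrates this for Brownian passage time (inverse Gaussian) and Weibull renewal models with hypothetical recurrence parameters, not values from the study.

        from scipy import stats

        def conditional_prob(dist, elapsed, window):
            """P(event within the next `window` years | quiescence of `elapsed` years)
            for a renewal model with inter-event time distribution `dist`."""
            return (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)

        # Hypothetical renewal parameters for an M>=6 source: mean recurrence 300 yr,
        # aperiodicity 0.5, 250 yr elapsed, 30-yr forecast window.
        mean, alpha = 300.0, 0.5
        bpt = stats.invgauss(mu=alpha**2, scale=mean / alpha**2)   # Brownian passage time
        weibull = stats.weibull_min(c=2.0, scale=mean / 0.8862)    # shape 2 -> same mean

        for name, d in (("BPT", bpt), ("Weibull", weibull)):
            print(name, round(conditional_prob(d, elapsed=250.0, window=30.0), 3))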

  19. Estimated probability density functions for the times between flashes in the storms of 12 September 1975, 26 August 1975, and 13 July 1976

    NASA Technical Reports Server (NTRS)

    Tretter, S. A.

    1977-01-01

    A report is given to supplement the progress report of June 17, 1977. In that progress report gamma, lognormal, and Rayleigh probability density functions were fitted to the times between lightning flashes in the storms of 9/12/75, 8/26/75, and 7/13/76 by the maximum likelihood method. The goodness of fit is checked by the Kolmogorov-Smirnov test. Plots of the estimated densities along with normalized histograms are included to provide a visual check on the goodness of fit. The lognormal densities are the most peaked and have the highest tails. This results in the best fit to the normalized histogram in most cases. The Rayleigh densities have too broad and rounded peaks to give good fits. In addition, they have the lowest tails. The gamma densities fall in between and give the best fit in a few cases.
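
    The fit-and-check procedure can be sketched briefly with scipy, using synthetic inter-flash times in place of the storm data (all values are placeholders); the Kolmogorov-Smirnov statistic is computed for each fitted family.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        # Hypothetical inter-flash times (seconds); the storm data are not reproduced here.
        times = rng.lognormal(mean=np.log(12.0), sigma=0.9, size=300)

        for name, dist in [("gamma", stats.gamma),
                           ("lognormal", stats.lognorm),
                           ("rayleigh", stats.rayleigh)]:
            params = dist.fit(times, floc=0)
            ks = stats.kstest(times, dist.cdf, args=params)
            # Note: p-values are optimistic because the parameters were fitted
            # from the same sample (the Lilliefors effect).
            print(f"{name:9s}  KS D = {ks.statistic:.3f}   p = {ks.pvalue:.3f}")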

  20. Hate crimes and stigma-related experiences among sexual minority adults in the United States: prevalence estimates from a national probability sample.

    PubMed

    Herek, Gregory M

    2009-01-01

    Using survey responses collected via the Internet from a U.S. national probability sample of gay, lesbian, and bisexual adults (N = 662), this article reports prevalence estimates of criminal victimization and related experiences based on the target's sexual orientation. Approximately 20% of respondents reported having experienced a person or property crime based on their sexual orientation; about half had experienced verbal harassment, and more than 1 in 10 reported having experienced employment or housing discrimination. Gay men were significantly more likely than lesbians or bisexuals to experience violence and property crimes. Employment and housing discrimination were significantly more likely among gay men and lesbians than among bisexual men and women. Implications for future research and policy are discussed. PMID:18391058

  1. Why Probability?

    ERIC Educational Resources Information Center

    Weatherly, Myra S.

    1984-01-01

    Instruction in mathematical probability to enhance higher levels of critical and creative thinking with gifted students is described. Among thinking skills developed by such an approach are analysis, synthesis, evaluation, fluency, and complexity. (CL)

  2. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to be highly biased (+90%-110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
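
    Inverting the power-law Z-R relationship to obtain rain rate from reflectivity is a one-liner; the sketch below uses the default coefficients quoted above and shows one possible way a reflectivity threshold could be applied (the paper's exact thresholding procedure may differ).

        import numpy as np

        def rain_rate_from_dbz(dbz, a=300.0, b=1.4, threshold_dbz=None):
            """Invert the power-law Z-R relationship Z = a * R**b to get rain rate R
            (mm/h) from reflectivity in dBZ; optionally zero out echoes below a
            reflectivity threshold."""
            dbz = np.asarray(dbz, dtype=float)
            z = 10.0 ** (dbz / 10.0)                 # dBZ -> Z in mm**6 m**-3
            r = (z / a) ** (1.0 / b)
            if threshold_dbz is not None:
                r = np.where(dbz >= threshold_dbz, r, 0.0)
            return r

        print(rain_rate_from_dbz([20, 35, 45, 55]))                      # default Z-R
        print(rain_rate_from_dbz([20, 35, 45, 55], threshold_dbz=36.5))  # with threshold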

  3. Comparison of an automated Most Probable Number (MPN) technique to traditional plating methods for estimating populations of total aerobes, coliforms and E. coli associated with freshly processed broiler chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Traditional microbiological techniques for estimating populations of viable bacteria can be laborious and time consuming. The Most Probable Number (MPN) technique is especially tedious as multiple series of tubes must be inoculated at several different dilutions. Recently, an instrument (TEMPO™) ...

  4. Estimated Probability of Post-Wildfire Debris-Flow Occurrence and Estimated Volume of Debris Flows from a Pre-Fire Analysis in the Three Lakes Watershed, Grand County, Colorado

    USGS Publications Warehouse

    Stevens, Michael R.; Bossong, Clifford R.; Litke, David W.; Viger, Roland J.; Rupert, Michael G.; Char, Stephen J.

    2008-01-01

    Debris flows pose substantial threats to life, property, infrastructure, and water resources. Post-wildfire debris flows may be of catastrophic proportions compared to debris flows occurring in unburned areas. During 2006, the U.S. Geological Survey (USGS), in cooperation with the Northern Colorado Water Conservancy District, initiated a pre-wildfire study to determine the potential for post-wildfire debris flows in the Three Lakes watershed, Grand County, Colorado. The objective was to estimate the probability of post-wildfire debris flows and to estimate the approximate volumes of debris flows from 109 subbasins in the Three Lakes watershed in order to provide the Northern Colorado Water Conservancy District with a relative measure of which subbasins might constitute the most serious debris flow hazards. This report describes the results of the study and provides estimated probabilities of debris-flow occurrence and the estimated volumes of debris flow that could be produced in 109 subbasins of the watershed under an assumed moderate- to high-burn severity of all forested areas. The estimates are needed because the Three Lakes watershed includes communities and substantial water-resources and water-supply infrastructure that are important to residents both east and west of the Continental Divide. Using information provided in this report, land and water-supply managers can consider where to concentrate pre-wildfire planning, pre-wildfire preparedness, and pre-wildfire mitigation in advance of wildfires. Also, in the event of a large wildfire, this information will help managers identify the watersheds with the greatest post-wildfire debris-flow hazards.

  5. A Unique Equation to Estimate Flash Points of Selected Pure Liquids: Application to the Correction of Probably Erroneous Flash Point Values

    NASA Astrophysics Data System (ADS)

    Catoire, Laurent; Naudet, Valérie

    2004-12-01

    A simple empirical equation is presented for the estimation of closed-cup flash points for pure organic liquids. Data needed for the estimation of a flash point (FP) are the normal boiling point (Teb), the standard enthalpy of vaporization at 298.15 K [ΔvapH°(298.15 K)] of the compound, and the number of carbon atoms (n) in the molecule. The bounds for this equation are: -100⩽FP(°C)⩽+200; 250⩽Teb(K)⩽650; 20⩽ΔvapH°(298.15 K)/(kJ mol^-1)⩽110; 1⩽n⩽21. Compared to other methods (empirical equations, structural group contribution methods, and neural network quantitative structure-property relationships), this simple equation is shown to predict accurately the flash points for a variety of compounds, whatever their chemical groups (monofunctional compounds and polyfunctional compounds) and whatever their structure (linear, branched, cyclic). The same equation is shown to be valid for hydrocarbons, organic nitrogen compounds, organic oxygen compounds, organic sulfur compounds, organic halogen compounds, and organic silicone compounds. It seems that the flash points of organic deuterium compounds, organic tin compounds, organic nickel compounds, organic phosphorus compounds, organic boron compounds, and organic germanium compounds can also be predicted accurately by this equation. A mean absolute deviation of about 3 °C, a standard deviation of about 2 °C, and a maximum absolute deviation of 10 °C are obtained when predictions are compared to experimental data for more than 600 compounds. For all these compounds, the absolute deviation is equal to or lower than the reproducibility expected at a 95% confidence level for closed-cup flash point measurement. This estimation technique has its limitations concerning the polyhalogenated compounds, for which the equation should be used with caution. The mean absolute deviation and maximum absolute deviation observed and the fact that the equation provides unbiased predictions lead to the conclusion that

  6. Application of Quantum Probability

    NASA Astrophysics Data System (ADS)

    Bohdalová, Mária; Kalina, Martin; Nánásiová, Ol'ga

    2009-03-01

    This is the first attempt to smooth time series using estimators that apply quantum probability with causality (non-commutative s-maps on an orthomodular lattice). In this context, it means that we use a non-symmetric covariance matrix in the construction of our estimator.

  7. A Posteriori Transit Probabilities

    NASA Astrophysics Data System (ADS)

    Stevens, Daniel J.; Gaudi, B. Scott

    2013-08-01

    Given the radial velocity (RV) detection of an unseen companion, it is often of interest to estimate the probability that the companion also transits the primary star. Typically, one assumes a uniform distribution for the cosine of the inclination angle i of the companion's orbit. This yields the familiar estimate for the prior transit probability of ~R*/a, given the primary radius R* and orbital semimajor axis a, and assuming small companions and a circular orbit. However, the posterior transit probability depends not only on the prior probability distribution of i but also on the prior probability distribution of the companion mass M_c, given a measurement of the product of the two (the minimum mass M_c sin i) from an RV signal. In general, the posterior can be larger or smaller than the prior transit probability. We derive analytic expressions for the posterior transit probability assuming a power-law form for the distribution of true masses, dΓ/dM_c ∝ M_c^α, for integer values -3 <= α <= 3. We show that for low transit probabilities, these probabilities reduce to a constant multiplicative factor f_α of the corresponding prior transit probability, where f_α in general depends on α and an assumed upper limit on the true mass. The prior and posterior probabilities are equal for α = -1. The posterior transit probability is ~1.5 times larger than the prior for α = -3 and is ~4/π times larger for α = -2, but is less than the prior for α>=0, and can be arbitrarily small for α > 1. We also calculate the posterior transit probability in different mass regimes for two physically-motivated mass distributions of companions around Sun-like stars. We find that for Jupiter-mass planets, the posterior transit probability is roughly equal to the prior probability, whereas the posterior is likely higher for Super-Earths and Neptunes (10 M⊕ - 30 M⊕) and Super-Jupiters (3 M_Jup - 10 M_Jup), owing to the predicted steep rise in the mass function toward smaller
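
    The posterior described here can be sketched numerically by importance-weighting a uniform cos(i) prior with a truncated power-law mass prior evaluated at the implied true mass M_c = (M_c sin i)/sin i. The Monte Carlo example below is illustrative only; R*/a, the minimum mass, and the mass cutoff are arbitrary placeholders.

        import numpy as np

        def posterior_transit_prob(r_over_a, m_sini, alpha, m_max, n=400_000, seed=2):
            """Posterior transit probability for an RV companion with measured
            minimum mass `m_sini`, a geometric prior cos(i) ~ Uniform(0, 1), a
            power-law true-mass prior dGamma/dM proportional to M**alpha truncated
            at `m_max`, and transit condition cos(i) < R*/a (small companion,
            circular orbit)."""
            rng = np.random.default_rng(seed)
            cosi = rng.uniform(0.0, 1.0, n)
            sini = np.sqrt(1.0 - cosi**2)
            m_true = m_sini / sini
            # Importance weight: mass-prior density at the implied true mass times
            # the Jacobian 1/sin(i); masses above the cutoff get zero weight.
            w = np.where(m_true <= m_max, m_true**alpha / sini, 0.0)
            return float(np.sum(w * (cosi < r_over_a)) / np.sum(w))

        r_over_a = 0.005                 # prior transit probability of 0.5% (assumed)
        for alpha in (-2, -1, 0):
            p = posterior_transit_prob(r_over_a, m_sini=1.0, alpha=alpha, m_max=100.0)
            print(f"alpha = {alpha:+d}: posterior = {p:.4f} (prior = {r_over_a})")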

  8. Comparison of an automated Most Probable Number (MPN) technique to traditional plating methods for estimating populations of total aerobes, coliforms and E. coli associated with freshly processed broiler chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recently, an instrument (TEMPO™) has been developed to automate the Most Probable Number (MPN) technique and reduce the effort required to estimate some bacterial populations. We compared the automated MPN technique to traditional microbiological plating methods or Petrifilm™ for estimating the t...

  9. Co-activation Probability Estimation (CoPE): An approach for modeling functional co-activation architecture based on neuroimaging coordinates.

    PubMed

    Chu, Congying; Fan, Lingzhong; Eickhoff, Claudia R; Liu, Yong; Yang, Yong; Eickhoff, Simon B; Jiang, Tianzi

    2015-08-15

    Recent progress in functional neuroimaging has prompted studies of brain activation during various cognitive tasks. Coordinate-based meta-analysis has been utilized to discover the brain regions that are consistently activated across experiments. However, within-experiment co-activation relationships, which can reflect the underlying functional relationships between different brain regions, have not been widely studied. In particular, voxel-wise co-activation, which may be able to provide a detailed configuration of the co-activation network, still needs to be modeled. To estimate the voxel-wise co-activation pattern and deduce the co-activation network, a Co-activation Probability Estimation (CoPE) method was proposed to model within-experiment activations for the purpose of defining the co-activations. A permutation test was adopted as a significance test. Moreover, the co-activations were automatically separated into local and long-range ones, based on distance. The two types of co-activations describe distinct features: the first reflects convergent activations; the second represents co-activations between different brain regions. The validation of CoPE was based on five simulation tests and one real dataset derived from studies of working memory. Both the simulated and the real data demonstrated that CoPE was not only able to find local convergence but also significant long-range co-activation. In particular, CoPE was able to identify a 'core' co-activation network in the working memory dataset. As a data-driven method, the CoPE method can be used to mine underlying co-activation relationships across experiments in future studies. PMID:26037052

  10. GIS-based estimation of the winter storm damage probability in forests: a case study from Baden-Wuerttemberg (Southwest Germany).

    PubMed

    Schindler, Dirk; Grebhan, Karin; Albrecht, Axel; Schönborn, Jochen; Kohnle, Ulrich

    2012-01-01

    Data on storm damage attributed to the two high-impact winter storms 'Wiebke' (28 February 1990) and 'Lothar' (26 December 1999) were used for GIS-based estimation and mapping (in a 50 × 50 m resolution grid) of the winter storm damage probability (P(DAM)) for the forests of the German federal state of Baden-Wuerttemberg (Southwest Germany). The P(DAM)-calculation was based on weights of evidence (WofE) methodology. A combination of information on forest type, geology, soil type, soil moisture regime, and topographic exposure, as well as maximum gust wind speed field was used to compute P(DAM) across the entire study area. Given the condition that maximum gust wind speed during the two storm events exceeded 35 m s(-1), the highest P(DAM) values computed were primarily where coniferous forest grows in severely exposed areas on temporarily moist soils on bunter sandstone formations. Such areas are found mainly in the mountainous ranges of the northern Black Forest, the eastern Forest of Odes, in the Virngrund area, and in the southwestern Alpine Foothills. PMID:21207068
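
    The weights-of-evidence step can be sketched for binary predictor layers: for each layer, W+ = ln[P(evidence|damage)/P(evidence|no damage)], W- is the analogue for absent evidence, and the cell-wise posterior logit is the prior logit plus the appropriate weight from each layer, assuming conditional independence. The grid, layers, and rates below are synthetic placeholders, not the Baden-Wuerttemberg data.

        import numpy as np

        def weights_of_evidence(evidence, damaged):
            """W+ and W- for one binary predictor layer (log-ratios of the
            conditional probabilities of the evidence given damage / no damage)."""
            e, d = np.asarray(evidence, bool), np.asarray(damaged, bool)
            w_plus = np.log(e[d].mean() / e[~d].mean())
            w_minus = np.log((~e)[d].mean() / (~e)[~d].mean())
            return w_plus, w_minus

        # Synthetic grid cells (placeholders for the real predictor layers).
        rng = np.random.default_rng(5)
        n = 50_000
        conifer = rng.random(n) < 0.55          # coniferous forest present
        exposed = rng.random(n) < 0.30          # topographically exposed
        p_true = 0.02 + 0.06 * conifer + 0.05 * exposed   # assumed generating model
        damaged = rng.random(n) < p_true

        # Posterior damage probability per cell under the standard WofE assumption
        # of conditional independence of the layers given the damage state.
        prior = damaged.mean()
        logit = np.full(n, np.log(prior / (1 - prior)))
        for layer in (conifer, exposed):
            w_plus, w_minus = weights_of_evidence(layer, damaged)
            logit += np.where(layer, w_plus, w_minus)
        p_dam = 1.0 / (1.0 + np.exp(-logit))
        print("prior:", round(prior, 3), " P_DAM range:",
              round(p_dam.min(), 3), "-", round(p_dam.max(), 3))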

  11. Probability Estimates of Solar Particle Event Doses During a Period of Low Sunspot Number for Thinly-Shielded Spacecraft and Short Duration Missions

    NASA Technical Reports Server (NTRS)

    Atwell, William; Tylka, Allan J.; Dietrich, William; Rojdev, Kristina; Matzkind, Courtney

    2016-01-01

    In an earlier paper (Atwell, et al., 2015), we investigated solar particle event (SPE) radiation exposures (absorbed dose) to small, thinly-shielded spacecraft during a period when the sunspot number (SSN) was less than 30. These SPEs contain Ground Level Events (GLE), sub-GLEs, and sub-sub-GLEs (Tylka and Dietrich, 2009, Tylka and Dietrich, 2008, and Atwell, et al., 2008). GLEs are extremely energetic solar particle events having proton energies extending into the several GeV range and producing secondary particles in the atmosphere, mostly neutrons, observed with ground station neutron monitors. Sub-GLE events are less energetic, extending into the several hundred MeV range, but do not produce secondary atmospheric particles. Sub-sub GLEs are even less energetic with an observable increase in protons at energies greater than 30 MeV, but no observable proton flux above 300 MeV. In this paper, we consider those SPEs that occurred during 1973-2010 when the SSN was greater than 30 but less than 50. In addition, we provide probability estimates of absorbed dose based on mission duration with a 95% confidence level (CL). We also discuss the implications of these data and provide some recommendations that may be useful to spacecraft designers of these smaller spacecraft.

  12. Can "realist" randomised controlled trials be genuinely realist?

    PubMed

    Van Belle, Sara; Wong, Geoff; Westhorp, Gill; Pearson, Mark; Emmel, Nick; Manzano, Ana; Marchal, Bruno

    2016-01-01

    In this paper, we respond to a paper by Jamal and colleagues published in Trials in October 2015 and take the opportunity to continue the much-needed debate about what applied scientific realism is. The paper by Jamal et al. is useful because it exposes the challenges of combining a realist evaluation approach (as developed by Pawson and Tilley) with the randomised controlled trial (RCT) design. We identified three fundamental differences that are related to paradigmatic differences in the treatment of causation between post-positivist and realist logic: (1) the construct of mechanism, (2) the relation between mediators and moderators on the one hand and mechanisms and contexts on the other, and (3) the variable-oriented approach to analysis of causation versus the configurational approach. We show how Jamal et al. consider mechanisms as observable, external treatments and how their approach reduces complex causal processes to variables. We argue that their proposed RCT design cannot provide a truly realist understanding. Not only does the proposed realist RCT design not deal with the RCT's inherent inability to "unpack" complex interventions, it also does not enable the identification of the dynamic interplay among the intervention, actors, context, mechanisms and outcomes, which is at the core of realist research. As a result, the proposed realist RCT design is not, as we understand it, genuinely realist in nature. PMID:27387202

  13. Realist evaluation: an immanent critique.

    PubMed

    Porter, Sam

    2015-10-01

    This paper critically analyses realist evaluation, focussing on its primary analytical concepts: mechanisms, contexts, and outcomes. Noting that nursing investigators have had difficulty in operationalizing the concepts of mechanism and context, it is argued that their confusion is at least partially the result of ambiguities, inconsistencies, and contradictions in the realist evaluation model. Problematic issues include the adoption of empiricist and idealist positions, oscillation between determinism and voluntarism, subsumption of agency under structure, and categorical confusion between context and mechanism. In relation to outcomes, it is argued that realist evaluation's adoption of the fact/value distinction prevents it from taking into account the concerns of those affected by interventions. The aim of the paper is to use these immanent critiques of realist evaluation to construct an internally consistent realist approach to evaluation that is more amenable to being operationalized by nursing researchers. PMID:26392234

  14. Normal tissue complication probability model parameter estimation for xerostomia in head and neck cancer patients based on scintigraphy and quality of life assessments

    PubMed Central

    2012-01-01

    Background: With advances in modern radiotherapy (RT), many patients with head and neck (HN) cancer can be effectively cured. However, xerostomia is a common complication in patients after RT for HN cancer. The purpose of this study was to use the Lyman–Kutcher–Burman (LKB) model to derive parameters for the normal tissue complication probability (NTCP) for xerostomia based on scintigraphy assessments and quality of life (QoL) questionnaires. We performed validation tests of the Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC) guidelines against prospectively collected QoL and salivary scintigraphic data. Methods: Thirty-one patients with HN cancer were enrolled. Salivary excretion factors (SEFs) measured by scintigraphy and QoL data from self-reported questionnaires were used for NTCP modeling to describe the incidence of grade 3+ xerostomia. The NTCP parameters estimated from the QoL and SEF datasets were compared. Model performance was assessed using Pearson’s chi-squared test, Nagelkerke’s R2, the area under the receiver operating characteristic curve, and the Hosmer–Lemeshow test. The negative predictive value (NPV) was checked for the rate of correctly predicting the lack of incidence. Pearson’s chi-squared test was used to test the goodness of fit and association. Results: Using the LKB NTCP model and assuming n=1, the dose for uniform irradiation of the whole or partial volume of the parotid gland that results in 50% probability of a complication (TD50) and the slope of the dose–response curve (m) were determined from the QoL and SEF datasets, respectively. The NTCP-fitted parameters for local disease were TD50=43.6 Gy and m=0.18 with the SEF data, and TD50=44.1 Gy and m=0.11 with the QoL data. The rate of grade 3+ xerostomia for treatment plans meeting the QUANTEC guidelines was specifically predicted, with an NPV of 100%, using either the QoL or SEF dataset. Conclusions: Our study shows the agreement between the NTCP
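
    With n = 1 the LKB model reduces to a probit function of the mean gland dose, NTCP = Φ((D − TD50)/(m·TD50)). The sketch below evaluates this with the SEF-derived parameters quoted in the abstract; the dose values are arbitrary examples, not patient data.

        from math import erf, sqrt

        def lkb_ntcp(mean_dose_gy, td50_gy, m):
            """LKB NTCP with volume parameter n = 1, so the effective dose is the
            mean dose to the parotid gland: NTCP = Phi((D - TD50) / (m * TD50))."""
            t = (mean_dose_gy - td50_gy) / (m * td50_gy)
            return 0.5 * (1.0 + erf(t / sqrt(2.0)))

        # Parameters quoted above for the SEF data set: TD50 = 43.6 Gy, m = 0.18.
        for dose in (20.0, 30.0, 43.6, 50.0):
            print(f"mean dose {dose:5.1f} Gy -> NTCP = {lkb_ntcp(dose, 43.6, 0.18):.3f}")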

  15. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models

  16. A practical illustration of the importance of realistic individualized treatment rules in causal inference.

    PubMed

    Bembom, Oliver; van der Laan, Mark J

    2007-01-01

    The effect of vigorous physical activity on mortality in the elderly is difficult to estimate using conventional approaches to causal inference that define this effect by comparing the mortality risks corresponding to hypothetical scenarios in which all subjects in the target population engage in a given level of vigorous physical activity. A causal effect defined on the basis of such a static treatment intervention can only be identified from observed data if all subjects in the target population have a positive probability of selecting each of the candidate treatment options, an assumption that is highly unrealistic in this case since subjects with serious health problems will not be able to engage in higher levels of vigorous physical activity. This problem can be addressed by focusing instead on causal effects that are defined on the basis of realistic individualized treatment rules and intention-to-treat rules that explicitly take into account the set of treatment options that are available to each subject. We present a data analysis to illustrate that estimators of static causal effects in fact tend to overestimate the beneficial impact of high levels of vigorous physical activity while corresponding estimators based on realistic individualized treatment rules and intention-to-treat rules can yield unbiased estimates. We emphasize that the problems encountered in estimating static causal effects are not restricted to the IPTW estimator, but are also observed with the G-computation estimator, the DR-IPTW estimator, and the targeted MLE. Our analyses based on realistic individualized treatment rules and intention-to-treat rules suggest that high levels of vigorous physical activity may confer reductions in mortality risk on the order of 15-30%, although in most cases the evidence for such an effect does not quite reach the 0.05 level of significance. PMID:19079799

  17. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are <1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  18. Computation of Most Probable Numbers

    PubMed Central

    Russek, Estelle; Colwell, Rita R.

    1983-01-01

    A rapid computational method for maximum likelihood estimation of most-probable-number values, incorporating a modified Newton-Raphson method, is presented. The method offers a much greater reliability for the most-probable-number estimate of total viable bacteria, i.e., those capable of growth in laboratory media. PMID:6870242
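
    A minimal sketch of the maximum-likelihood computation described above, assuming the usual Poisson dilution model (illustrative Python; the tube volumes and counts are invented, and the published method adds safeguards not shown here).

      from math import exp

      def mpn_mle(tubes, lam0=1.0, tol=1e-10, max_iter=100):
          """Most-probable-number estimate (organisms per ml) by Newton-Raphson.
          tubes: list of (volume_ml, n_tubes, n_positive) for each dilution."""
          lam = lam0
          for _ in range(max_iter):
              score, hessian = 0.0, 0.0
              for v, n, x in tubes:
                  e = exp(-lam * v)
                  score += x * v * e / (1.0 - e) - (n - x) * v
                  hessian -= x * v * v * e / (1.0 - e) ** 2
              step = score / hessian
              lam -= step
              if abs(step) < tol:
                  break
          return lam

      # Three dilutions (10, 1, 0.1 ml portions), five tubes each, with 5/3/1 positives.
      print(mpn_mle([(10.0, 5, 5), (1.0, 5, 3), (0.1, 5, 1)]))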

  19. Generating realistic images using Kray

    NASA Astrophysics Data System (ADS)

    Tanski, Grzegorz

    2004-07-01

    Kray is an application for creating realistic images. It is written in the C++ programming language, has a text-based interface, and solves the global illumination problem using techniques such as radiosity, path tracing, and photon mapping.

  20. RAMESES publication standards: realist syntheses

    PubMed Central

    2013-01-01

    Background There is growing interest in realist synthesis as an alternative systematic review method. This approach offers the potential to expand the knowledge base in policy-relevant areas - for example, by explaining the success, failure or mixed fortunes of complex interventions. No previous publication standards exist for reporting realist syntheses. This standard was developed as part of the RAMESES (Realist And MEta-narrative Evidence Syntheses: Evolving Standards) project. The project's aim is to produce preliminary publication standards for realist systematic reviews. Methods We (a) collated and summarized existing literature on the principles of good practice in realist syntheses; (b) considered the extent to which these principles had been followed by published syntheses, thereby identifying how rigor may be lost and how existing methods could be improved; (c) used a three-round online Delphi method with an interdisciplinary panel of national and international experts in evidence synthesis, realist research, policy and/or publishing to produce and iteratively refine a draft set of methodological steps and publication standards; (d) provided real-time support to ongoing realist syntheses and the open-access RAMESES online discussion list so as to capture problems and questions as they arose; and (e) synthesized expert input, evidence syntheses and real-time problem analysis into a definitive set of standards. Results We identified 35 published realist syntheses, provided real-time support to 9 on-going syntheses and captured questions raised in the RAMESES discussion list. Through analysis and discussion within the project team, we summarized the published literature and common questions and challenges into briefing materials for the Delphi panel, comprising 37 members. Within three rounds this panel had reached consensus on 19 key publication standards, with an overall response rate of 91%. Conclusion This project used multiple sources to develop and

  1. The Challenge of Realistic TPV System Modeling

    NASA Astrophysics Data System (ADS)

    Aschaber, J.; Hebling, C.; Luther, J.

    2003-01-01

    Realistic modeling of a TPV system is a very demanding task. For a rough estimation of system limits, many simplifying assumptions are made about the complexity of a thermophotovoltaic converter; real systems obviously cannot be described this way. An alternative approach that can deal with all these complexities, such as arbitrary geometries, participating media, and temperature distributions, is the Monte Carlo method (MCM). This statistical method simulates radiative energy transfer by tracking the histories of a number of photons, beginning with emission by a radiating surface and ending with absorption on a surface or in a medium; all interactions along the way are considered. The disadvantage of large computation time compared with other methods is no longer a weakness given the speed of today's computers. This article describes different approaches to realistic TPV system simulation, focusing on statistical methods.
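
    A toy version of the photon-tracking idea outlined above (illustrative Python only; the plane-parallel geometry, absorption coefficient, and cell absorptivity are made-up values, not a model of an actual TPV converter).

      import math, random

      def track_photons(n_photons=100_000, gap_m=0.01, kappa=20.0, cell_absorptivity=0.7, seed=1):
          """Trace photons from an emitter plane across an absorbing gap to a PV cell plane."""
          rng = random.Random(seed)
          absorbed_in_gas = absorbed_by_cell = reflected_back = 0
          for _ in range(n_photons):
              mu = math.sqrt(rng.random())                 # Lambertian emission: cosine-weighted direction
              path_to_cell = gap_m / mu
              free_path = -math.log(rng.random()) / kappa  # Beer-Lambert free path in the medium
              if free_path < path_to_cell:
                  absorbed_in_gas += 1
              elif rng.random() < cell_absorptivity:
                  absorbed_by_cell += 1
              else:
                  reflected_back += 1                      # not re-traced in this toy version
          return (absorbed_in_gas / n_photons, absorbed_by_cell / n_photons, reflected_back / n_photons)

      print(track_photons())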

  2. Realistic Covariance Prediction for the Earth Science Constellation

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellation (ESC) include collision risk assessment between members of the constellation and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed using Monte Carlo techniques as well as by numerically integrating relative state probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by the NASA/Goddard Space Flight Center's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the Earth Science Constellation satellites: Aqua, Aura and Terra.
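
    The Monte Carlo branch of the collision-probability computation mentioned above can be sketched in a few lines (illustrative Python; the covariances, miss distance, and hard-body radius are invented numbers, and operational tools use more careful encounter-plane formulations).

      import numpy as np

      def collision_probability_mc(rel_mean_km, cov1, cov2, hard_body_radius_km, n=1_000_000, seed=1):
          """P(relative position at closest approach falls inside the combined hard-body sphere),
          assuming Gaussian position errors for both objects."""
          rng = np.random.default_rng(seed)
          combined_cov = np.asarray(cov1) + np.asarray(cov2)        # combined 3x3 position covariance
          samples = rng.multivariate_normal(rel_mean_km, combined_cov, size=n)
          miss = np.linalg.norm(samples, axis=1)
          return float(np.mean(miss < hard_body_radius_km))

      # Illustrative encounter: 500 m nominal miss distance, 50 m combined hard-body radius.
      c1 = np.diag([0.09, 0.25, 0.04])    # km^2
      c2 = np.diag([0.16, 0.36, 0.09])
      print(collision_probability_mc([0.5, 0.0, 0.0], c1, c2, 0.050))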

  3. Realistic Covariance Prediction For the Earth Science Constellations

    NASA Technical Reports Server (NTRS)

    Duncan, Matthew; Long, Anne

    2006-01-01

    Routine satellite operations for the Earth Science Constellations (ESC) include collision risk assessment between members of the constellations and other orbiting space objects. One component of the risk assessment process is computing the collision probability between two space objects. The collision probability is computed via Monte Carlo techniques as well as numerically integrating relative probability density functions. Each algorithm takes as inputs state vector and state vector uncertainty information for both objects. The state vector uncertainty information is expressed in terms of a covariance matrix. The collision probability computation is only as good as the inputs. Therefore, to obtain a collision calculation that is a useful decision-making metric, realistic covariance matrices must be used as inputs to the calculation. This paper describes the process used by NASA Goddard's Earth Science Mission Operations Project to generate realistic covariance predictions for three of the ESC satellites: Aqua, Aura, and Terra

  4. Design and Analysis of Salmonid Tagging Studies in the Columbia Basin, Volume VIII; New Model for Estimating Survival Probabilities and Residualization from a Release-Recapture Study of Fall Chinook Salmon Smolts in the Snake River, 1995 Technical Report.

    SciTech Connect

    Lowther, Alan B.; Skalski, John R.

    1997-09-01

    Standard release-recapture analysis using Cormack-Jolly-Seber (CJS) models to estimate survival probabilities between hydroelectric facilities for Snake River fall chinook salmon (Oncorhynchus tshawytscha) ignores the possibility of individual fish residualizing and completing their migration in the year following tagging.

  5. Determining Type Ia Supernova Host Galaxy Extinction Probabilities and a Statistical Approach to Estimating the Absorption-to-reddening Ratio RV

    NASA Astrophysics Data System (ADS)

    Cikota, Aleksandar; Deustua, Susana; Marleau, Francine

    2016-03-01

    We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B - V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, RV, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B - V) with RV = 3.1 and investigate the color excess probabilities E(B - V) with projected radial galaxy center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa-Sap, Sab-Sbp, Sbc-Scp, Scd-Sdm, S0, and irregular galaxy classes as a function of R/R25. We find that the largest expected reddening probabilities are in Sab-Sb and Sbc-Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio RV using color excess probability functions and find values of RV = 2.71 ± 1.58 for 21 SNe Ia observed in Sab-Sbp galaxies, and RV = 1.70 ± 0.38, for 34 SNe Ia observed in Sbc-Scp galaxies.

  6. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  7. Anderson localization for chemically realistic systems

    NASA Astrophysics Data System (ADS)

    Terletska, Hanna

    2015-03-01

    Disorder, which is ubiquitous in most materials, can strongly affect their properties. It may change their electronic structures or even cause localization of the electronic states, known as Anderson localization. Although substantial progress has been achieved in the description of Anderson localization, a proper mean-field theory of this phenomenon for more realistic systems remains elusive. Commonly used theoretical methods such as the coherent potential approximation and its cluster extensions fail to describe the Anderson transition, as the average density of states (DOS) employed in such theories is not critical at the transition. However, near the transition, due to the spatial confinement of carriers, the local DOS becomes highly skewed with a log-normal distribution, for which the most probable and the typical values differ noticeably from the average value. Dobrosavljevic et al. incorporated such ideas in their typical medium theory (TMT) and showed that the typical (not average) DOS is critical at the transition. While the TMT is able to capture the localized states, as a local single-site theory it still has several drawbacks. For the disordered Anderson model in three dimensions it underestimates the critical disorder strength and fails to capture the re-entrance behavior of the mobility edge. We have recently developed a cluster extension of the TMT, which addresses these drawbacks by systematically incorporating non-local corrections. This approach converges quickly with cluster size and allows us to incorporate the effects of interactions and realistic electronic structure. As first steps towards realistic material modeling, we extended our TMDCA formalism to systems with off-diagonal disorder and multiple-band structures. We also applied our TMDCA scheme to systems with both disorder and interactions and found that correlation effects tend to stabilize metallic behavior even in two dimensions. This work was supported by DOE SciDAC Grant No. DE-FC02
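
    The distinction drawn above between the typical and the average density of states can be illustrated with a toy log-normal sample (minimal Python sketch, not a TMT or TMDCA calculation).

      import numpy as np

      rng = np.random.default_rng(0)
      # Toy local DOS values near the transition: a broad, highly skewed (log-normal) distribution.
      local_dos = rng.lognormal(mean=-2.0, sigma=2.5, size=100_000)

      average_dos = local_dos.mean()                     # arithmetic mean (the CPA-like average)
      typical_dos = np.exp(np.mean(np.log(local_dos)))   # geometric mean used as the TMT order parameter

      print(f"average DOS {average_dos:.3f} vs typical DOS {typical_dos:.3f}")
      # The typical value collapses toward zero as the distribution broadens while the arithmetic
      # average stays finite; this is why the average DOS is not critical at the transition.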

  8. My lived experiences are more important than your probabilities: The role of individualized risk estimates for decision making about participation in the Study of Tamoxifen and Raloxifene (STAR)

    PubMed Central

    Holmberg, Christine; Waters, Erika A.; Whitehouse, Katie; Daly, Mary; McCaskill-Stevens, Worta

    2015-01-01

    Background Decision making experts emphasize that understanding and using probabilistic information is important for making informed decisions about medical treatments involving complex risk-benefit tradeoffs. Yet empirical research demonstrates that individuals may not use probabilities when making decisions. Objectives To explore decision making and the use of probabilities for decision making from the perspective of women who were risk-eligible to enroll in the Study of Tamoxifen and Raloxifene (STAR). Methods We conducted narrative interviews with 20 women who agreed to participate in STAR and 20 women who declined. The project was based on a narrative approach. Analysis included the development of summaries of each narrative and a thematic analysis, with a coding scheme developed inductively to code all transcripts and identify emerging themes. Results Interviewees explained and embedded their STAR decisions within experiences encountered throughout their lives. Such lived experiences included but were not limited to breast cancer family history, personal history of breast biopsies, and experiences or assumptions about taking tamoxifen or medicines more generally. Conclusions Women’s explanations of their decisions about participating in a breast cancer chemoprevention trial were more complex than decision strategies that rely solely on a quantitative risk-benefit analysis of probabilities derived from populations. In addition to precise risk information, clinicians and risk communicators should recognize the importance and legitimacy of lived experience in individual decision making. PMID:26183166

  9. Minimizing the probable maximum flood

    SciTech Connect

    Woodbury, M.S.; Pansic, N. ); Eberlein, D.T. )

    1994-06-01

    This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.

  10. Probability of sea level rise

    SciTech Connect

    Titus, J.G.; Narayanan, V.K.

    1995-10-01

    The report develops probability-based projections that can be added to local tide-gage trends to estimate future sea level at particular locations. It uses the same models employed by previous assessments of sea level rise. The key coefficients in those models are based on subjective probability distributions supplied by a cross-section of climatologists, oceanographers, and glaciologists.

  11. Evaluation of microbial release probabilities

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Work undertaken to improve the estimation of the probability of release of microorganisms from unmanned Martian landing spacecraft is summarized. An analytical model is described for the development of numerical values for release parameters and release mechanisms applicable to flight missions are defined. Laboratory test data are used to evolve parameter values for use by flight projects in estimating numerical values for release probabilities. The analysis treats microbial burden located on spacecraft surfaces, between mated surfaces, and encapsulated within materials.

  12. Estimating the probability of elevated nitrate (NO2+NO3-N) concentrations in ground water in the Columbia Basin Ground Water Management Area, Washington

    USGS Publications Warehouse

    Frans, Lonna M.

    2000-01-01

    Logistic regression was used to relate anthropogenic (man-made) and natural factors to the occurrence of elevated concentrations of nitrite plus nitrate as nitrogen in ground water in the Columbia Basin Ground Water Management Area, eastern Washington. Variables that were analyzed included well depth, depth of well casing, ground-water recharge rates, presence of canals, fertilizer application amounts, soils, surficial geology, and land-use types. The variables that best explain the occurrence of nitrate concentrations above 3 milligrams per liter in wells were the amount of fertilizer applied annually within a 2-kilometer radius of a well and the depth of the well casing; the variables that best explain the occurrence of nitrate above 10 milligrams per liter included the amount of fertilizer applied annually within a 3-kilometer radius of a well, the depth of the well casing, and the mean soil hydrologic group, which is a measure of soil infiltration rate. Based on the relations between these variables and elevated nitrate concentrations, models were developed using logistic regression that predict the probability that ground water will exceed a nitrate concentration of either 3 milligrams per liter or 10 milligrams per liter. Maps were produced that illustrate the predicted probability that ground-water nitrate concentrations will exceed 3 milligrams per liter or 10 milligrams per liter for wells cased to 78 feet below land surface (median casing depth) and the predicted depth to which wells would need to be cased in order to have an 80-percent probability of drawing water with a nitrate concentration below either 3 milligrams per liter or 10 milligrams per liter. Maps showing the predicted probability for the occurrence of elevated nitrate concentrations indicate that the irrigated agricultural regions are most at risk. The predicted depths to which wells need to be cased in order to have an 80-percent chance of obtaining low nitrate ground water exceed 600 feet
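
    A minimal sketch of this kind of logistic-regression probability model (illustrative Python; the coefficients below are invented placeholders, not the values fitted in the study, and only two of the study's predictors are included).

      import numpy as np

      COEFS = (-1.0, 0.004, -0.01)    # (intercept, fertilizer kg/ha, casing depth ft); assumed values

      def p_exceed_3mgL(fertilizer_kg_per_ha, casing_depth_ft, coefs=COEFS):
          """Probability that nitrate exceeds 3 mg/L under a fitted logistic model."""
          b0, b_fert, b_depth = coefs
          z = b0 + b_fert * fertilizer_kg_per_ha + b_depth * casing_depth_ft
          return 1.0 / (1.0 + np.exp(-z))

      def casing_depth_for_low_nitrate(fertilizer_kg_per_ha, target_p_exceed=0.20, coefs=COEFS):
          """Casing depth at which P(exceed 3 mg/L) drops to target_p_exceed, i.e. an
          80-percent chance of drawing low-nitrate water; solves z = logit(target)."""
          b0, b_fert, b_depth = coefs
          logit = np.log(target_p_exceed / (1.0 - target_p_exceed))
          return (logit - b0 - b_fert * fertilizer_kg_per_ha) / b_depth

      print(p_exceed_3mgL(500.0, 78.0))              # probability at the median casing depth
      print(casing_depth_for_low_nitrate(500.0))     # depth needed for an 80% chance of low nitrate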

  13. Exploring the Overestimation of Conjunctive Probabilities

    PubMed Central

    Nilsson, Håkan; Rieskamp, Jörg; Jenny, Mirjam A.

    2013-01-01

    People often overestimate probabilities of conjunctive events. The authors explored whether the accuracy of conjunctive probability estimates can be improved by increased experience with relevant constituent events and by using memory aids. The first experiment showed that increased experience with constituent events increased the correlation between the estimated and the objective conjunctive probabilities, but that it did not reduce overestimation of conjunctive probabilities. The second experiment showed that reducing cognitive load with memory aids for the constituent probabilities led to improved estimates of the conjunctive probabilities and to decreased overestimation of conjunctive probabilities. To explain the cognitive process underlying people’s probability estimates, the configural weighted average model was tested against the normative multiplicative model. The configural weighted average model generates conjunctive probabilities that systematically overestimate objective probabilities although the generated probabilities still correlate strongly with the objective probabilities. For the majority of participants this model was better than the multiplicative model in predicting the probability estimates. However, when memory aids were provided, the predictive accuracy of the multiplicative model increased. In sum, memory tools can improve people’s conjunctive probability estimates. PMID:23460026
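
    To make the model comparison above concrete, here is a small illustration of the normative multiplicative rule against a simple weighted average of the constituents, a simplified stand-in for the configural weighted average model (illustrative Python; the weighting is an assumption, not the fitted configural weights).

      def multiplicative(p_a, p_b):
          """Normative conjunction probability for independent constituents."""
          return p_a * p_b

      def weighted_average(p_a, p_b, w_low=0.6):
          """Average the constituents, weighting the less likely one more heavily."""
          lo, hi = sorted((p_a, p_b))
          return w_low * lo + (1.0 - w_low) * hi

      for pa, pb in [(0.8, 0.7), (0.9, 0.3), (0.5, 0.5)]:
          print(pa, pb, "product:", round(multiplicative(pa, pb), 3),
                "averaging:", round(weighted_average(pa, pb), 3))
      # The averaging rule never falls below the product and typically exceeds it (overestimation),
      # yet the two still correlate, matching the pattern reported above.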

  14. The probabilities of unique events.

    PubMed

    Khemlani, Sangeet S; Lotstein, Max; Johnson-Laird, Phil

    2012-01-01

    Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224

  15. The Probabilities of Unique Events

    PubMed Central

    Khemlani, Sangeet S.; Lotstein, Max; Johnson-Laird, Phil

    2012-01-01

    Many theorists argue that the probabilities of unique events, even real possibilities such as President Obama's re-election, are meaningless. As a consequence, psychologists have seldom investigated them. We propose a new theory (implemented in a computer program) in which such estimates depend on an intuitive non-numerical system capable only of simple procedures, and a deliberative system that maps intuitions into numbers. The theory predicts that estimates of the probabilities of conjunctions should often tend to split the difference between the probabilities of the two conjuncts. We report two experiments showing that individuals commit such violations of the probability calculus, and corroborating other predictions of the theory, e.g., individuals err in the same way even when they make non-numerical verbal estimates, such as that an event is highly improbable. PMID:23056224

  16. A more realistic approach to the cumulative pregnancy rate after in-vitro fertilization.

    PubMed

    Stolwijk, A M; Hamilton, C J; Hollanders, J M; Bastiaans, L A; Zielhuis, G A

    1996-03-01

    As most studies overestimate the cumulative pregnancy rate, a method is proposed to estimate a more realistic cumulative pregnancy rate by taking into account the reasons for an early cessation of treatment with in-vitro fertilization (IVF). Three methods for calculating cumulative pregnancy rates were compared. The first method assumed that those who stopped treatment had no chance at all of pregnancy. The second method, the one used most often, assumed the same probability of pregnancy for those who stopped as for those who continued. The third method assumed that only those who stopped treatment, because of a medical indication, had no chance at all of pregnancy and that the others who stopped had the same probability of pregnancy as those who continued treatment. Data were used from 616 women treated at the University Hospital Nijmegen, Nijmegen, The Netherlands. The cumulative pregnancy rates after five initiated IVF cycles for the three calculation methods were in the ranges 37-51% for the positive pregnancy test result, 33-55% for a clinical pregnancy and 30-56% for an ongoing pregnancy. As expected, the first method underestimated the cumulative pregnancy rate and the second overestimated it. The third method produced the most realistic cumulative pregnancy rates. PMID:8671287
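
    The three calculation methods compared above amount to different censoring assumptions in a per-cycle life-table computation; a minimal sketch is given below (illustrative Python; the cohort size and per-cycle counts are invented).

      def cumulative_pregnancy_rates(cycles, n_start):
          """cycles: per-cycle counts of pregnancies and of women stopping afterwards for
          medical vs. other reasons; n_start: number of women starting cycle 1."""
          at_risk, carried_medical, total_pregnancies = n_start, 0, 0
          surv_censor_all = surv_censor_nonmedical = 1.0
          for c in cycles:
              preg, stop_med, stop_other = c["pregnant"], c["stop_medical"], c["stop_other"]
              total_pregnancies += preg
              surv_censor_all *= 1.0 - preg / at_risk                             # method 2: all stoppers censored
              surv_censor_nonmedical *= 1.0 - preg / (at_risk + carried_medical)  # method 3: medical stoppers kept with zero chance
              carried_medical += stop_med
              at_risk -= preg + stop_med + stop_other
          return {"method 1 (all stoppers never pregnant)": total_pregnancies / n_start,
                  "method 2 (stoppers like continuers)": 1.0 - surv_censor_all,
                  "method 3 (only medical stoppers never pregnant)": 1.0 - surv_censor_nonmedical}

      cycles = [dict(pregnant=15, stop_medical=5, stop_other=10),
                dict(pregnant=10, stop_medical=5, stop_other=10),
                dict(pregnant=8, stop_medical=0, stop_other=0)]
      print(cumulative_pregnancy_rates(cycles, 100))   # method 1 < method 3 < method 2, as in the abstract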

  17. Simulation of realistic retinoscopic measurement

    NASA Astrophysics Data System (ADS)

    Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.

    2007-03-01

    Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data of ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results reproduce clinical observations, including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to other, more extensively abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.

  18. Electromagnetic Scattering from Realistic Targets

    NASA Technical Reports Server (NTRS)

    Lee, Shung-Wu; Jin, Jian-Ming

    1997-01-01

    The general goal of the project is to develop computational tools for calculating radar signature of realistic targets. A hybrid technique that combines the shooting-and-bouncing-ray (SBR) method and the finite-element method (FEM) for the radiation characterization of microstrip patch antennas in a complex geometry was developed. In addition, a hybridization procedure to combine moment method (MoM) solution and the SBR method to treat the scattering of waveguide slot arrays on an aircraft was developed. A list of journal articles and conference papers is included.

  19. Imprecise probabilities in engineering analyses

    NASA Astrophysics Data System (ADS)

    Beer, Michael; Ferson, Scott; Kreinovich, Vladik

    2013-05-01

    Probabilistic uncertainty and imprecision in structural parameters and in environmental conditions and loads are challenging phenomena in engineering analyses. They require appropriate mathematical modeling and quantification to obtain realistic results when predicting the behavior and reliability of engineering structures and systems. But the modeling and quantification is complicated by the characteristics of the available information, which involves, for example, sparse data, poor measurements and subjective information. This raises the question whether the available information is sufficient for probabilistic modeling or rather suggests a set-theoretical approach. The framework of imprecise probabilities provides a mathematical basis to deal with these problems which involve both probabilistic and non-probabilistic information. A common feature of the various concepts of imprecise probabilities is the consideration of an entire set of probabilistic models in one analysis. The theoretical differences between the concepts mainly concern the mathematical description of the set of probabilistic models and the connection to the probabilistic models involved. This paper provides an overview on developments which involve imprecise probabilities for the solution of engineering problems. Evidence theory, probability bounds analysis with p-boxes, and fuzzy probabilities are discussed with emphasis on their key features and on their relationships to one another. This paper was especially prepared for this special issue and reflects, in various ways, the thinking and presentation preferences of the authors, who are also the guest editors for this special issue.
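
    As a concrete instance of one of the structures discussed above, a probability box can be formed by enveloping the CDFs obtained when a distribution's parameters are only known to lie in intervals (minimal Python sketch; the normal family and the interval bounds are assumptions chosen for illustration).

      import numpy as np
      from math import erf, sqrt

      def normal_cdf(x, mu, sigma):
          return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

      def pbox(x_grid, mu_interval, sigma_interval, grid=21):
          """Lower/upper CDF bounds for a normal quantity whose mean and standard deviation
          are only known to intervals (a simple parametric p-box)."""
          mus = np.linspace(*mu_interval, grid)
          sigmas = np.linspace(*sigma_interval, grid)
          cdfs = np.array([[normal_cdf(x, m, s) for x in x_grid] for m in mus for s in sigmas])
          return cdfs.min(axis=0), cdfs.max(axis=0)     # envelope over the admissible parameter set

      x = np.linspace(0.0, 20.0, 5)
      lower, upper = pbox(x, mu_interval=(8.0, 11.0), sigma_interval=(1.5, 2.5))
      for xi, lo, hi in zip(x, lower, upper):
          print(f"P(X <= {xi:4.1f}) lies in [{lo:.3f}, {hi:.3f}]")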

  20. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013

    USGS Publications Warehouse

    Eash, David A.

    2015-01-01

    An examination was conducted to understand why the 1987 single-variable RREs seem to provide better accuracy and less bias than either of the 2013 multi- or single-variable RREs. A comparison of 1-percent annual exceedance-probability regression lines for hydrologic regions 1-4 from the 1987 single-variable RREs and for flood regions 1-3 from the 2013 single-variable RREs indicates that the 1987 single-variable regional-regression lines generally have steeper slopes and lower discharges when compared to 2013 single-variable regional-regression lines for corresponding areas of Iowa. The combination of the definition of hydrologic regions, the lower discharges, and the steeper slopes of regression lines associated with the 1987 single-variable RREs seems to provide better accuracy and less bias when compared to the 2013 multi- or single-variable RREs; better accuracy and less bias were observed particularly for drainage areas less than 2 mi², and also for some drainage areas between 2 and 20 mi². The 2013 multi- and single-variable RREs are considered to provide better accuracy and less bias for larger drainage areas. Results of this study indicate that additional research is needed to address the curvilinear relation between drainage area and AEPDs for areas of Iowa.

  1. Electrokinetic transport in realistic nanochannels

    NASA Astrophysics Data System (ADS)

    Wang, Moran; Liu, Jin; Kang, Qinjun

    2009-11-01

    When an electrolyte solution comes into contact with a solid surface, the surface will likely become charged through an electrochemical adsorption process. The surface charge in general varies with the local bulk ionic concentration, the pH value, and the temperature of the solution, and even with the double-layer interactions in a narrow channel. Most previous studies are based on a constant zeta potential or surface charge density assumption, which does not reflect the realistic charge status at interfaces and may lead to inaccurate predictions. In this work, we first develop a generalized model for electrochemical boundary conditions on solid-liquid interfaces, which can closely approximate the known experimental properties. We further present nonequilibrium molecular dynamics (NEMD) simulations of electrokinetic transport in nanochannels. We take silica and carbon as examples of channel materials. Both monovalent and multivalent ionic solutions are considered. The electrokinetic transport properties of realistic nanochannels are thereby studied, and a multiscale analysis for a new energy conversion device is performed.

  2. Arbuscular mycorrhizal propagules in soils from a tropical forest and an abandoned cornfield in Quintana Roo, Mexico: visual comparison of most-probable-number estimates.

    PubMed

    Ramos-Zapata, José A; Guadarrama, Patricia; Navarro-Alberto, Jorge; Orellana, Roger

    2011-02-01

    The present study was aimed at comparing the number of arbuscular mycorrhizal fungi (AMF) propagules found in soil from a mature tropical forest and that found in an abandoned cornfield in Noh-Bec, Quintana Roo, Mexico, during three seasons. Agricultural practices can dramatically reduce the availability and viability of AMF propagules, and in this way delay the regeneration of tropical forests in abandoned agricultural areas. In addition, rainfall seasonality, which characterizes deciduous tropical forests, may strongly influence AMF propagule density. To compare AMF propagule numbers between sites and seasons (summer rainy, winter rainy and dry season), a "most probable number" (MPN) bioassay was conducted under greenhouse conditions employing Sorghum vulgare L. as the host plant. Results showed an average value of 3.5 ± 0.41 propagules in 50 ml of soil for the mature forest while the abandoned cornfield had 15.4 ± 5.03 propagules in 50 ml of soil. Likelihood analysis showed no statistical differences in MPN of propagules between seasons within each site, or between sites, except for the summer rainy season, for which soil from the abandoned cornfield had eight times as many propagules as soil from the mature forest site. Propagules of arbuscular mycorrhizal fungi remained viable throughout the sampling seasons at both sites. Abandoned areas resulting from traditional slash-and-burn agriculture practices involving maize did not show a lower number of AMF propagules, which should allow the establishment of mycotrophic plants, thus maintaining the AMF inoculum potential in these soils. PMID:20714755

  3. Optomechanical considerations for realistic tolerancing

    NASA Astrophysics Data System (ADS)

    Herman, Eric; Sasián, José; Youngworth, Richard N.

    2013-09-01

    Optical tolerancing simulation has improved so that the modeling of optomechanical accuracy can better predict as-built performance. A key refinement being proposed within this paper is monitoring formal interference fits and checking lens elements within their mechanical housings. Without proper checks, simulations may become physically unrealizable and pessimistic, thereby resulting in lower simulated yields. An improved simulation method has been defined and demonstrated in this paper with systems that do not have barrel constraints. The demonstration cases clearly show the trend of the beneficial impact with yield results, as a yield increase of 36.3% to 39.2% is garnered by one example. Considerations in simulating the realistic optomechanical system will assist in controlling cost and providing more accurate simulation results.

  4. Realistic Solar Surface Convection Simulations

    NASA Technical Reports Server (NTRS)

    Stein, Robert F.; Nordlund, Ake

    2000-01-01

    We perform essentially parameter free simulations with realistic physics of convection near the solar surface. We summarize the physics that is included and compare the simulation results with observations. Excellent agreement is obtained for the depth of the convection zone, the p-mode frequencies, the p-mode excitation rate, the distribution of the emergent continuum intensity, and the profiles of weak photospheric lines. We describe how solar convection is nonlocal. It is driven from a thin surface thermal boundary layer where radiative cooling produces low entropy gas which forms the cores of the downdrafts in which most of the buoyancy work occurs. We show that turbulence and vorticity are mostly confined to the intergranular lanes and underlying downdrafts. Finally, we illustrate our current work on magneto-convection.

  5. Using hierarchical Bayesian multi-species mixture models to estimate tandem hoop-net based habitat associations and detection probabilities of fishes in reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2015-01-01

    Species distribution models are useful tools to evaluate habitat relationships of fishes. We used hierarchical Bayesian multispecies mixture models to evaluate the relationships of both detection and abundance with habitat of reservoir fishes caught using tandem hoop nets. A total of 7,212 fish from 12 species were captured, and the majority of the catch was composed of Channel Catfish Ictalurus punctatus (46%), Bluegill Lepomis macrochirus (25%), and White Crappie Pomoxis annularis (14%). Detection estimates ranged from 8% to 69%, and modeling results suggested that fishes were primarily influenced by reservoir size and context, water clarity and temperature, and land-use types. Species were differentially abundant within and among habitat types, and some fishes were found to be more abundant in turbid, less impacted (e.g., by urbanization and agriculture) reservoirs with longer shoreline lengths, whereas other species were found more often in clear, nutrient-rich impoundments that had generally shorter shoreline lengths and were surrounded by a higher percentage of agricultural land. Our results demonstrated that habitat and reservoir characteristics may differentially benefit species and assemblage structure. This study provides a useful framework for evaluating capture efficiency for not only hoop nets but other gear types used to sample fishes in reservoirs.

  6. Assessing the influence of topographical data and of the calibration procedure on the estimation of flood inundation probabilities and associated uncertainty

    NASA Astrophysics Data System (ADS)

    Fabio, Pamela; Candela, Angela; Aronica, Giuseppe T.

    2010-05-01

    Floods are considered the most frequent natural disaster worldwide and may have serious socio-economic impacts on a community. In order to accomplish flood risk mitigation, flood risk analysis and assessment are required to provide information on current or future flood hazard and risks. Hazard and risk maps involve different data, expertise and effort, depending also on the end-users. In general practice, more or less advanced deterministic approaches are usually used, but probabilistic approaches seem to be more correct and better suited for modelling flood inundation. Two very important matters remain open for research: the calibration of the hydraulic model (oriented towards the estimation of effective roughness parameters) and the uncertainties (e.g. related to data, model structure and parameterisation) affecting flood hazard mapping results. Here, a new way to incorporate uncertainty in flood hazard mapping will be applied using more accurate topographical data and a new mesh for a complex two-dimensional hyperbolic finite element model. Through a comparison among the resulting hazard maps, the influence of these kinds of data will be shown. Moreover, in order to show the influence of the calibration procedure on the final hazard maps, a further comparison will be carried out. The calibration of the 2D hydraulic model will be carried out by combining more than one type of observational data. To date, limited applications exist, chiefly because data sets for historical events are quite rare. The procedures were tested on a flood-prone area located in the southern part of Sicily, Italy.

  7. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry in isothermal conditions at four different operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) value was approximately constant (Ea,int = 95.2 kJ mol⁻¹ and Ea,diff = 96.6 kJ mol⁻¹, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf of Ea) do not depend on the operating temperature and are highly symmetrical (shape factor = 1.00). The obtained isothermal decomposition results were compared with the corresponding results for the nonisothermal decomposition process of NaHCO3.
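
    The isothermal Weibull description used above corresponds to the conversion curve α(t) = 1 - exp[-(t/η)^β]; the sketch below shows how β, η, and an Arrhenius-type activation energy can be extracted from such curves (illustrative Python on synthetic, noise-free data; the real study fitted measured thermogravimetric curves).

      import numpy as np

      R = 8.314  # J mol^-1 K^-1

      def fit_weibull(t, alpha):
          """Fit alpha(t) = 1 - exp(-(t/eta)^beta) via ln(-ln(1-alpha)) = beta*ln(t) - beta*ln(eta)."""
          y = np.log(-np.log(1.0 - alpha))
          beta, intercept = np.polyfit(np.log(t), y, 1)
          return beta, np.exp(-intercept / beta)

      temps = np.array([380.0, 400.0, 420.0, 440.0])                # K
      true_eta = 1e-9 * np.exp(95_000.0 / (R * temps))              # assumed Arrhenius law, Ea = 95 kJ/mol
      t = np.linspace(1.0, 400.0, 200)                              # minutes
      etas = []
      for T, eta_T in zip(temps, true_eta):
          alpha = np.clip(1.0 - np.exp(-(t / eta_T) ** 1.07), 1e-6, 1.0 - 1e-6)
          beta_fit, eta_fit = fit_weibull(t, alpha)
          etas.append(eta_fit)

      # Arrhenius analysis through the scale parameter: ln(1/eta) = ln(A) - Ea/(R*T).
      slope, _ = np.polyfit(1.0 / temps, np.log(1.0 / np.array(etas)), 1)
      print("apparent activation energy ~", round(-slope * R / 1000.0, 1), "kJ/mol")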

  8. Realist RCTs of complex interventions - an oxymoron.

    PubMed

    Marchal, Bruno; Westhorp, Gill; Wong, Geoff; Van Belle, Sara; Greenhalgh, Trisha; Kegels, Guy; Pawson, Ray

    2013-10-01

    Bonell et al. discuss the challenges of carrying out randomised controlled trials (RCTs) to evaluate complex interventions in public health, and consider the role of realist evaluation in enhancing this design (Bonell, Fletcher, Morton, Lorenc, & Moore, 2012). They argue for a "synergistic, rather than oppositional relationship between realist and randomised evaluation" and that "it is possible to benefit from the insights provided by realist evaluation without relinquishing the RCT as the best means of examining intervention causality." We present counter-arguments to their analysis of realist evaluation and their recommendations for realist RCTs. Bonell et al. are right to question whether and how (quasi-)experimental designs can be improved to better evaluate complex public health interventions. However, the paper does not explain how a research design that is fundamentally built upon a positivist ontological and epistemological position can be meaningfully adapted to allow it to be used from within a realist paradigm. The recommendations for "realist RCTs" do not sufficiently take into account important elements of complexity that pose major challenges for the RCT design. They also ignore key tenets of the realist evaluation approach. We propose that the adjective 'realist' should continue to be used only for studies based on a realist philosophy and whose analytic approach follows the established principles of realist analysis. It seems more correct to call the approach proposed by Bonell and colleagues 'theory informed RCT', which indeed can help in enhancing RCTs. PMID:23850482

  9. Flood hazard probability mapping method

    NASA Astrophysics Data System (ADS)

    Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart

    2015-04-01

    In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying a flood probability value for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to subtract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features on the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.

  10. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
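
    One classic example is the matching (derangement) problem: the probability that a randomly shuffled list leaves no element in its original position tends to 1/e as the list grows. A quick check by simulation (illustrative Python):

      import math, random

      def prob_no_fixed_point(n_items=10, trials=200_000, seed=42):
          """Estimate the probability that a random permutation is a derangement."""
          rng = random.Random(seed)
          items = list(range(n_items))
          hits = 0
          for _ in range(trials):
              perm = items[:]
              rng.shuffle(perm)
              hits += all(p != i for i, p in enumerate(perm))
          return hits / trials

      print("simulated:", prob_no_fixed_point())
      print("1/e      :", 1.0 / math.e)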

  11. Realistic texture in simulated thermal infrared imagery

    NASA Astrophysics Data System (ADS)

    Ward, Jason T.

    Creating a visually-realistic yet radiometrically-accurate simulation of thermal infrared (TIR) imagery is a challenge that has plagued members of industry and academia alike. The goal of imagery simulation is to provide a practical alternative to the often staggering effort required to collect actual data. Previous attempts at simulating TIR imagery have suffered from a lack of texture---the simulated scenes generally failed to reproduce the natural variability seen in actual TIR images. Realistic synthetic TIR imagery requires modeling sources of variability including surface effects such as solar insolation and convective heat exchange as well as sub-surface effects such as density and water content. This research effort utilized the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, developed at the Rochester Institute of Technology, to investigate how these additional sources of variability could be modeled to correctly and accurately provide simulated TIR imagery. Actual thermal data were collected, analyzed, and exploited to determine the underlying thermodynamic phenomena and ascertain how these phenomena are best modeled. The underlying task was to determine how to apply texture in the thermal region to attain radiometrically-correct, visually-appealing simulated imagery. Three natural desert scenes were used to test the methodologies that were developed for estimating per-pixel thermal parameters which could then be used for TIR image simulation by DIRSIG. Additional metrics were devised and applied to the synthetic images to further quantify the success of this research. The resulting imagery demonstrated that these new methodologies for modeling TIR phenomena and the utilization of an improved DIRSIG tool improved the root mean-squared error (RMSE) of our synthetic TIR imagery by up to 88%.

  12. Failure probability of PWR reactor coolant loop piping. [Double-ended guillotine break

    SciTech Connect

    Lo, T.; Woo, H.H.; Holman, G.S.; Chou, C.K.

    1984-02-01

    This paper describes the results of assessments performed on the PWR coolant loop piping of Westinghouse and Combustion Engineering plants. For direct double-ended guillotine break (DEGB), consideration was given to crack existence probability, initial crack size distribution, hydrostatic proof test, preservice inspection, leak detection probability, crack growth characteristics, and failure criteria based on the net section stress failure and tearing modulus stability concept. For indirect DEGB, fragilities of major component supports were estimated. The system level fragility was then calculated based on the Boolean expression involving these fragilities. Indirect DEGB due to seismic effects was calculated by convolving the system level fragility and the seismic hazard curve. The results indicate that the probability of occurrence of both direct and indirect DEGB is extremely small, thus, postulation of DEGB in design should be eliminated and replaced by more realistic criteria.

  13. Probability distributions for quantum stress tensors measured in a finite time interval

    NASA Astrophysics Data System (ADS)

    Fewster, Christopher J.; Ford, L. H.

    2015-11-01

    A meaningful probability distribution for measurements of a quantum stress tensor operator can only be obtained if the operator is averaged in time or in spacetime. This averaging can be regarded as a description of the measurement process. Realistic measurements can be expected to begin and end at finite times, which means that they are described by functions with compact support, which we will also take to be smooth. Here we study the probability distributions for stress tensor operators averaged with such functions of time, in the vacuum state of a massless free field. Our primary aim is to understand the asymptotic form of the distribution which describes the probability of large vacuum fluctuations. Our approach involves asymptotic estimates for the high moments of the distribution. These estimates in turn may be used to obtain estimates for the asymptotic form of the probability distribution. Our results show that averaging over a finite interval results in a probability distribution which falls more slowly than for the case of Lorentzian averaging, and both fall more slowly than exponentially. This indicates that vacuum fluctuations effects can dominate over thermal fluctuations in some circumstances.

  14. On Probability Domains III

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2015-12-01

    Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.

  15. Estimation of return periods for extreme sea levels: a simplified empirical correction of the joint probabilities method with examples from the French Atlantic coast and three ports in the southwest of the UK

    NASA Astrophysics Data System (ADS)

    Pirazzoli, Paolo Antonio; Tomasin, Alberto

    2007-04-01

    The joint probability method (JPM) to estimate the probability of extreme sea levels (Pugh and Vassie, Extreme sea-levels from tide and surge probability. Proc. 16th Coastal Engineering Conference, 1978, Hamburg, American Society of Civil Engineers, New York, pp 911-930, 1979) has been applied to the hourly records of 13 tide-gauge stations of the tidally dominated Atlantic coast of France (including Brest, since 1860) and to three stations in the southwest of the UK (including Newlyn, since 1916). The cumulative total length of the available records (more than 426 years) is variable from 1 to 130 years when individual stations are considered. It appears that heights estimated with the JPM are almost systematically greater than the extreme heights recorded. Statistical analysis shows that this could be due: (1) to surge-tide interaction (that may tend to damp surge values that occur at the time of the highest tide levels), and (2) to the fact that major surges often occur in seasonal periods that may not correspond to those of extreme astronomical tides. We have determined at each station empirical ad hoc correction coefficients that take into account the above two factors separately, or together, and estimated return periods for extreme water levels also at stations where only short records are available. For seven long records, for which estimations with other computing methods (e.g. generalized extreme value [GEV] distribution and Gumbel) can be attempted, average estimations of extreme values appear slightly overestimated in relation to the actual maximum records by the uncorrected JPM (+16.7 ± 7.2 cm), and by the Gumbel method alone (+10.3 ± 6.3 cm), but appear closer to the reality with the GEV distribution (-2.0 ± 5.3 cm) and with the best-fitting correction to the JPM (+2.9 ± 4.4 cm). Because the GEV analysis can hardly be extended to short records, it is proposed to apply at each station, especially for short records, the JPM and the site-dependent ad
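
    In its simplest form, the joint probabilities method treats the predicted tide and the non-tidal surge as independent and combines their empirical hourly distributions; a minimal sketch is given below (illustrative Python on random placeholder records; the empirical corrections for surge-tide interaction and seasonality discussed above are not included).

      import numpy as np

      def jpm_hourly_exceedance(tide_hourly, surge_hourly, level):
          """P(tide + surge > level) assuming independence, from empirical hourly records."""
          tide_vals, tide_counts = np.unique(np.round(tide_hourly, 2), return_counts=True)
          tide_probs = tide_counts / tide_counts.sum()
          surge_sorted = np.sort(surge_hourly)
          # For each tide level, the empirical probability that the surge exceeds (level - tide).
          p_surge_exceeds = 1.0 - np.searchsorted(surge_sorted, level - tide_vals) / surge_sorted.size
          return float(np.sum(tide_probs * p_surge_exceeds))

      rng = np.random.default_rng(0)
      tide = 3.0 * np.sin(np.linspace(0.0, 2000.0 * np.pi, 200_000))   # stand-in for predicted tide (m)
      surge = rng.gumbel(loc=0.0, scale=0.15, size=200_000)            # stand-in for surge residuals (m)
      p_hour = jpm_hourly_exceedance(tide, surge, level=4.2)
      print(f"hourly exceedance probability {p_hour:.2e}, return period ~ {1.0 / (p_hour * 8766):.0f} years")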

  16. Realistic Detectability of Close Interstellar Comets

    NASA Astrophysics Data System (ADS)

    Cook, Nathaniel V.; Ragozzine, Darin; Granvik, Mikael; Stephens, Denise C.

    2016-07-01

    During the planet formation process, billions of comets are created and ejected into interstellar space. The detection and characterization of such interstellar comets (ICs) (also known as extra-solar planetesimals or extra-solar comets) would give us in situ information about the efficiency and properties of planet formation throughout the galaxy. However, no ICs have ever been detected, despite the fact that their hyperbolic orbits would make them readily identifiable as unrelated to the solar system. Moro-Martín et al. have made a detailed and reasonable estimate of the properties of the IC population. We extend their estimates of detectability with a numerical model that allows us to consider “close” ICs, e.g., those that come within the orbit of Jupiter. We include several constraints on a “detectable” object that allow for realistic estimates of the frequency of detections expected from the Large Synoptic Survey Telescope (LSST) and other surveys. The influence of several of the assumed model parameters on the frequency of detections is explored in detail. Based on the expectation from Moro-Martín et al., we expect that LSST will detect 0.001–10 ICs during its nominal 10 year lifetime, with most of the uncertainty from the unknown number density of small (nuclei of ∼0.1–1 km) ICs. Both asteroid and comet cases are considered, where the latter includes various empirical prescriptions of brightening. Using simulated LSST-like astrometric data, we study the problem of orbit determination for these bodies, finding that LSST could identify their orbits as hyperbolic and determine an ephemeris sufficiently accurate for follow-up in about 4–7 days. We give the hyperbolic orbital parameters of the most detectable ICs. Taking the results into consideration, we give recommendations to future searches for ICs.

  18. Gravity waves in a realistic atmosphere.

    NASA Technical Reports Server (NTRS)

    Liemohn, H. B.; Midgley, J. E.

    1966-01-01

    Internal atmospheric gravity waves in isothermal medium, solving hydrodynamic equations, determining wave propagation in realistic atmosphere for range of wave parameters, wind amplitude, reflected energy, etc

  19. Realistic texture extraction for 3D face models robust to self-occlusion

    NASA Astrophysics Data System (ADS)

    Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

    2015-02-01

    In the context of face modeling, probably the most well-known approach to represent 3D faces is the 3D Morphable Model (3DMM). When the 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are simultaneously estimated. However, if real facial texture is needed, texture extraction from the 2D image is necessary. This paper addresses the possible problems in texture extraction from a single image caused by self-occlusion. Unlike common approaches that leverage the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the color of neighboring vertices. Although this step creates unrealistic, overly smoothed texture, illumination stays consistent between the real and virtual texture. In the second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme ensures a gentle handling of illumination and yet yields realistic texture. Because the blending only affects non-informative areas, the main facial features retain their unique appearance in the two face halves. Evaluation results reveal realistic rendering in novel poses, robust to challenging illumination conditions and small registration errors.

  20. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
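
    The data-generating process behind this elaboration can be sketched in a few lines. The snippet below uses made-up parameter values and does not reproduce the Bayesian MCMC fitting; it draws a unit-specific correct-classification probability from a logit-normal distribution before categories are observed with error, and shows how the raw observed category proportions drift away from the true multinomial probabilities:

      import numpy as np

      rng = np.random.default_rng(1)
      pi = np.array([0.5, 0.3, 0.2])      # true multinomial category probabilities (assumed)
      mu, sigma = 1.5, 0.8                # logit-scale mean/sd of correct-classification probability
      n_units, n_per_unit = 200, 30

      obs_counts = np.zeros((n_units, len(pi)), dtype=int)
      for i in range(n_units):
          # unit-specific correct-classification probability, logit-normal across units
          p_correct = 1.0 / (1.0 + np.exp(-rng.normal(mu, sigma)))
          true_cat = rng.choice(len(pi), size=n_per_unit, p=pi)
          # misclassified observations land on one of the other categories
          wrong = rng.random(n_per_unit) > p_correct
          shift = rng.integers(1, len(pi), size=n_per_unit)
          obs_cat = np.where(wrong, (true_cat + shift) % len(pi), true_cat)
          obs_counts[i] = np.bincount(obs_cat, minlength=len(pi))

      print("true pi:", pi)
      print("naive pi from raw counts:", np.round(obs_counts.sum(axis=0) / obs_counts.sum(), 3))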

  1. Probability and Relative Frequency

    NASA Astrophysics Data System (ADS)

    Drieschner, Michael

    2016-01-01

    The concept of probability seems to have been inexplicable since its invention in the seventeenth century. In its use in science, probability is closely related with relative frequency. So the task seems to be interpreting that relation. In this paper, we start with predicted relative frequency and show that its structure is the same as that of probability. I propose to call that the `prediction interpretation' of probability. The consequences of that definition are discussed. The "ladder"-structure of the probability calculus is analyzed. The expectation of the relative frequency is shown to be equal to the predicted relative frequency. Probability is shown to be the most general empirically testable prediction.

  2. Evaluating species richness: biased ecological inference results from spatial heterogeneity in species detection probabilities

    USGS Publications Warehouse

    McNew, Lance B.; Handel, Colleen M.

    2015-01-01

    Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our results
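
    For reference, the single-visit estimators being compared have simple closed forms. The sketch below computes the standard incidence-based first-order jackknife and a bias-corrected Chao2 from a sites-by-species detection matrix; these are illustrative textbook forms, not necessarily the exact limiting forms evaluated in the paper:

      import numpy as np

      def richness_estimates(Y):
          """Y: sites-by-species 0/1 detection matrix from single-visit surveys."""
          m = Y.shape[0]                       # number of surveyed sites
          incidence = Y.sum(axis=0)            # number of sites at which each species was seen
          s_obs = int((incidence > 0).sum())
          q1 = int((incidence == 1).sum())     # species detected at exactly one site
          q2 = int((incidence == 2).sum())     # species detected at exactly two sites
          jack1 = s_obs + q1 * (m - 1) / m                   # first-order jackknife
          chao2 = s_obs + q1 * (q1 - 1) / (2.0 * (q2 + 1))   # bias-corrected Chao2 variant
          return s_obs, jack1, chao2

      Y = (np.random.default_rng(0).random((40, 60)) < 0.15).astype(int)   # toy detection matrix
      print(richness_estimates(Y))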

  3. Realistic costs of carbon capture

    SciTech Connect

    Al Juaied, Mohammed (Belfer Center for Science and International Affairs); Whitmore, Adam

    2009-07-01

    There is a growing interest in carbon capture and storage (CCS) as a means of reducing carbon dioxide (CO2) emissions. However there are substantial uncertainties about the costs of CCS. Costs for pre-combustion capture with compression (i.e. excluding costs of transport and storage and any revenue from EOR associated with storage) are examined in this discussion paper for First-of-a-Kind (FOAK) plant and for more mature technologies, or Nth-of-a-Kind plant (NOAK). For FOAK plant using solid fuels the levelised cost of electricity on a 2008 basis is approximately 10 cents/kWh higher with capture than for conventional plants (with a range of 8-12 cents/kWh). Costs of abatement are found typically to be approximately US$150/tCO2 avoided (with a range of US$120-180/tCO2 avoided). For NOAK plants the additional cost of electricity with capture is approximately 2-5 cents/kWh, with costs of the range of US$35-70/tCO2 avoided. Costs of abatement with carbon capture for other fuels and technologies are also estimated for NOAK plants. The costs of abatement are calculated with reference to conventional SCPC plant for both emissions and costs of electricity. Estimates for both FOAK and NOAK are mainly based on cost data from 2008, which was at the end of a period of sustained escalation in the costs of power generation plant and other large capital projects. There are now indications of costs falling from these levels. This may reduce the costs of abatement and costs presented here may be 'peak of the market' estimates. If general cost levels return, for example, to those prevailing in 2005 to 2006 (by which time significant cost escalation had already occurred from previous levels), then costs of capture and compression for FOAK plants are expected to be US$110/tCO2 avoided (with a range of US$90-135/tCO2 avoided). For NOAK plants costs are expected to be US$25-50/tCO2. Based on these considerations a likely representative range of costs of abatement from CCS excluding

  4. Evolution and Probability.

    ERIC Educational Resources Information Center

    Bailey, David H.

    2000-01-01

    Some of the most impressive-sounding criticisms of the conventional theory of biological evolution involve probability. Presents a few examples of how probability should and should not be used in discussing evolution. (ASK)

  5. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  6. Probability on a Budget.

    ERIC Educational Resources Information Center

    Ewbank, William A.; Ginther, John L.

    2002-01-01

    Describes how to use common dice numbered 1-6 for simple mathematical situations including probability. Presents a lesson using regular dice and specially marked dice to explore some of the concepts of probability. (KHR)

  7. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  8. Searching with probabilities

    SciTech Connect

    Palay, A.J.

    1985-01-01

    This book examines how probability distributions can be used as a knowledge representation technique. It presents a mechanism that can be used to guide a selective search algorithm to solve a variety of tactical chess problems. Topics covered include probabilities and searching, the B algorithm and chess, probabilities in practice, examples, results, and future work.

  9. Landslide Probability Assessment by the Derived Distributions Technique

    NASA Astrophysics Data System (ADS)

    Muñoz, E.; Ochoa, A.; Martínez, H.

    2012-12-01

    Landslides are potentially disastrous events that bring human and economic losses, especially in cities where accelerated and unorganized growth leads to settlements on steep and potentially unstable areas. Among the main causes of landslides are geological, geomorphological, geotechnical, climatological and hydrological conditions and anthropic intervention. This paper studies landslides triggered by rain, commonly known as "soil-slip", which are characterized by a superficial failure surface (typically between 1 and 1.5 m deep) parallel to the slope face and by being triggered by intense and/or sustained periods of rain. This type of landslide is caused by changes in pore pressure produced by a decrease in suction when a humid front enters, as a consequence of the infiltration initiated by rain and ruled by the hydraulic characteristics of the soil. Failure occurs when this front reaches a critical depth and the shear strength of the soil is not enough to guarantee the stability of the mass. Critical rainfall thresholds in combination with a slope stability model are widely used for assessing landslide probability. In this paper we present a model for the estimation of the occurrence of landslides based on the derived distributions technique. Since the works of Eagleson in the 1970s the derived distributions technique has been widely used in hydrology to estimate the probability of occurrence of extreme flows. The model estimates the probability density function (pdf) of the Factor of Safety (FOS) from the statistical behavior of the rainfall process and some slope parameters. The stochastic character of the rainfall is transformed by means of a deterministic failure model into the FOS pdf. Exceedance probability and return period estimation is then straightforward. The rainfall process is modeled as a Rectangular Pulses Poisson Process (RPPP) with independent exponential pdf for mean intensity and duration of the storms. The Philip infiltration model
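
    The derived-distributions idea can be illustrated with a small Monte Carlo sketch: storm mean intensity and duration are drawn from independent exponential distributions, as in the RPPP assumption, and pushed through a deterministic failure model to obtain the FOS distribution and its exceedance probability. The failure model and parameter values below are placeholders, not the Philip-infiltration slope model of the paper:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000
      mean_intensity, mean_duration = 10.0, 6.0          # mm/h and h (assumed storm climatology)
      intensity = rng.exponential(mean_intensity, n)     # RPPP: exponential mean intensity
      duration = rng.exponential(mean_duration, n)       # RPPP: exponential storm duration

      def factor_of_safety(i, d, fos_dry=1.8, k=0.004):
          """Placeholder model: FOS decreases with total rainfall depth i*d."""
          return fos_dry / (1.0 + k * i * d)

      fos = factor_of_safety(intensity, duration)
      print("P(FOS < 1) per storm:", round(float(np.mean(fos < 1.0)), 4))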

  10. In All Probability, Probability is not All

    ERIC Educational Resources Information Center

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.

  11. Probability of pipe failure in the reactor coolant loops of Combustion Engineering PWR plants. Volume 1. Summary report

    SciTech Connect

    Holman, G.S.; Lo, T.; Chou, C.K.

    1985-01-01

    As part of its reevaluation of the double-ended guillotine break (DEGB) as a design requirement for reactor coolant piping, the US Nuclear Regulatory Commission (NRC) contracted with the Lawrence Livermore National Laboratory (LLNL) to estimate the probability of occurrence of a DEGB, and to assess the effect that earthquakes have on DEGB probability. This report describes a probabilistic evaluation of reactor coolant loop piping in PWR plants having nuclear steam supply systems designed by Combustion Engineering. Two causes of pipe break were considered: pipe fracture due to the growth of cracks at welded joints (direct DEGB), and pipe rupture indirectly caused by failure of component supports due to an earthquake (indirect DEGB). The probability of direct DEGB was estimated using a probabilistic fracture mechanics model. The probability of indirect DEGB was estimated by estimating support fragility and then convolving fragility with seismic hazard. The results of this study indicate that the probability of a DEGB from either cause is very low for reactor coolant loop piping in these plants, and that NRC should therefore consider eliminating DEGB as a design basis in favor of more realistic criteria.
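
    The indirect-DEGB estimate rests on the standard risk convolution of a support fragility curve with a seismic hazard curve. A generic numerical sketch, with an illustrative power-law hazard and lognormal fragility rather than the plant-specific curves used in the study:

      import numpy as np
      from scipy.stats import lognorm

      a = np.linspace(0.05, 3.0, 600)                # peak ground acceleration (g)
      hazard = 1e-3 * (a / 0.1) ** -2.0              # H(a): annual frequency of exceeding a (assumed fit)
      fragility = lognorm.cdf(a, s=0.4, scale=1.2)   # P(support failure | a), assumed lognormal

      # Annual failure frequency: integrate the fragility against the hazard density -dH/da
      dH = -np.gradient(hazard, a)
      da = a[1] - a[0]
      annual_failure_freq = float(np.sum(fragility * dH) * da)
      print("annual frequency of support failure:", annual_failure_freq)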

  12. Probability of pipe failure in the reactor coolant loops of Westinghouse PWR Plants. Volume 1. Summary report

    SciTech Connect

    Holman, G.S.; Chou, C.K.

    1985-07-01

    As part of its reevaluation of the double-ended guillotine break (DEGB) of reactor coolant loop piping as a design basis event for nuclear power plants, the US Nuclear Regulatory Commission (NRC) contracted with the Lawrence Livermore National Laboratory (LLNL) to estimate the probability of occurrence of a DEGB, and to assess the effect that earthquakes have on DEGB probability. This report describes a probabilistic evaluation of reactor coolant loop piping in PWR plants having nuclear steam supply systems designed by Westinghouse. Two causes of pipe break were considered: pipe fracture due to the growth of cracks at welded joints (''direct'' DEGB), and pipe rupture indirectly caused by failure of component supports due to an earthquake (''indirect'' DEGB). The probability of direct DEGB was estimated using a probabilistic fracture mechanics model. The probability of indirect DEGB was estimated by estimating support fragility and then convolving fragility and seismic hazard. The results of this study indicate that the probability of a DEGB from either cause is very low for reactor coolant loop piping in these plants, and that NRC should therefore consider eliminating DEGB as a design basis event in favor of more realistic criteria. 17 refs., 15 figs., 11 tabs.

  13. Transdimensional Bayesian Joint Inversion of Complementary Seismic Observables with Realistic Data Uncertainties

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2014-12-01

    Due to their different and complementary sensitivities to structure, multiple seismic observables are often combined to image the Earth's deep interior. We use a reversible jump Markov chain Monte Carlo (rjMCMC) algorithm to incorporate surface wave dispersion, particle motion ellipticity (HZ ratio), and receiver functions into transdimensional, Bayesian inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), and density beneath a seismic station. While traditional inversion approaches seek a single best-fit model, a Bayesian approach yields an ensemble of models, allowing us to fully quantify uncertainty and trade-offs between model parameters. Furthermore, we show that by treating the number of model parameters as an unknown to be estimated from the data, we both eliminate the need for a fixed parameterization based on prior information, and obtain better model estimates with reduced trade-offs. Optimal weighting of disparate datasets is paramount for maximizing the resolving power of joint inversions. In a Bayesian framework, data uncertainty directly determines the variance of the model posterior probability distribution; therefore, characteristics of the uncertainties on the observables become even more important in the inversion (Bodin et al., 2011). To properly account for the noise characteristics of the different seismic observables, we compute covariance matrices of data errors for each data type by generating realistic synthetic noise using noise covariance matrices computed from thousands of noise samples, and then measuring the seismic observables of interest from synthetic waveforms contaminated by many different realizations of noise. We find large non-diagonal terms in the covariance matrices for different data types, indicating that typical assumptions of uncorrelated data errors are unjustified. We quantify how the use of realistic data covariance matrices in the joint inversion affects the retrieval of seismic structure under
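
    The covariance-estimation step can be sketched as follows: gather many realizations of a data vector that differ only in their noise (here synthetic correlated noise stands in for re-measured observables), and form the sample covariance, whose off-diagonal terms reveal correlated errors:

      import numpy as np

      rng = np.random.default_rng(0)
      n_real, n_data = 2000, 50                  # number of noise realizations, data-vector length

      # Stand-in for correlated noise (assumed exponential correlation over 5 samples)
      t = np.arange(n_data)
      kernel = np.exp(-np.abs(t[:, None] - t[None, :]) / 5.0)
      L = np.linalg.cholesky(kernel + 1e-10 * np.eye(n_data))
      samples = (L @ rng.standard_normal((n_data, n_real))).T   # shape (n_real, n_data)

      C = np.cov(samples, rowvar=False)          # estimated data-error covariance matrix
      off_diag = C - np.diag(np.diag(C))
      print("mean |off-diagonal| / mean diagonal:", np.abs(off_diag).mean() / np.diag(C).mean())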

  14. Relative transition probabilities of cobalt

    NASA Technical Reports Server (NTRS)

    Roig, R. A.; Miller, M. H.

    1974-01-01

    Results of determinations of neutral-cobalt transition probabilities measured relative to Co I 4150.43 A and Co II 4145.15 A, using a gas-driven shock tube as the spectroscopic light source. Results are presented for 139 Co I lines in the range from 3940 to 6640 A and 11 Co II lines in the range from 3840 to 4730 A, which are estimated to have reliabilities ranging from 8 to 50%.

  15. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
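
    The pseudo-predator construction the abstract refers to can be sketched as follows, with toy prey data and arbitrary bootstrap sizes (calibration coefficients and the paper's sample-size algorithm are omitted): prey signatures are bootstrap-resampled within each prey type, averaged, and mixed according to a known diet:

      import numpy as np

      rng = np.random.default_rng(7)

      def pseudo_predator(prey_sigs, diet, n_boot):
          """prey_sigs: prey -> (n_samples, n_fatty_acids) proportions; diet: prey -> diet proportion."""
          parts = []
          for prey, w in diet.items():
              sig = prey_sigs[prey]
              idx = rng.integers(0, sig.shape[0], size=n_boot[prey])   # bootstrap sample of signatures
              parts.append(w * sig[idx].mean(axis=0))
          pred = np.sum(parts, axis=0)
          return pred / pred.sum()                                     # renormalize to a signature

      prey_sigs = {p: rng.dirichlet(np.ones(10), size=30) for p in ("cod", "herring", "capelin")}
      diet = {"cod": 0.5, "herring": 0.3, "capelin": 0.2}              # known "true" diet
      n_boot = {p: 15 for p in diet}                                   # arbitrary bootstrap sample sizes
      print(np.round(pseudo_predator(prey_sigs, diet, n_boot), 3))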

  16. Factors influencing reporting and harvest probabilities in North American geese

    USGS Publications Warehouse

    Zimmerman, G.S.; Moser, T.J.; Kendall, W.L.; Doherty, P.F., Jr.; White, Gary C.; Caswell, D.F.

    2009-01-01

    We assessed variation in reporting probabilities of standard bands among species, populations, harvest locations, and size classes of North American geese to enable estimation of unbiased harvest probabilities. We included reward (US$10, $20, $30, $50, or $100) and control ($0) banded geese from 16 recognized goose populations of 4 species: Canada (Branta canadensis), cackling (B. hutchinsii), Ross's (Chen rossii), and snow geese (C. caerulescens). We incorporated spatially explicit direct recoveries and live recaptures into a multinomial model to estimate reporting, harvest, and band-retention probabilities. We compared various models for estimating harvest probabilities at country (United States vs. Canada), flyway (5 administrative regions), and harvest area (i.e., flyways divided into northern and southern sections) scales. Mean reporting probability of standard bands was 0.73 (95% CI 0.69-0.77). Point estimates of reporting probabilities for goose populations or spatial units varied from 0.52 to 0.93, but confidence intervals for individual estimates overlapped and model selection indicated that models with species, population, or spatial effects were less parsimonious than those without these effects. Our estimates were similar to recently reported estimates for mallards (Anas platyrhynchos). We provide current harvest probability estimates for these populations using our direct measures of reporting probability, improving the accuracy of previous estimates obtained from recovery probabilities alone. Goose managers and researchers throughout North America can use our reporting probabilities to correct recovery probabilities estimated from standard banding operations for deriving spatially explicit harvest probabilities.
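
    The reward-band logic reduces to a ratio of reported-recovery rates, with high-reward bands assumed to be reported essentially always. A toy version with invented counts (the paper's multinomial model additionally handles band retention, live recaptures, and spatial structure):

      # Hypothetical counts: direct recoveries / bands released in one harvest area
      released = {"control": 5000, "reward_100": 800}
      recovered = {"control": 190, "reward_100": 42}

      f_control = recovered["control"] / released["control"]       # reported-recovery rate, $0 bands
      f_reward = recovered["reward_100"] / released["reward_100"]  # assumed ~100% reporting at $100

      reporting_prob = f_control / f_reward   # probability that a recovered standard band is reported
      harvest_prob = f_reward                 # reporting-corrected harvest (recovery) probability
      print(round(reporting_prob, 2), round(harvest_prob, 3))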

  17. Seismicity alert probabilities at Parkfield, California, revisited

    USGS Publications Warehouse

    Michael, A.J.; Jones, L.M.

    1998-01-01

    For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.

  18. Single-case probabilities

    NASA Astrophysics Data System (ADS)

    Miller, David

    1991-12-01

    The propensity interpretation of probability, bred by Popper in 1957 (K. R. Popper, in Observation and Interpretation in the Philosophy of Physics, S. Körner, ed. (Butterworth, London, 1957, and Dover, New York, 1962), p. 65; reprinted in Popper Selections, D. W. Miller, ed. (Princeton University Press, Princeton, 1985), p. 199) from pure frequency stock, is the only extant objectivist account that provides any proper understanding of single-case probabilities as well as of probabilities in ensembles and in the long run. In Sec. 1 of this paper I recall salient points of the frequency interpretations of von Mises and of Popper himself, and in Sec. 2 I filter out from Popper's numerous expositions of the propensity interpretation its most interesting and fertile strain. I then go on to assess it. First I defend it, in Sec. 3, against recent criticisms (P. Humphreys, Philos. Rev. 94, 557 (1985); P. Milne, Erkenntnis 25, 129 (1986)) to the effect that conditional [or relative] probabilities, unlike absolute probabilities, can only rarely be made sense of as propensities. I then challenge its predominance, in Sec. 4, by outlining a rival theory: an irreproachably objectivist theory of probability, fully applicable to the single case, that interprets physical probabilities as instantaneous frequencies.

  19. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry except for the lack of an oral cavity has been created which proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements, two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements, and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase doppler anemometry measurements were made and compared to the CFD calculations for this model and good agreement was obtained. PMID:22558834

  20. Dynamic SEP event probability forecasts

    NASA Astrophysics Data System (ADS)

    Kahler, S. W.; Ling, A.

    2015-10-01

    The forecasting of solar energetic particle (SEP) event probabilities at Earth has been based primarily on the estimates of magnetic free energy in active regions and on the observations of peak fluxes and fluences of large (≥ M2) solar X-ray flares. These forecasts are typically issued for the next 24 h or with no definite expiration time, which can be deficient for time-critical operations when no SEP event appears following a large X-ray flare. It is therefore important to decrease the event probability forecast with time as a SEP event fails to appear. We use the NOAA listing of major (≥10 pfu) SEP events from 1976 to 2014 to plot the delay times from X-ray peaks to SEP threshold onsets as a function of solar source longitude. An algorithm is derived to decrease the SEP event probabilities with time when no event is observed to reach the 10 pfu threshold. In addition, we use known SEP event size distributions to modify probability forecasts when SEP intensity increases occur below the 10 pfu event threshold. An algorithm to provide a dynamic SEP event forecast, Pd, for both situations of SEP intensities following a large flare is derived.
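
    One way to formalize that decay is a Bayesian update, shown below as an illustration rather than as the paper's algorithm: if P0 is the initial post-flare event probability and S(t) is the survival function of the historical flare-to-onset delay distribution, the probability that an event is still coming after t hours with no onset is P0*S(t) / (1 - P0 + P0*S(t)). The lognormal delay distribution and its parameters here are assumptions standing in for the NOAA-derived delay times:

      from scipy.stats import lognorm

      def dynamic_sep_probability(p0, t_hours, median_delay=8.0, sigma=1.0):
          """Decay an initial SEP event probability p0 as no onset is observed by time t_hours."""
          survival = lognorm.sf(t_hours, s=sigma, scale=median_delay)   # S(t) = P(onset delay > t)
          return p0 * survival / (1.0 - p0 + p0 * survival)

      for t in (0, 6, 12, 24, 48):
          print(t, "h:", round(float(dynamic_sep_probability(0.4, t)), 3))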

  1. Research on ultra-realistic communications

    NASA Astrophysics Data System (ADS)

    Enami, Kazumasa

    2009-05-01

    A future communication method enabled by information communications technology, ultra-realistic communication, is now being investigated in Japan, and research and development of the various technologies required for its realization is being conducted, such as ultra-high definition TV, 3DTV, super surround sound reproduction and multi-sensory communication including touch and smell. An organization called the Ultra-Realistic Communications Forum (URCF) was also established for the effective promotion of R&D and the standardization of related technologies. This document explains the activities of the URCF by industry, academia and government, and introduces research on ultra-realistic communications at the National Institute of Information and Communications Technology (NICT).

  2. Probability with Roulette

    ERIC Educational Resources Information Center

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.

  3. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
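
    Under the usual assumptions the problem reduces to integrating a two-dimensional Gaussian relative-position density over a circle whose radius is the combined size of the two objects. A brute-force numerical version of that integral, with illustrative covariance and miss-distance numbers rather than mission values:

      import numpy as np

      def collision_probability(miss_vector, combined_cov, hard_body_radius, n=400):
          """Integrate the 2-D Gaussian of relative position (mean = miss vector in the
          encounter plane) over the combined hard-body circle centered at the origin."""
          inv = np.linalg.inv(combined_cov)
          det = np.linalg.det(combined_cov)
          r = hard_body_radius
          xs = np.linspace(-r, r, n)
          X, Y = np.meshgrid(xs, xs)
          inside = X**2 + Y**2 <= r**2
          dx = np.stack([X - miss_vector[0], Y - miss_vector[1]], axis=-1)
          maha = np.einsum("...i,ij,...j->...", dx, inv, dx)
          pdf = np.exp(-0.5 * maha) / (2.0 * np.pi * np.sqrt(det))
          cell = (xs[1] - xs[0]) ** 2
          return float((pdf * inside).sum() * cell)

      cov = np.array([[2500.0, 400.0], [400.0, 900.0]])   # m^2, projected position covariance (assumed)
      print(collision_probability((120.0, -40.0), cov, hard_body_radius=15.0))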

  4. Keeping It Real: How Realistic Does Realistic Fiction for Children Need to Be?

    ERIC Educational Resources Information Center

    O'Connor, Barbara

    2010-01-01

    O'Connor, an author of realistic fiction for children, shares her attempts to strike a balance between carefree, uncensored, authentic, realistic writing and age-appropriate writing. Of course, complicating that balancing act is the fact that what seems age-appropriate to her might not seem so to everyone. O'Connor suggests that while it may be…

  5. Realistic Hot Water Draw Specification for Rating Solar Water Heaters: Preprint

    SciTech Connect

    Burch, J.

    2012-06-01

    In the United States, annual performance ratings for solar water heaters are simulated, using TMY weather and specified water draw. A more-realistic ratings draw is proposed that eliminates most bias by improving mains inlet temperature and by specifying realistic hot water use. This paper outlines the current and the proposed draws and estimates typical ratings changes from draw specification changes for typical systems in four cities.

  6. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  7. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  8. Making a Literature Methods Course "Realistic."

    ERIC Educational Resources Information Center

    Lewis, William J.

    Recognizing that it can be a challenge to make an undergraduate literature methods course realistic, a methods instructor at a Michigan university has developed three major and several minor activities that have proven effective in preparing pre-student teachers for the "real world" of teaching and, at the same time, have been challenging and…

  9. Satellite Maps Deliver More Realistic Gaming

    NASA Technical Reports Server (NTRS)

    2013-01-01

    When Redwood City, California-based Electronic Arts (EA) decided to make SSX, its latest snowboarding video game, it faced challenges in creating realistic-looking mountains. The solution was NASA's ASTER Global Digital Elevation Map, made available by the Jet Propulsion Laboratory, which EA used to create 28 real-life mountains from 9 different ranges for its award-winning game.

  10. Improving Intuition Skills with Realistic Mathematics Education

    ERIC Educational Resources Information Center

    Hirza, Bonita; Kusumah, Yaya S.; Darhim; Zulkardi

    2014-01-01

    The intention of the present study was to see the improvement of students' intuitive skills. This improvement was seen by comparing the Realistic Mathematics Education (RME)-based instruction with the conventional mathematics instruction. The subject of this study was 164 fifth graders of elementary school in Palembang. The design of this study…

  11. Project REALISTIC: Determining Literacy Demands of Jobs.

    ERIC Educational Resources Information Center

    Sticht, Thomas C.; Kern, Richard P.

    1971-01-01

    REALISTIC is an acronym based upon the three literacy skills areas studied--REAding, LIStening, and ArithmeTIC. The general objectives of the project are: (1) to provide information concerning the demands for reading, listening, and arithmetic skills in several major military occupational specialties (MOSS), and (2) to provide information and…

  12. Realistic Portrayal of Aging. An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Dodson, Anita E.; Hause, Judith B.

    This annotated bibliography cites selected reading materials for all age levels that present aging and the aged realistically with a full range of human behaviors. The listing is meant to serve as a resource to educators who wish to develop positive attitudes in children and in adolescents about the elderly and about themselves. Educators should…

  13. Faculty Development for Educators: A Realist Evaluation

    ERIC Educational Resources Information Center

    Sorinola, Olanrewaju O.; Thistlethwaite, Jill; Davies, David; Peile, Ed

    2015-01-01

    The effectiveness of faculty development (FD) activities for educators in UK medical schools remains underexplored. This study used a realist approach to evaluate FD and to test the hypothesis that motivation, engagement and perception are key mechanisms of effective FD activities. The authors observed and interviewed 33 course participants at one…

  14. Model of lifetimes of the outer radiation belt electrons in a realistic magnetic field using realistic chorus wave parameters

    NASA Astrophysics Data System (ADS)

    Orlova, Ksenia; Shprits, Yuri

    2014-02-01

    The outer radiation belt electrons in the inner magnetosphere show high variability during geomagnetically disturbed conditions. Quasi-linear diffusion theory provides both a framework for global prediction of particle loss at different energies and an understanding of the dynamics of different particle populations. It has been recently shown that pitch angle scattering rates of electrons due to wave-particle interaction with chorus waves modeled in a realistic magnetic field may be significantly different from those estimated in a dipole model. In this work, we present the lifetimes of 1 keV-2 MeV electrons computed in the Tsyganenko 89 magnetic field model for the night, dawn, prenoon, and postnoon magnetic local time (MLT) sectors for different levels of geomagnetic activity and distances. The lifetimes in the realistic field are also compared to those computed in the dipole model. We develop realistic lower-band and upper-band chorus wave models for each MLT sector using recent statistical studies of wave amplitude, wave normal angle, and wave spectral density distributions as functions of magnetic latitude, distance, and Kp index. The increase of plasma trough density with increasing latitude is also included. The electron lifetimes obtained in the Tsyganenko 89 field are parameterized and can be used in 2-D/3-D/4-D convection and particle tracing codes.

  15. Identifying novel biomarkers through data mining—A realistic scenario?

    PubMed Central

    Perez‐Riverol, Yasset; Hermjakob, Henning

    2015-01-01

    In this article we discuss the requirements for using data mining of published proteomics datasets to assist proteomics-based biomarker discovery and the use of external data integration to address the issue of inadequately small sample sizes, and finally we try to estimate the probability that new biomarkers will be identified through data mining alone. PMID:25347964

  16. Acceptance, values, and probability.

    PubMed

    Steel, Daniel

    2015-10-01

    This essay makes a case for regarding personal probabilities used in Bayesian analyses of confirmation as objects of acceptance and rejection. That in turn entails that personal probabilities are subject to the argument from inductive risk, which aims to show non-epistemic values can legitimately influence scientific decisions about which hypotheses to accept. In a Bayesian context, the argument from inductive risk suggests that value judgments can influence decisions about which probability models to accept for likelihoods and priors. As a consequence, if the argument from inductive risk is sound, then non-epistemic values can affect not only the level of evidence deemed necessary to accept a hypothesis but also degrees of confirmation themselves. PMID:26386533

  17. Tool for Generating Realistic Residential Hot Water Event Schedules: Preprint

    SciTech Connect

    Hendron, B.; Burch, J.; Barker, G.

    2010-08-01

    The installed energy savings for advanced residential hot water systems can depend greatly on detailed occupant use patterns. Quantifying these patterns is essential for analyzing measures such as tankless water heaters, solar hot water systems with demand-side heat exchangers, distribution system improvements, and recirculation loops. This paper describes the development of an advanced spreadsheet tool that can generate a series of year-long hot water event schedules consistent with realistic probability distributions of start time, duration and flow rate variability, clustering, fixture assignment, vacation periods, and seasonality. This paper also presents the application of the hot water event schedules in the context of an integral-collector-storage solar water heating system in a moderate climate.
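
    A stripped-down sketch of that kind of generator is shown below: shower events for one day are drawn from an assumed diurnal start-time probability profile, with durations and flow rates drawn from per-fixture distributions. The profile and distribution parameters are invented; the actual tool adds clustering, fixture assignment, vacations, and seasonality:

      import numpy as np

      rng = np.random.default_rng(3)

      # Assumed hourly probability profile for shower start times (morning and evening peaks)
      profile = np.array([1, 1, 1, 1, 2, 6, 10, 9, 6, 4, 3, 3, 3, 3, 3, 4, 5, 6, 7, 6, 5, 4, 2, 1], float)
      profile /= profile.sum()

      def daily_shower_events(n_events=2, mean_duration_min=8.0, mean_flow_gpm=2.0):
          events = []
          for _ in range(n_events):
              hour = rng.choice(24, p=profile)
              start = hour + rng.random()                       # start time, hours after midnight
              duration = rng.exponential(mean_duration_min)     # minutes
              flow = max(rng.normal(mean_flow_gpm, 0.3), 0.5)   # gallons per minute
              events.append((round(float(start), 2), round(float(duration), 1), round(float(flow), 2)))
          return sorted(events)

      print(daily_shower_events())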

  18. Evaluation of Two Methods to Estimate and Monitor Bird Populations

    PubMed Central

    Taylor, Sandra L.; Pollard, Katherine S.

    2008-01-01

    Background Effective management depends upon accurately estimating trends in abundance of bird populations over time, and in some cases estimating abundance. Two population estimation methods, double observer (DO) and double sampling (DS), have been advocated for avian population studies and the relative merits and short-comings of these methods remain an area of debate. Methodology/Principal Findings We used simulations to evaluate the performances of these two population estimation methods under a range of realistic scenarios. For three hypothetical populations with different levels of clustering, we generated DO and DS population size estimates for a range of detection probabilities and survey proportions. Population estimates for both methods were centered on the true population size for all levels of population clustering and survey proportions when detection probabilities were greater than 20%. The DO method underestimated the population at detection probabilities less than 30% whereas the DS method remained essentially unbiased. The coverage probability of 95% confidence intervals for population estimates was slightly less than the nominal level for the DS method but was substantially below the nominal level for the DO method at high detection probabilities. Differences in observer detection probabilities did not affect the accuracy and precision of population estimates of the DO method. Population estimates for the DS method remained unbiased as the proportion of units intensively surveyed changed, but the variance of the estimates decreased with increasing proportion intensively surveyed. Conclusions/Significance The DO and DS methods can be applied in many different settings and our evaluations provide important information on the performance of these two methods that can assist researchers in selecting the method most appropriate for their particular needs. PMID:18728775
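
    The intuition behind the double-observer approach can be conveyed by its simplest independent-observer (Lincoln-Petersen-style) form, in which birds detected by both observers calibrate each observer's detection probability; the DO and DS estimators evaluated in the paper are more elaborate, so the sketch below is only illustrative:

      def double_observer_estimate(x1_only, x2_only, x_both):
          """Counts of birds seen only by observer 1, only by observer 2, and by both."""
          x1 = x1_only + x_both                    # total seen by observer 1
          x2 = x2_only + x_both                    # total seen by observer 2
          p1 = x_both / x2                         # detection probability of observer 1
          p2 = x_both / x1                         # detection probability of observer 2
          n_hat = x1 * x2 / x_both                 # estimated number of birds present
          return p1, p2, n_hat

      print(double_observer_estimate(x1_only=18, x2_only=12, x_both=45))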

  19. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion on Monte Carlo methods, and it outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
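
    The technique itself fits in a few lines (shown here in Python rather than the article's Visual Basic): if X is uniform on [a, b], then E[f(X)] equals the integral of f over [a, b] divided by (b - a), so the sample mean of f at uniform draws, scaled by (b - a), estimates the integral.

      import numpy as np

      def mc_integral(f, a, b, n=100_000, seed=0):
          x = np.random.default_rng(seed).uniform(a, b, n)
          return (b - a) * f(x).mean()             # (b - a) * E[f(X)] estimates the definite integral

      print(mc_integral(np.sin, 0.0, np.pi))       # exact value is 2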

  20. Varga: On Probability.

    ERIC Educational Resources Information Center

    Varga, Tamas

    This booklet resulted from a 1980 visit by the author, a Hungarian mathematics educator, to the Teachers' Center Project at Southern Illinois University at Edwardsville. Included are activities and problems that make probability concepts accessible to young children. The topics considered are: two probability games; choosing two beads; matching…

  1. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  2. Considerations for realistic ECCS evaluation methodology for LWRs

    SciTech Connect

    Rohatgi, U.S.; Saha, P.; Chexal, V.K.

    1985-01-01

    This paper identifies the various phenomena which govern the course of large and small break LOCAs in LWRs, and affect the key parameters such as Peak Clad Temperature (PCT) and timing of the end of blowdown, beginning of reflood, PCT, and complete quench. A review of the best-estimate models and correlations for these phenomena in the current literature has been presented. Finally, a set of models have been recommended which may be incorporated in a present best-estimate code such as TRAC or RELAP5 in order to develop a realistic ECCS evaluation methodology for future LWRs and have also been compared with the requirements of current ECCS evaluation methodology as outlined in Appendix K of 10CFR50. 58 refs.

  3. Spectral tunability of realistic plasmonic nanoantennas

    SciTech Connect

    Portela, Alejandro; Matsui, Hiroaki; Tabata, Hitoshi; Yano, Takaaki; Hayashi, Tomohiro; Hara, Masahiko; Santschi, Christian; Martin, Olivier J. F.

    2014-09-01

    Single nanoantenna spectroscopy was carried out on realistic dipole nanoantennas with various arm lengths and gap sizes fabricated by electron-beam lithography. A significant difference in resonance wavelength between realistic and ideal nanoantennas was found by comparing their spectral response. Consequently, the spectral tunability (96 nm) of the structures was significantly lower than that of simulated ideal nanoantennas. These observations, attributed to the nanofabrication process, are related to imperfections in the geometry, added metal adhesion layer, and shape modifications, which are analyzed in this work. Our results provide important information for the design of dipole nanoantennas clarifying the role of the structural modifications on the resonance spectra, as supported by calculations.

  4. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  5. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms. PMID:26828313

  6. Probability of pipe failure in the reactor coolant loops of Babcock and Wilcox PWR plants. Volume 1. Summary report

    SciTech Connect

    Holman, G.S.; Chou, C.K.

    1986-05-01

    As part of its reevaluation of the double-ended guillotine break (DEGB) of reactor coolant piping as a design basis event for nuclear power plants, the US Nuclear Regulatory Commission (NRC) contracted with the Lawrence Livermore National Laboratory (LLNL) to estimate the probability of occurrence of a DEGB, and to assess the effect that earthquakes have on DEGB probability. This report describes an evaluation of reactor coolant loop piping in PWR plants having nuclear steam supply systems designed by Babcock and Wilcox. Two causes of pipe break were considered: pipe fracture due to the growth of cracks at welded joints (''direct'' DEGB), and pipe rupture indirectly caused by failure of heavy component supports due to an earthquake (''indirect'' DEGB). Unlike in earlier evaluations of Westinghouse and Combustion Engineering reactor coolant loop piping, in which the probability of direct DEGB had been explicitly estimated using a probabilistic fracture mechanics model, no detailed fracture mechanics calculations were performed. Instead, a comparison of relevant plant data, mainly reactor coolant loop stresses, for one representative B and W plant with equivalent information for Westinghouse and C-E systems inferred that the probability of direct DEGB should be similarly low (less than 1E-10 per reactor year). The probability of indirect DEGB, on the other hand, was explicitly estimated for two representative plants. The results of this study indicate that the probability of a DEGB from either cause is very low for reactor coolant loop piping in these specific plants and, because of similarity in design, infer that the probability of DEGB is generally very low in B and W reactor coolant loop piping. The NRC should therefore consider eliminating DEGB as a design basis event in favor of more realistic criteria. 13 refs., 9 tabs.

  7. PLATO Simulator: Realistic simulations of expected observations

    NASA Astrophysics Data System (ADS)

    Marcos-Arenal, P.; Zima, W.; De Ridder, J.; Aerts, C.; Huygen, R.; Samadi, R.; Green, J.; Piotto, G.; Salmon, S.; Catala, C.; Rauer, H.

    2015-06-01

    PLATO Simulator is an end-to-end simulation software tool designed for the performance of realistic simulations of the expected observations of the PLATO mission but easily adaptable to similar types of missions. It models and simulates photometric time-series of CCD images by including models of the CCD and its electronics, the telescope optics, the stellar field, the jitter movements of the spacecraft, and all important natural noise sources.

  8. Dynamical Symmetries Reflected in Realistic Interactions

    SciTech Connect

    Sviratcheva, K.D.; Draayer, J.P.; Vary, J.P.; /Iowa State U. /LLNL, Livermore /SLAC

    2007-04-06

    Realistic nucleon-nucleon (NN) interactions, derived within the framework of meson theory or more recently in terms of chiral effective field theory, yield new possibilities for achieving a unified microscopic description of atomic nuclei. Based on spectral distribution methods, a comparison of these interactions to a most general Sp(4) dynamically symmetric interaction, which previously we found to reproduce well that part of the interaction that is responsible for shaping pairing-governed isobaric analog 0{sup +} states, can determine the extent to which this significantly simpler model Hamiltonian can be used to obtain an approximate, yet very good description of low-lying nuclear structure. And furthermore, one can apply this model in situations that would otherwise be prohibitive because of the size of the model space. In addition, we introduce a Sp(4) symmetry breaking term by including the quadrupole-quadrupole interaction in the analysis and examining the capacity of this extended model interaction to imitate realistic interactions. This provides a further step towards gaining a better understanding of the underlying foundation of realistic interactions and their ability to reproduce striking features of nuclei such as strong pairing correlations or collective rotational motion.

  9. The spline probability hypothesis density filter

    NASA Astrophysics Data System (ADS)

    Sithiravel, Rajiv; Tharmarasa, Ratnasingham; McDonald, Mike; Pelletier, Michel; Kirubarajan, Thiagalingam

    2012-06-01

    The Probability Hypothesis Density Filter (PHD) is a multitarget tracker for recursively estimating the number of targets and their state vectors from a set of observations. The PHD filter is capable of working well in scenarios with false alarms and missed detections. Two distinct PHD filter implementations are available in the literature: the Sequential Monte Carlo Probability Hypothesis Density (SMC-PHD) and the Gaussian Mixture Probability Hypothesis Density (GM-PHD) filters. The SMC-PHD filter uses particles to provide target state estimates, which can lead to a high computational load, whereas the GM-PHD filter does not use particles, but restricts to linear Gaussian mixture models. The SMC-PHD filter technique provides only weighted samples at discrete points in the state space instead of a continuous estimate of the probability density function of the system state and thus suffers from the well-known degeneracy problem. This paper proposes a B-Spline based Probability Hypothesis Density (S-PHD) filter, which has the capability to model any arbitrary probability density function. The resulting algorithm can handle linear, non-linear, Gaussian, and non-Gaussian models and the S-PHD filter can also provide continuous estimates of the probability density function of the system state. In addition, by moving the knots dynamically, the S-PHD filter ensures that the splines cover only the region where the probability of the system state is significant, hence the high efficiency of the S-PHD filter is maintained at all times. Also, unlike the SMC-PHD filter, the S-PHD filter is immune to the degeneracy problem due to its continuous nature. The S-PHD filter derivations and simulations are provided in this paper.

  10. Waste Package Misload Probability

    SciTech Connect

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence will be calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.
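
    The arithmetic at the core of such a calculation is a simple rate estimate, sketched below with invented counts rather than the Framatome event data; when a category has no recorded events, a conservative upper bound (here the "rule of three") is often quoted instead of a zero point estimate:

      def event_probability(n_events, n_assemblies_moved):
          """Point estimate and a simple 95% upper bound for a per-movement event probability."""
          p_hat = n_events / n_assemblies_moved
          upper95 = 3.0 / n_assemblies_moved if n_events == 0 else None   # rule of three for zero events
          return p_hat, upper95

      print(event_probability(n_events=4, n_assemblies_moved=2_000_000))  # hypothetical misload counts
      print(event_probability(n_events=0, n_assemblies_moved=500_000))    # hypothetical zero-event case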

  11. Probability for Weather and Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory which has proven essential in meteorology will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error. The aims of different ensemble strategies, and fundamental differences in ensemble design to support of

  12. Site occupancy models with heterogeneous detection probabilities

    USGS Publications Warehouse

    Royle, J. Andrew

    2006-01-01

    Models for estimating the probability of occurrence of a species in the presence of imperfect detection are important in many ecological disciplines. In these 'site occupancy' models, the possibility of heterogeneity in detection probabilities among sites must be considered because variation in abundance (and other factors) among sampled sites induces variation in detection probability (p). In this article, I develop occurrence probability models that allow for heterogeneous detection probabilities by considering several common classes of mixture distributions for p. For any mixing distribution, the likelihood has the general form of a zero-inflated binomial mixture for which inference based upon integrated likelihood is straightforward. A recent paper by Link (2003, Biometrics 59, 1123-1130) demonstrates that in closed population models used for estimating population size, different classes of mixture distributions are indistinguishable from data, yet can produce very different inferences about population size. I demonstrate that this problem can also arise in models for estimating site occupancy in the presence of heterogeneous detection probabilities. The implications of this are discussed in the context of an application to avian survey data and the development of animal monitoring programs.
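
    The likelihood described here has a compact form: for site i with y_i detections in J visits, P(y_i) = psi * E_g[Binomial(y_i | J, p)] + (1 - psi) * I(y_i = 0), where the expectation is over the mixing distribution g of p. A sketch with a Beta mixing distribution, in which the expectation becomes a beta-binomial probability (toy data and parameter values):

      import numpy as np
      from scipy.stats import betabinom

      def zib_mixture_loglik(y, J, psi, a, b):
          """y: detections out of J visits per site; p ~ Beta(a, b); psi = occupancy probability."""
          site_prob = psi * betabinom.pmf(y, J, a, b) + (1.0 - psi) * (y == 0)
          return float(np.log(site_prob).sum())

      y = np.array([0, 0, 3, 1, 0, 2, 0, 4, 0, 1])     # toy detection histories with J = 5 visits
      print(zib_mixture_loglik(y, J=5, psi=0.6, a=2.0, b=3.0))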

  13. Flood frequency: expected and unexpected probabilities

    USGS Publications Warehouse

    Thomas, D.M.

    1976-01-01

    Flood-frequency curves may be defined either with or without an 'expected probability' adjustment, and the two curves differ in the way that they attempt to average the time-sampling uncertainties. A curve with no adjustment is shown to estimate a median value of both discharge and frequency of occurrence, while an expected probability curve is shown to estimate a mean frequency of flood years. The attributes and constraints of the two types of curves for various uses are discussed.

  14. Probability mapping of contaminants

    SciTech Connect

    Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.

    1994-04-01

    Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
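
    As a minimal sketch of the post-processing step described above (assuming the geostatistical realizations have already been produced by a separate simulator), the probability-of-exceedance map is simply the per-cell fraction of equally likely realizations above the threshold:

        # Sketch: turn a stack of equally likely realizations into a probability map.
        import numpy as np

        rng = np.random.default_rng(0)
        n_real, ny, nx = 200, 50, 50
        realizations = rng.lognormal(3.0, 1.0, size=(n_real, ny, nx))   # stand-in fields

        threshold = 35.0                                        # hypothetical clean-up level
        prob_exceed = (realizations > threshold).mean(axis=0)   # probability map, shape (ny, nx)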

  15. A probabilistic method for estimating system susceptibility to HPM

    SciTech Connect

    Mensing, R.W.

    1989-05-18

    Interruption of the operation of electronic systems by HPM is a stochastic process. Thus, a realistic estimate of system susceptibility to HPM is best expressed in terms of the probability the HPM have an effect on the system (probability of effect). To estimate susceptibility of complex electronic systems by extensive testing is not practical. Thus, it is necessary to consider alternative approaches. One approach is to combine information from extensive low level testing and computer modeling with limited high level field test data. A method for estimating system susceptibility based on a pretest analysis of low level test and computer model data combined with a post test analysis after high level testing is described in this paper. 4 figs.

  16. Measurement Uncertainty and Probability

    NASA Astrophysics Data System (ADS)

    Willink, Robin

    2013-02-01

    Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.

  17. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shards generation) with diverse potentials in military and civil M&S applications such as training and mission planning.

  18. Probability of relativistic electron trapping by parallel and oblique whistler-mode waves in Earth's radiation belts

    SciTech Connect

    Artemyev, A. V.; Vasiliev, A. A.; Neishtadt, A. I.; Mourenas, D.; Krasnoselskikh, V.

    2015-11-15

    We investigate electron trapping by high-amplitude whistler-mode waves propagating at small as well as large angles relative to geomagnetic field lines. The inhomogeneity of the background magnetic field can result in an effective acceleration of trapped particles. Here, we derive useful analytical expressions for the probability of electron trapping by both parallel and oblique waves, paving the way for a full analytical description of trapping effects on the particle distribution. Numerical integrations of particle trajectories allow us to demonstrate the accuracy of the derived analytical estimates. For realistic wave amplitudes, the probabilities of trapping are generally comparable for oblique and parallel waves, but they turn out to be most efficient over complementary energy ranges. Trapping acceleration of <100 keV electrons is mainly provided by oblique waves, while parallel waves are responsible for the trapping acceleration of >100 keV electrons.

  19. Adiabatic Hyperspherical Analysis of Realistic Nuclear Potentials

    NASA Astrophysics Data System (ADS)

    Daily, K. M.; Kievsky, Alejandro; Greene, Chris H.

    2015-12-01

    Using the hyperspherical adiabatic method with the realistic nuclear potentials Argonne V14, Argonne V18, and Argonne V18 with the Urbana IX three-body potential, we calculate the adiabatic potentials and the triton bound state energies. We find that a discrete variable representation with the slow variable discretization method along the hyperradial degree of freedom results in energies consistent with the literature. However, using a Laguerre basis results in missing energy, even when extrapolated to an infinite number of basis functions and channels. We do not include the isospin T = 3/2 contribution in our analysis.

  20. Quantum states prepared by realistic entanglement swapping

    SciTech Connect

    Scherer, Artur; Howard, Regina B.; Sanders, Barry C.; Tittel, Wolfgang

    2009-12-15

    Entanglement swapping between photon pairs is a fundamental building block in schemes using quantum relays or quantum repeaters to overcome the range limits of long-distance quantum key distribution. We develop a closed-form solution for the actual quantum states prepared by realistic entanglement swapping, which takes into account experimental deficiencies due to inefficient detectors, detector dark counts, and multiphoton-pair contributions of parametric down-conversion sources. We investigate how the entanglement present in the final state of the remaining modes is affected by the real-world imperfections. To test the predictions of our theory, comparison with previously published experimental entanglement swapping is provided.

  1. Realist model approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hájíček, P.

    2013-06-01

    The paper proves that quantum mechanics is compatible with the constructive realism of modern philosophy of science. The proof is based on the observation that properties of quantum systems that are uniquely determined by their preparations can be assumed objective without the difficulties that are encountered by the same assumption about values of observables. The resulting realist interpretation of quantum mechanics is made rigorous by studying the space of quantum states—the convex set of state operators. Prepared states are classified according to their statistical structure into indecomposable and decomposable instead of pure and mixed. Simple objective properties are defined and showed to form a Boolean lattice.

  2. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of 2×(27 + 27-bar) + 351′ + 351′-bar representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group into the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  3. Oil spill contamination probability in the southeastern Levantine basin.

    PubMed

    Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam

    2015-02-15

    Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. PMID:25534630

  4. 14 CFR 417.224 - Probability of failure analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., must account for launch vehicle failure probability in a consistent manner. A launch vehicle failure... probabilistically valid. For a launch vehicle with fewer than two flights, the failure probability estimate must... circumstances. For a launch vehicle with two or more flights, launch vehicle failure probability......

  5. Realistic Ground Motion Scenarios: Methodological Approach

    SciTech Connect

    Nunziata, C.; Peresan, A.; Romanelli, F.; Vaccari, F.; Zuccolo, E.; Panza, G. F.

    2008-07-08

    The definition of realistic seismic input can be obtained from the computation of a wide set of time histories, corresponding to possible seismotectonic scenarios. The propagation of the waves in the bedrock from the source to the local laterally varying structure is computed with the modal summation technique, while in the laterally heterogeneous structure the finite difference method is used. The definition of shear wave velocities within the soil cover is obtained from the non-linear inversion of the dispersion curve of group velocities of Rayleigh waves, artificially or naturally generated. Information about the possible focal mechanisms of the sources can be obtained from historical seismicity, based on earthquake catalogues and inversion of isoseismal maps. In addition, morphostructural zonation and pattern recognition of seismogenic nodes is useful to identify areas prone to strong earthquakes, based on the combined analysis of topographic, tectonic, geological maps and satellite photos. We show that the quantitative knowledge of regional geological structures and the computation of realistic ground motion can be a powerful tool for a preventive definition of the seismic hazard in Italy. Then, the formulation of reliable building codes, based on the evaluation of the main potential earthquakes, will have a great impact on the effective reduction of the seismic vulnerability of Italian urban areas, validating or improving the national building code.

  6. Realistic magnetohydrodynamical simulation of solar local supergranulation

    NASA Astrophysics Data System (ADS)

    Ustyugov, Sergey D.

    2010-12-01

    Three-dimensional numerical simulations of solar surface magnetoconvection using realistic model physics are conducted. The thermal structure of convective motions into the upper radiative layers of the photosphere, the main scales of convective cells and the penetration depths of convection are investigated. We take part of the solar photosphere with a size of 60×60 Mm² in the horizontal direction and of depth 20 Mm from the level of the visible solar surface. We use a realistic initial model of the sun and apply the equation of state and opacities of stellar matter. The equations of fully compressible radiation magnetohydrodynamics (MHD) with dynamical viscosity and gravity are solved. We apply (i) the conservative total variation diminishing (TVD) difference scheme for MHD, (ii) the diffusion approximation for radiative transfer and (iii) dynamical viscosity from subgrid-scale modeling. In simulation, we take a uniform two-dimensional grid in the horizontal plane and a nonuniform grid in the vertical direction with the number of cells being 600×600×204. We use 512 processors with distributed memory multiprocessors on the supercomputer MVS-100k at the Joint Computational Centre of the Russian Academy of Sciences.

  7. The realist interpretation of the atmosphere

    NASA Astrophysics Data System (ADS)

    Anduaga, Aitor

    The discovery of a clearly stratified structure of layers in the upper atmosphere has been--and still is--invoked too often as the great paradigm of atmospheric sciences in the 20th century. Behind this vision lies an emphasis--or better, an overstatement--on the reality of the concept of layer. One of the few historians of physics who have not ignored this phenomenon of reification, C. Stewart Gillmor, attributed it to--somewhat ambiguous--cultural (or perhaps, more generally, contextual) factors, though he never specified their nature. In this essay, I aim to demonstrate that, in the interwar years, most radiophysicists and some atomic physicists, for reasons principally related to extrinsic influences and to a lesser extent to internal developments of their own science, fervidly embraced a realist interpretation of the ionosphere. We will focus on the historical circumstances in which a specific social and commercial environment came to exert a strong influence on upper atmospheric physicists, and in which realism as a product validating the "truth" of certain practices and beliefs arose. This realist commitment I attribute to the mutual reinforcement of atmospheric physics and commercial and imperial interests in long-distance communications.

  8. Realistic Radio Communications in Pilot Simulator Training

    NASA Technical Reports Server (NTRS)

    Burki-Cohen, Judith; Kendra, Andrew J.; Kanki, Barbara G.; Lee, Alfred T.

    2000-01-01

    Simulators used for total training and evaluation of airline pilots must satisfy stringent criteria in order to assure their adequacy for training and checking maneuvers. Air traffic control and company radio communications simulation, however, may still be left to role-play by the already taxed instructor/evaluators in spite of their central importance in every aspect of the flight environment. The underlying premise of this research is that providing a realistic radio communications environment would increase safety by enhancing pilot training and evaluation. This report summarizes the first-year efforts of assessing the requirement and feasibility of simulating radio communications automatically. A review of the training and crew resource/task management literature showed both practical and theoretical support for the need for realistic radio communications simulation. A survey of 29 instructor/evaluators from 14 airlines revealed that radio communications are mainly role-played by the instructor/evaluators. This increases instructor/evaluators' own workload while unrealistically lowering pilot communications load compared to actual operations, with a concomitant loss in training/evaluation effectiveness. A technology review searching for an automated means of providing radio communications to and from aircraft with minimal human effort showed that while promising, the technology is still immature. Further research and the need for establishing a proof-of-concept are also discussed.

  9. Emptiness Formation Probability

    NASA Astrophysics Data System (ADS)

    Crawford, Nicholas; Ng, Stephen; Starr, Shannon

    2016-08-01

    We present rigorous upper and lower bounds on the emptiness formation probability for the ground state of a spin-1/2 Heisenberg XXZ quantum spin system. For a d-dimensional system we find a rate of decay of the order exp(-c L^{d+1}) where L is the sidelength of the box in which we ask for the emptiness formation event to occur. In the d = 1 case this confirms previous predictions made in the integrable systems community, though our bounds do not achieve the precision predicted by Bethe ansatz calculations. On the other hand, our bounds in the case d ≥ 2 are new. The main tools we use are reflection positivity and a rigorous path integral expansion, which is a variation on those previously introduced by Toth, Aizenman-Nachtergaele and Ueltschi.

  10. Climate Sensitivity to Realistic Solar Heating of Snow and Ice

    NASA Astrophysics Data System (ADS)

    Flanner, M.; Zender, C. S.

    2004-12-01

    Snow and ice-covered surfaces are highly reflective and play an integral role in the planetary radiation budget. However, GCMs typically prescribe snow reflection and absorption based on minimal knowledge of snow physical characteristics. We performed climate sensitivity simulations with the NCAR CCSM including a new physically-based multi-layer snow radiative transfer model. The model predicts the effects of vertically resolved heating, absorbing aerosol, and snowpack transparency on snowpack evolution and climate. These processes significantly reduce the model's near-infrared albedo bias over deep snowpacks. While the current CCSM implementation prescribes all solar radiative absorption to occur in the top 2 cm of snow, we estimate that about 65% occurs beneath this level. Accounting for the vertical distribution of snowpack heating and more realistic reflectance significantly alters snowpack depth, surface albedo, and surface air temperature over Northern Hemisphere regions. Implications for the strength of the ice-albedo feedback will be discussed.

  11. Characteristic length of the knotting probability revisited

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2015-09-01

    We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.
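
    A minimal sketch of how such a characteristic length could be extracted from simulation output, assuming placeholder knotting probabilities and ignoring any slowly varying prefactor in front of the exponential:

        # Sketch: fit N_K from the large-N decay P(N) ~ exp(-N/N_K).
        import numpy as np

        N = np.array([200, 400, 800, 1600, 3200])        # number of SAP segments
        P = np.array([0.90, 0.81, 0.66, 0.43, 0.19])     # hypothetical knotting probabilities

        slope, intercept = np.polyfit(N, np.log(P), 1)   # log P is approximately linear in N
        N_K = -1.0 / slope
        print(f"characteristic length N_K ~ {N_K:.0f} segments")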

  12. Inclusion probability with dropout: an operational formula.

    PubMed

    Milot, E; Courteau, J; Crispino, F; Mailly, F

    2015-05-01

    In forensic genetics, a mixture of two or more contributors to a DNA profile is often interpreted using the inclusion probabilities theory. In this paper, we present a general formula for estimating the probability of inclusion (PI, also known as the RMNE probability) from a subset of visible alleles when dropouts are possible. This one-locus formula can easily be extended to multiple loci using the cumulative probability of inclusion. We show that an exact formulation requires fixing the number of contributors, hence to slightly modify the classic interpretation of the PI. We discuss the implications of our results for the enduring debate over the use of PI vs likelihood ratio approaches within the context of low template amplifications. PMID:25559642
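
    For context, a minimal sketch of the classical no-dropout calculation that the paper generalizes (the dropout-aware formula itself is not reproduced here): the per-locus PI is the squared sum of the visible-allele frequencies under Hardy-Weinberg assumptions, and the cumulative PI is the product over loci.

        # Sketch: classical RMNE / cumulative probability of inclusion (no dropout).
        import numpy as np

        loci_allele_freqs = [           # hypothetical frequencies of alleles seen in the mixture
            [0.12, 0.08, 0.21],         # locus 1
            [0.30, 0.05],               # locus 2
            [0.10, 0.15, 0.07, 0.09],   # locus 3
        ]

        pi_per_locus = [sum(freqs) ** 2 for freqs in loci_allele_freqs]
        cumulative_pi = float(np.prod(pi_per_locus))
        print(f"CPI = {cumulative_pi:.3e}")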

  13. Survival probability in patients with liver trauma.

    PubMed

    Buci, Skender; Kukeli, Agim

    2016-08-01

    Purpose - The purpose of this paper is to assess the survival probability among patients with liver trauma injury using the anatomical and psychological scores of conditions, characteristics and treatment modes. Design/methodology/approach - A logistic model is used to estimate 173 patients' survival probability. Data are taken from patient records. Only emergency room patients admitted to University Hospital of Trauma (former Military Hospital) in Tirana are included. Data are recorded anonymously, preserving the patients' privacy. Findings - When correctly predicted, the logistic models show that survival probability varies from 70.5 percent up to 95.4 percent. The degree of trauma injury, trauma with liver and other organs, total days the patient was hospitalized, and treatment method (conservative vs intervention) are statistically important in explaining survival probability. Practical implications - The study gives patients, their relatives and physicians ample and sound information they can use to predict survival chances, the best treatment and resource management. Originality/value - This study, which has not been done previously, explores survival probability, success probability for conservative and non-conservative treatment, and success probability for single vs multiple injuries from liver trauma. PMID:27477933
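
    A minimal sketch of the kind of logistic model described above, with illustrative covariates and synthetic data only (not the study's dataset or exact specification):

        # Sketch: logistic regression for survival probability from trauma covariates.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 173
        X = np.column_stack([
            rng.integers(1, 6, n),      # hypothetical trauma grade
            rng.integers(0, 2, n),      # other organs injured (0/1)
            rng.integers(1, 30, n),     # days hospitalized
            rng.integers(0, 2, n),      # treatment: 0 = conservative, 1 = intervention
        ])
        y = rng.integers(0, 2, n)       # survival outcome (placeholder labels)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        survival_prob = model.predict_proba(X)[:, 1]   # estimated survival probabilities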

  14. HELIOSEISMOLOGY OF A REALISTIC MAGNETOCONVECTIVE SUNSPOT SIMULATION

    SciTech Connect

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  15. Helioseismology of a Realistic Magnetoconvective Sunspot Simulation

    NASA Technical Reports Server (NTRS)

    Braun, D. C.; Birch, A. C.; Rempel, M.; Duvall, T. L., Jr.

    2012-01-01

    We compare helioseismic travel-time shifts measured from a realistic magnetoconvective sunspot simulation using both helioseismic holography and time-distance helioseismology, and measured from real sunspots observed with the Helioseismic and Magnetic Imager instrument on board the Solar Dynamics Observatory and the Michelson Doppler Imager instrument on board the Solar and Heliospheric Observatory. We find remarkable similarities in the travel-time shifts measured between the methodologies applied and between the simulated and real sunspots. Forward modeling of the travel-time shifts using either Born or ray approximation kernels and the sound-speed perturbations present in the simulation indicates major disagreements with the measured travel-time shifts. These findings do not substantially change with the application of a correction for the reduction of wave amplitudes in the simulated and real sunspots. Overall, our findings demonstrate the need for new methods for inferring the subsurface structure of sunspots through helioseismic inversions.

  16. Field line resonances in a realistic magnetosphere

    SciTech Connect

    Mukherjee, G.K.; Rajaram, R.

    1989-04-01

    An internally consistent theoretical framework is developed to study the field line oscillations in the realistic magnetospheric magnetic field using the Mead and Fairfield (1975) model. The nondipolar contributions are numerically computed for the fundamental period of the modes that would reduce to the localized toroidal and poloidal modes described by Cummings et al. (1969) in the dipole limit. It is shown that the nondipolar contributions are not significant at the geostationary orbit but become large further out in the magnetosphere. The nondipolar contributions are very different for the two modes. The situation becomes much more complicated in the dawn/dusk region, where a continuous range of periods exists depending on the orientation of the field line oscillation.

  17. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    The booming electronic book (e-book), as an extension of the paper book, is popular with readers. Recently, many efforts have been put into realistic page-turning simulation of e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page into the cylinders with different radii and positions. Compared to previous approaches, our method is able to imitate various effects efficiently and obtains a more natural animation of the turning page.

  18. Realistic limits for subpixel movement detection.

    PubMed

    Mas, David; Perez, Jorge; Ferrer, Belen; Espinosa, Julian

    2016-07-01

    Object tracking with subpixel accuracy is of fundamental importance in many fields since it provides optimal performance at relatively low cost. Although there are many theoretical proposals that lead to resolution increments of several orders of magnitude, in practice this resolution is limited by the imaging systems. In this paper we propose and demonstrate through simple numerical models a realistic limit for subpixel accuracy. The final result is that the maximum achievable resolution enhancement is connected with the dynamic range of the image, i.e., the detection limit is 1/2^(nr. of bits). The results here presented may aid in proper design of superresolution experiments in microscopy, surveillance, defense, and other fields. PMID:27409179

  19. Will Realistic Fossil Fuel Burning Scenarios Prevent Catastrophic Climate Change?

    NASA Astrophysics Data System (ADS)

    Tans, P. P.; Rutledge, D.

    2012-12-01

    In the IPCC Special Report on Emissions Scenarios the driving forces are almost entirely demographic and socio-economic, with scant attention given to potential resource limitations. In a recent study, D. Rutledge (2011) shows that, for historical coal production, a stable estimate of a region's total long-term production (typically much lower than early estimates of reserves) can be obtained from actual production numbers well before peak production is reached. The estimates are based on produced quantities only, and appear to contradict the assumption of dominant control by socio-economic factors and improvements in technology. Therefore, a projection of climate forcing based on an emissions scenario close to the lowest of the IPCC scenarios may be more realistic. The longevity of the CO2 enhancement in the atmosphere and oceans is thousands of years. The partitioning of the CO2 enhancement between atmosphere and oceans, and thus climate forcing by CO2, is calculated until the year 2500. The fundamental difficulty of CO2 removal strategies is pointed out. The integral of climate forcing until 2500 under a low emissions scenario is still so large that climate change may become an impediment to human development in addition to higher energy costs. D. Rutledge, International J. Coal Geology 85, 23-33 (2011).

  20. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider sweep efficiency of the injected phase as quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both the standard and multilevel Monte Carlo methods.
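
    For orientation, a minimal sketch of the quantity being estimated, a plain Monte Carlo point evaluation of the failure probability P(Q < q_crit) with a mocked flow solve; the paper's multilevel and selective-refinement machinery is not reproduced here:

        # Sketch: standard Monte Carlo estimate of a failure probability.
        import numpy as np

        def sweep_efficiency(permeability_field):
            # Placeholder for an expensive two-phase flow simulation returning the QoI.
            return float(np.clip(0.5 + permeability_field.mean(), 0.0, 1.0))

        rng = np.random.default_rng(2)
        q_crit = 0.45                                   # hypothetical critical sweep efficiency
        q = np.array([sweep_efficiency(rng.normal(0.0, 0.2, 100)) for _ in range(5000)])

        p_fail = np.mean(q < q_crit)                    # failure probability estimate
        std_err = np.sqrt(p_fail * (1.0 - p_fail) / len(q))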

  1. Two Realistic Beagle Models for Dose Assessment.

    PubMed

    Stabin, Michael G; Kost, Susan D; Segars, William P; Guilmette, Raymond A

    2015-09-01

    Previously, the authors developed a series of eight realistic digital mouse and rat whole body phantoms based on NURBS technology to facilitate internal and external dose calculations in various species of rodents. In this paper, two body phantoms of adult beagles are described based on voxel images converted to NURBS models. Specific absorbed fractions for activity in 24 organs are presented in these models. CT images were acquired of an adult male and female beagle. The images were segmented, and the organs and structures were modeled using NURBS surfaces and polygon meshes. Each model was voxelized at a resolution of 0.75 × 0.75 × 2 mm. The voxel versions were implemented in GEANT4 radiation transport codes to calculate specific absorbed fractions (SAFs) using internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in both models. The SAFs for photons and electrons were compatible with results observed by others. Absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of these small-sized organs, and measurable cross irradiation was observed for many organ pairs for high-energy electrons (as would be emitted by nuclides like 32P, 90Y, or 188Re). The SAFs were used with standardized decay data to develop dose factors (DFs) for radiation dose calculations using the RADAR Method. These two new realistic models of male and female beagle dogs will be useful in radiation dosimetry calculations for external or internal simulated sources. PMID:26222214
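
    In the notation commonly used for the RADAR/MIRD schema (an outline of the dose-factor step mentioned above, not necessarily the exact symbols of the paper), the dose to a target region r_T is assembled from the cumulated decays and the tabulated SAFs:

        D(r_T) = \sum_{S} \tilde{N}_S \, \mathrm{DF}(r_T \leftarrow r_S),
        \qquad
        \mathrm{DF}(r_T \leftarrow r_S) = k \sum_i n_i E_i \, \Phi_i(r_T \leftarrow r_S),

    where \tilde{N}_S is the number of disintegrations in source region r_S, n_i and E_i are the yield and energy of emission i, \Phi_i is the corresponding specific absorbed fraction, and k is a unit-conversion constant.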

  2. Demonstrating a Realistic IP Mission Prototype

    NASA Technical Reports Server (NTRS)

    Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith

    2003-01-01

    Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.

  3. Compact entanglement distillery using realistic quantum memories

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, Levon; Guérin, Stéphane; Nunn, Joshua; Datta, Animesh

    2013-10-01

    We adopt the beam-splitter model for losses to analyze the performance of a recent compact continuous-variable entanglement distillation protocol [A. Datta et al., Phys. Rev. Lett. 108, 060502 (2012)] implemented using realistic quantum memories. We show that the decoherence undergone by a two-mode squeezed state while stored in a quantum memory can strongly modify the results of the preparatory step of the protocol. We find that the well-known method for locally increasing entanglement, phonon subtraction, may not result in entanglement gain when losses are taken into account. Thus, we investigate the critical number m_c of phonon subtraction attempts from the matter modes of the quantum memory. If the initial state is not de-Gaussified within m_c attempts, the protocol should be restarted to obtain any entanglement increase. Moreover, the condition m_c > 1 implies an additional constraint on the subtraction beam-splitter interaction transmissivity, viz., it should be about 50% for a wide range of protocol parameters. Additionally, we consider the average entanglement rate, which takes into account both the unavoidable probabilistic nature of the protocol and its possible failure as a result of a large number of unsuccessful subtraction attempts. We find that a higher value of the average entanglement can be achieved by increasing the subtraction beam-splitter interaction transmissivity. We conclude that the compact distillation protocol with the practical constraints coming from realistic quantum memories allows a feasible experimental realization within existing technologies.

  4. On the probability of exceeding allowable leak rates through degraded steam generator tubes

    SciTech Connect

    Cizelj, L.; Sorsek, I.; Riesch-Oppermann, H.

    1997-02-01

    This paper discusses some possible ways of predicting the behavior of the total leak rate through the damaged steam generator tubes. This failure mode is of special concern in cases where most through-wall defects may remain in operation. A particular example is the application of the alternate (bobbin coil voltage) plugging criterion to Outside Diameter Stress Corrosion Cracking at the tube support plate intersections. It is the authors' aim to discuss some possible modeling options that could be applied to solve the problem formulated as: Estimate the probability that the sum of all individual leak rates through degraded tubes exceeds the predefined acceptable value. The probabilistic approach is of course aiming at a reliable and computationally bearable estimate of the failure probability. A closed form solution is given for a special case of exponentially distributed individual leak rates. Also, some possibilities for the use of computationally efficient First and Second Order Reliability Methods (FORM and SORM) are discussed. The first numerical example compares the results of approximate methods with closed form results. SORM in particular shows acceptable agreement. The second numerical example considers a realistic case of the NPP in Krsko, Slovenia.
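
    For the special case of independent, identically exponentially distributed individual leak rates that the abstract mentions, one standard closed form follows from the sum of n i.i.d. Exp(λ) variables being Erlang(n, λ); whether this is exactly the paper's expression is an assumption:

        P\!\left(\sum_{i=1}^{n} Q_i > Q_{\mathrm{allow}}\right)
          = e^{-\lambda Q_{\mathrm{allow}}} \sum_{k=0}^{n-1} \frac{(\lambda Q_{\mathrm{allow}})^{k}}{k!}.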

  5. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
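
    A minimal sketch of the conditional-sampling idea on a toy problem (not the authors' implementation): enclose the failure set in a simple bounding set B whose probability is known exactly, then sample only inside B and multiply the two factors.

        # Sketch: P(fail) = P(B) * P(fail | B), with sampling restricted to B.
        import numpy as np

        rng = np.random.default_rng(3)

        # Uncertain parameters uniform on [-1, 1]^2; "failure" when x1 + x2 > 1.9.
        def is_failure(x1, x2):
            return x1 + x2 > 1.9

        # Bounding set B = [0.9, 1] x [0.9, 1] contains the failure set;
        # its probability under the uniform distribution is known analytically.
        p_B = (0.1 / 2.0) ** 2                       # = 0.0025

        n = 10_000
        x1 = rng.uniform(0.9, 1.0, n)                # conditional sampling inside B
        x2 = rng.uniform(0.9, 1.0, n)
        p_fail = p_B * np.mean(is_failure(x1, x2))   # exact answer here is 0.00125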

  6. MSPI False Indication Probability Simulations

    SciTech Connect

    Dana Kelly; Kurt Vedros; Robert Youngblood

    2011-03-01

    This paper examines false indication probabilities in the context of the Mitigating System Performance Index (MSPI), in order to investigate the pros and cons of different approaches to resolving two coupled issues: (1) sensitivity to the prior distribution used in calculating the Bayesian-corrected unreliability contribution to the MSPI, and (2) whether (in a particular plant configuration) to model the fuel oil transfer pump (FOTP) as a separate component, or integrally to its emergency diesel generator (EDG). False indication probabilities were calculated for the following situations: (1) all component reliability parameters at their baseline values, so that the true indication is green, meaning that an indication of white or above would be false positive; (2) one or more components degraded to the extent that the true indication would be (mid) white, and “false” would be green (negative) or yellow (negative) or red (negative). In key respects, this was the approach taken in NUREG-1753. The prior distributions examined were the constrained noninformative (CNI) prior used currently by the MSPI, a mixture of conjugate priors, the Jeffreys noninformative prior, a nonconjugate log(istic)-normal prior, and the minimally informative prior investigated in (Kelly et al., 2010). The mid-white performance state was set at ΔCDF = √10 × 10⁻⁶/yr. For each simulated time history, a check is made of whether the calculated ΔCDF is above or below 10⁻⁶/yr. If the parameters were at their baseline values, and ΔCDF > 10⁻⁶/yr, this is counted as a false positive. Conversely, if one or all of the parameters are set to values corresponding to ΔCDF > 10⁻⁶/yr but that time history’s ΔCDF < 10⁻⁶/yr, this is counted as a false negative indication. The false indication (positive or negative) probability is then estimated as the number of false positive or negative counts divided by the number of time histories (100,000). Results are presented for a set of base case parameter values
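
    A minimal sketch of the false-indication counting logic described above, with a placeholder sampling model standing in for the MSPI calculation and its priors:

        # Sketch: estimate a false-positive indication probability by simulation.
        import numpy as np

        rng = np.random.default_rng(4)
        n_histories = 100_000
        threshold = 1.0e-6                # ΔCDF boundary between green and white

        # Hypothetical spread of estimated ΔCDF when true parameters are at baseline
        # (true indication green), standing in for the Bayesian-corrected MSPI output.
        delta_cdf_est = rng.lognormal(np.log(2.0e-7), 1.0, n_histories)

        false_positive_prob = np.mean(delta_cdf_est > threshold)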

  7. Explosion probability of unexploded ordnance: expert beliefs.

    PubMed

    MacDonald, Jacqueline Anne; Small, Mitchell J; Morgan, M G

    2008-08-01

    This article reports on a study to quantify expert beliefs about the explosion probability of unexploded ordnance (UXO). Some 1,976 sites at closed military bases in the United States are contaminated with UXO and are slated for cleanup, at an estimated cost of $15-140 billion. Because no available technology can guarantee 100% removal of UXO, information about explosion probability is needed to assess the residual risks of civilian reuse of closed military bases and to make decisions about how much to invest in cleanup. This study elicited probability distributions for the chance of UXO explosion from 25 experts in explosive ordnance disposal, all of whom have had field experience in UXO identification and deactivation. The study considered six different scenarios: three different types of UXO handled in two different ways (one involving children and the other involving construction workers). We also asked the experts to rank by sensitivity to explosion 20 different kinds of UXO found at a case study site at Fort Ord, California. We found that the experts do not agree about the probability of UXO explosion, with significant differences among experts in their mean estimates of explosion probabilities and in the amount of uncertainty that they express in their estimates. In three of the six scenarios, the divergence was so great that the average of all the expert probability distributions was statistically indistinguishable from a uniform (0, 1) distribution, suggesting that the sum of expert opinion provides no information at all about the explosion risk. The experts' opinions on the relative sensitivity to explosion of the 20 UXO items also diverged. The average correlation between rankings of any pair of experts was 0.41, which, statistically, is barely significant (p = 0.049) at the 95% confidence level. Thus, one expert's rankings provide little predictive information about another's rankings. The lack of consensus among experts suggests that empirical studies

  8. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  9. The Probability of Causal Conditionals

    ERIC Educational Resources Information Center

    Over, David E.; Hadjichristidis, Constantinos; Evans, Jonathan St. B. T.; Handley, Simon J.; Sloman, Steven A.

    2007-01-01

    Conditionals in natural language are central to reasoning and decision making. A theoretical proposal called the Ramsey test implies the conditional probability hypothesis: that the subjective probability of a natural language conditional, P(if p then q), is the conditional subjective probability, P(q|p). We report three experiments on…

  10. Experiment design for measuring the probability of detection in remote sensing: how many objects and how many passes

    NASA Astrophysics Data System (ADS)

    Torrione, Peter A.; Collins, Leslie M.; Morton, Kenneth D.

    2014-05-01

    Buried threat detection system (e.g., GPR, FLIR, EMI) performance can be summarized through two related statistics: the probability of detection (PD), and the false alarm rate (FAR). These statistics impact system rate of forward advance, clearance probability, and the overall usefulness of the system. Understanding system PD and FAR for each target type of interest is fundamental to making informed decisions regarding system procurement and deployment. Since PD and FAR cannot be measured directly, proper experimental design is required to ensure that estimates of PD and FAR are accurate. Given an unlimited number of target emplacements, estimating PD is straightforward. However in realistic scenarios with constrained budgets, limited experimental collection time and space, and limited number of targets, estimating PD becomes significantly more complicated. For example, it may be less expensive to collect data over the same exact target emplacement multiple times than to collect once over multiple unique target emplacements. Clearly there is a difference between the quantity and value of the information obtained from these two experiments (one collection over multiple objects, and multiple collections over one particular object). This work will clarify and quantify the amount of information gained from multiple data collections over one target compared to collecting over multiple unique target burials. Results provide a closed-form solution to estimating the relative value of collecting multiple times over one object, or emplacing a new object, and how to optimize experimental design to achieve stated goals and simultaneously minimize cost.
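
    One way to make the trade-off concrete is the standard design-effect formula for clustered binary observations (an assumption used here for illustration, not the paper's derivation): m passes over each of k unique emplacements with intra-target correlation rho are worth about k*m / (1 + (m - 1)*rho) independent looks when estimating PD.

        # Sketch: effective sample size for repeated passes over the same targets.
        def effective_sample_size(k_targets, m_passes, rho):
            return k_targets * m_passes / (1.0 + (m_passes - 1) * rho)

        # 20 targets x 5 passes vs 100 targets x 1 pass, assuming rho = 0.5:
        print(effective_sample_size(20, 5, 0.5))    # ~33.3 effective looks
        print(effective_sample_size(100, 1, 0.5))   # 100.0 effective looks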

  11. Computer simulations of realistic three-dimensional microstructures

    NASA Astrophysics Data System (ADS)

    Mao, Yuxiong

    A novel and efficient methodology is developed for computer simulations of realistic two-dimensional (2D) and three-dimensional (3D) microstructures. The simulations incorporate realistic 2D and 3D complex morphologies/shapes, spatial patterns, anisotropy, volume fractions, and size distributions of the microstructural features statistically similar to those in the corresponding real microstructures. The methodology permits simulations of sufficiently large 2D as well as 3D microstructural windows that incorporate short-range (on the order of particle/feature size) as well as long-range (hundred times the particle/feature size) microstructural heterogeneities and spatial patterns at high resolution. The utility of the technique has been successfully demonstrated through its application to the 2D microstructures of the constituent particles in wrought Al-alloys, the 3D microstructure of discontinuously reinforced Al-alloy (DRA) composites containing SiC particles that have complex 3D shapes/morphologies and spatial clustering, and 3D microstructure of boron modified Ti-6Al-4V composites containing fine TiB whiskers and coarse primary TiB particles. The simulation parameters are correlated with the materials processing parameters (such as composition, particle size ratio, extrusion ratio, extrusion temperature, etc.), which enables the simulations of rational virtual 3D microstructures for the parametric studies on microstructure-properties relationships. The simulated microstructures have been implemented in the 3D finite-elements (FE)-based framework for simulations of micro-mechanical response and stress-strain curves. Finally, a new unbiased and assumption free dual-scale virtual cycloids probe for estimating surface area of 3D objects constructed by 2D serial section images is also presented.

  12. Differentiability of correlations in realistic quantum mechanics

    SciTech Connect

    Cabrera, Alejandro; Faria, Edson de; Pujals, Enrique; Tresser, Charles

    2015-09-15

    We prove a version of Bell’s theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell’s theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.

  13. Differentiability of correlations in realistic quantum mechanics

    NASA Astrophysics Data System (ADS)

    Cabrera, Alejandro; de Faria, Edson; Pujals, Enrique; Tresser, Charles

    2015-09-01

    We prove a version of Bell's theorem in which the locality assumption is weakened. We start by assuming theoretical quantum mechanics and weak forms of relativistic causality and of realism (essentially the fact that observable values are well defined independently of whether or not they are measured). Under these hypotheses, we show that only one of the correlation functions that can be formulated in the framework of the usual Bell theorem is unknown. We prove that this unknown function must be differentiable at certain angular configuration points that include the origin. We also prove that, if this correlation is assumed to be twice differentiable at the origin, then we arrive at a version of Bell's theorem. On the one hand, we are showing that any realistic theory of quantum mechanics which incorporates the kinematic aspects of relativity must lead to this type of rough correlation function that is once but not twice differentiable. On the other hand, this study brings us a single degree of differentiability away from a relativistic von Neumann no hidden variables theorem.

  14. Real-time realistic skin translucency.

    PubMed

    Jimenez, Jorge; Whelan, David; Sundstedt, Veronica; Gutierrez, Diego

    2010-01-01

    Diffusion theory allows the production of realistic skin renderings. The dipole and multipole models allow for solving challenging diffusion-theory equations efficiently. By using texture-space diffusion, a Gaussian-based approximation, and programmable graphics hardware, developers can create real-time, photorealistic skin renderings. Performing this diffusion in screen space offers advantages that make diffusion approximation practical in scenarios such as games, where having the best possible performance is crucial. However, unlike the texture-space counterpart, the screen-space approach can't simulate transmittance of light through thin geometry; it yields unrealistic results in those cases. A new transmittance algorithm turns the screen-space approach into an efficient global solution, capable of simulating both reflectance and transmittance of light through a multilayered skin model. The transmittance calculations are derived from physical equations, which are implemented through simple texture access. The method performs in real time, requiring no additional memory usage and only minimal additional processing power and memory bandwidth. Despite its simplicity, this practical model manages to reproduce the look of images rendered with other techniques (both offline and real time) such as photon mapping or diffusion approximation. PMID:20650726

  15. Alveolar mechanics using realistic acinar models

    NASA Astrophysics Data System (ADS)

    Kumar, Haribalan; Lin, Ching-Long; Tawhai, Merryn H.; Hoffman, Eric A.

    2009-11-01

    Accurate modeling of the mechanics in terminal airspaces of the lung is desirable for study of particle transport and pathology. The flow in the acinar region is traditionally studied by employing prescribed boundary conditions to represent rhythmic breathing and volumetric expansion. Conventional models utilize simplified spherical or polygonal units to represent the alveolar duct and sac. Accurate prediction of flow and transport characteristics may require geometries reconstructed from CT-based images and serve to understand the importance of physiologically realistic representation of the acinus. In this effort, we present a stabilized finite element framework, supplemented with appropriate boundary conditions at the alveolar mouth and septal borders for simulation of the alveolar mechanics and the resulting airflow. Results of material advection based on Lagrangian tracking are presented to complete the study of transport and compare the results with simplified acinar models. The current formulation provides improved understanding and realization of a dynamic framework for parenchymal mechanics with incorporation of alveolar pressure and traction stresses.

  16. Fast sawtooth reconnection at realistic Lundquist numbers

    NASA Astrophysics Data System (ADS)

    Günter, S.; Yu, Q.; Lackner, K.; Bhattacharjee, A.; Huang, Y.-M.

    2015-01-01

    Magnetic reconnection, a ubiquitous phenomenon in astrophysics, space science and magnetic confinement research, frequently proceeds much faster than predicted by simple resistive MHD theory. Acceleration can result from the break-up of the thin Sweet-Parker current sheet into plasmoids, or from two-fluid effects decoupling mass and magnetic flux transport over the ion inertial length v_A/ω_ci or the drift scale √(T_e/m_i)/ω_ci, depending on the absence or presence of a strong magnetic guide field. We describe new results on the modelling of sawtooth reconnection in a simple tokamak geometry (circular cylindrical equilibrium) pushed to realistic Lundquist numbers for present day tokamaks. For the resistive MHD case, the onset criteria and the influence of plasmoids on the reconnection process agree well with earlier results found in the case of vanishing magnetic guide fields. While plasmoids are also observed in two-fluid calculations, they do not dominate the reconnection process for the range of plasma parameters considered in this study. In the two-fluid case they form as a transient phenomenon only. The reconnection times become weakly dependent on the S-value and, for the most complete model, including two-fluid effects and equilibrium temperature and density gradients, agree well with those experimentally found on ASDEX Upgrade (≤ 100 μs).

  17. Determination of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Dietrich, Daniel L.; Ruff, Gary A.; Urban, David

    2013-01-01

    This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), and carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and a sink for oxygen, with rates prescribed (as inputs to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to treat the pressure relief system(s) of the spacecraft more realistically, to include more combustion products (e.g. HF) in the analysis, and to attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.

  18. Realistic calculation of the hep astrophysical factor

    SciTech Connect

    L.E. Marcucci; R. Schiavilla; M. Viviani; A. Kievsky; S. Rosati

    2000-03-01

    The astrophysical factor for the proton weak capture on {sup 3}He is calculated with correlated-hyperspherical-harmonics bound and continuum wave functions corresponding to a realistic Hamiltonian consisting of the Argonne {nu}{sub 18} two-nucleon and Urbana-IX three-nucleon interactions. The nuclear weak charge and current operators have vector and axial-vector components that include one- and many-body terms. All possible multipole transitions connecting any of the p{sup 3}He S- and P-wave channels to the {sup 4}He bound state are considered. The S-factor at a p{sup 3}He center-of-mass energy of 10 keV, close to the Gamow-peak energy, is predicted to be 10.1 x 10{sup {minus}20} keV b, a factor of five larger than the standard-solar-model value. The P-wave transitions are found to be important, contributing about 40% of the calculated S-factor.

  19. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points to the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of firing patterns: silent, low-spiking, moderate-spiking, and intense-spiking activity. We observed that most of the cells in our network switch to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm². On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that Mahalanobis distance, Victor-Purpura metric, and Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems undiscriminative for these functional changes.

  20. Quantum probability and many worlds

    NASA Astrophysics Data System (ADS)

    Hemmo, Meir; Pitowsky, Itamar

    We discuss the meaning of probabilities in the many worlds interpretation of quantum mechanics. We start by presenting very briefly the many worlds theory, how the problem of probability arises, and some unsuccessful attempts to solve it in the past. Then we criticize a recent attempt by Deutsch to derive the quantum mechanical probabilities from the non-probabilistic parts of quantum mechanics and classical decision theory. We further argue that the Born probability does not make sense even as an additional probability rule in the many worlds theory. Our conclusion is that the many worlds theory fails to account for the probabilistic statements of standard (collapse) quantum mechanics.

  1. On the Realistic Stochastic Model of GPS Observables: Implementation and Performance

    NASA Astrophysics Data System (ADS)

    Zangeneh-Nejad, F.; Amiri-Simkooei, A. R.; Sharifi, M. A.; Asgari, J.

    2015-12-01

    High-precision GPS positioning requires a realistic stochastic model of observables. A realistic GPS stochastic model of observables should take into account different variances for different observation types, correlations among different observables, the satellite elevation dependence of the observables' precision, and the temporal correlation of observables. Least-squares variance component estimation (LS-VCE) is applied to GPS observables using the geometry-based observation model (GBOM). To model the satellite elevation dependence of the GPS observables' precision, an exponential model depending on the elevation angles of the satellites is also employed. Temporal correlation of the GPS observables is modelled by using a first-order autoregressive noise model. An important step in high-precision GPS positioning is double-difference integer ambiguity resolution (IAR). The fraction or percentage of successful fixes among a number of integer ambiguity resolution attempts is called the success rate. A realistic estimation of the GNSS observables covariance matrix plays an important role in the IAR. We consider the ambiguity resolution success rate for two cases, namely a nominal and a realistic stochastic model of the GPS observables, using two GPS data sets collected by the Trimble R8 receiver. The results confirm that applying a more realistic stochastic model can significantly improve the IAR success rate on individual frequencies, either on L1 or on L2. An improvement of 20% in the empirical success rate was achieved. The results also indicate that introducing the realistic stochastic model leads to a larger standard deviation for the baseline components, by a factor of about 2.6 on the data sets considered.
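
    As a point of orientation only, the sketch below shows one simple way such a stochastic model can be assembled: an exponential elevation-dependent variance combined with a first-order autoregressive (AR(1)) temporal correlation. The function names and all numerical values (sigma0, a, e0, phi, the elevation angles) are illustrative assumptions and are not taken from the record above.

      import numpy as np

      # Minimal sketch (not the authors' code): build a covariance matrix for one
      # GPS observable whose variance grows at low elevation (exponential model)
      # and whose epochs are correlated by a first-order autoregressive process.

      def elevation_variance(elev_deg, sigma0=0.003, a=2.0, e0=10.0):
          """Variance (m^2) as a function of satellite elevation in degrees."""
          sigma = sigma0 * (1.0 + a * np.exp(-elev_deg / e0))
          return sigma**2

      def ar1_correlation(n_epochs, phi=0.5):
          """AR(1) correlation matrix for n_epochs equally spaced epochs."""
          lags = np.abs(np.subtract.outer(np.arange(n_epochs), np.arange(n_epochs)))
          return phi**lags

      elev = np.array([15.0, 25.0, 40.0, 60.0, 75.0])   # elevation per epoch (deg)
      D = np.diag(np.sqrt(elevation_variance(elev)))     # per-epoch std. deviations
      C = D @ ar1_correlation(len(elev)) @ D             # full covariance matrix
      print(np.round(C * 1e6, 3))                        # shown in mm^2 for readability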

  2. Physics and Probability

    NASA Astrophysics Data System (ADS)

    Grandy, W. T., Jr.; Milonni, P. W.

    2004-12-01

    Preface; 1. Recollection of an independent thinker Joel A. Snow; 2. A look back: early applications of maximum entropy estimation to quantum statistical mechanics D. J. Scalapino; 3. The Jaynes-Cummings revival B. W. Shore and P. L. Knight; 4. The Jaynes-Cummings model and the one-atom maser H. Walther; 5. The Jaynes-Cummings model is alive and well P. Meystre; 6. Self-consistent radiation reaction in quantum optics - Jaynes' influence and a new example in cavity QED J. H. Eberly; 7. Enhancing the index of refraction in a nonabsorbing medium: phaseonium versus a mixture of two-level atoms M. O. Scully, T. W. Hänsch, M. Fleischhauer, C. H. Keitel and Shi-Yao Zhu; 8. Ed Jaynes' steak dinner problem II Michael D. Crisp; 9. Source theory of vacuum field effects Peter W. Milonni; 10. The natural line shape Edwin A. Power; 11. An operational approach to Schrödinger's cat L. Mandel; 12. The classical limit of an atom C. R. Stroud, Jr.; 13. Mutual radiation reaction in spontaneous emission Richard J. Cook; 14. A model of neutron star dynamics F. W. Cummings; 15. The kinematic origin of complex wave function David Hestenes; 16. On radar target identification C. Ray Smith; 17. On the difference in means G. Larry Bretthorst; 18. Bayesian analysis, model selection and prediction Arnold Zellner and Chung-ki Min; 19. Bayesian numerical analysis John Skilling; 20. Quantum statistical inference R. N. Silver; 21. Application of the maximum entropy principle to nonlinear systems far from equilibrium H. Haken; 22. Nonequilibrium statistical mechanics Baldwin Robertson; 23. A backward look to the future E. T. Jaynes; Appendix. Vita and bibliography of Edwin T. Jaynes; Index.

  3. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students in higher education have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not cause a lower score in probability performance. The study also revealed that motivated students benefited from the probability workshop, with their performance in the probability topic showing a positive improvement compared with before the workshop. In addition, there was a significant difference in students' performance between genders, with better achievement among female students than among male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  4. Pointwise probability reinforcements for robust statistical inference.

    PubMed

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. PMID:24300550

  5. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  6. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors, in the case of DNA where N is 4 and the components of the probability vector are the frequency of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non
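
    For orientation, the distance described above can be restated compactly in LaTeX as below. This is a paraphrase of the abstract's definition, not the paper's exact formulation; the admissibility conditions on the paths are only stated loosely here.

      D(a, b) \;=\; \inf_{p(\cdot)} \int_0^1 H\bigl(p(t)\bigr)\,\lVert \dot p(t)\rVert \, dt
               \;=\; \inf_{y(\cdot)} \int_0^L H\bigl(y(s)\bigr)\, ds ,
      \qquad H(p) = -\sum_{i=1}^{N} p_i \log p_i ,

      subject to p(0) = a, p(1) = b, p_i(t) \ge 0 and \sum_i p_i(t) = 1, where y(s) is the same path re-parameterized by arc length s \in [0, L].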

  7. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  8. Approaches to Evaluating Probability of Collision Uncertainty

    NASA Technical Reports Server (NTRS)

    Hejduk, Matthew D.; Johnson, Lauren C.

    2016-01-01

    While the two-dimensional probability of collision (Pc) calculation has served as the main input to conjunction analysis risk assessment for over a decade, it has done this mostly as a point estimate, with relatively little effort made to produce confidence intervals on the Pc value based on the uncertainties in the inputs. The present effort seeks to carry these uncertainties through the calculation in order to generate a probability density of Pc results rather than a single average value. Methods for assessing uncertainty in the primary and secondary objects' physical sizes and state estimate covariances, as well as a resampling approach to reveal the natural variability in the calculation, are presented; and an initial proposal for operationally useful display and interpretation of these data for a particular conjunction is given.
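
    The sketch below illustrates the general idea of turning a point-estimate Pc into a distribution by resampling uncertain inputs; it is not the authors' operational procedure. The 2-D Pc is itself estimated here by Monte Carlo sampling in the encounter plane, and every number, distribution, and scale factor (miss vector, covariance, hard-body radius, lognormal covariance scaling) is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)

      def pc_2d(miss, cov, hbr, n=20_000):
          """Monte Carlo estimate of the 2-D probability of collision: the fraction
          of relative-position samples falling inside the combined hard-body
          radius hbr, for miss vector `miss` and covariance `cov` (encounter plane)."""
          samples = rng.multivariate_normal(miss, cov, size=n)
          return np.mean(np.hypot(samples[:, 0], samples[:, 1]) < hbr)

      # Nominal inputs (made up for illustration): ~300 m miss distance,
      # combined covariance in metres^2, combined hard-body radius 20 m.
      miss_nom = np.array([250.0, 150.0])
      cov_nom = np.array([[200.0**2, 0.3 * 200 * 120],
                          [0.3 * 200 * 120, 120.0**2]])
      hbr_nom = 20.0

      # Resample uncertain inputs: scale the covariance and perturb the hard-body
      # radius to propagate their uncertainty into a distribution of Pc values.
      pcs = []
      for _ in range(500):
          k = rng.lognormal(mean=0.0, sigma=0.3)      # covariance scale factor
          hbr = max(rng.normal(hbr_nom, 2.0), 1.0)    # uncertain object sizes
          pcs.append(pc_2d(miss_nom, k * cov_nom, hbr))

      pcs = np.array(pcs)
      print("median Pc:", np.median(pcs), "90% interval:", np.percentile(pcs, [5, 95]))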

  9. Comparison of validity of food group intake by food frequency questionnaire between pre- and post- adjustment estimates derived from 2-day 24-hour recalls in combination with the probability of consumption.

    PubMed

    Kim, Dong Woo; Oh, Se-Young; Kwon, Sung-Ok; Kim, Jeongseon

    2012-01-01

    Validation of a food frequency questionnaire (FFQ) utilising a short-term measurement method is challenging when the reference method does not accurately reflect the usual food intake. In addition, food group intake that is not consumed on a daily basis is more critical when episodically consumed foods are related and compared. To overcome these challenges, several statistical approaches have been developed to determine usual food intake distributions. The Multiple Source Method (MSM) can calculate the usual food intake by combining the frequency questions of an FFQ with the short-term food intake amount data. In this study, we applied the MSM to estimate the usual food group intake and evaluate the validity of an FFQ with a group of 333 Korean children (aged 3-6 y) who completed two 24-hour recalls (24HR) and one FFQ in 2010. After adjusting the data using the MSM procedure, the true rate of non-consumption for all food groups was less than 1% except for the beans group. The median Spearman correlation coefficients against FFQ of the mean of 2-d 24HRs data and the MSM-adjusted data were 0.20 (range: 0.11 to 0.40) and 0.35 (range: 0.14 to 0.60), respectively. The weighted kappa values against FFQ ranged from 0.08 to 0.25 for the mean of 2-d 24HRs data and from 0.10 to 0.41 for the MSM-adjusted data. For most food groups, the MSM-adjusted data showed relatively stronger correlations against FFQ than raw 2-d 24HRs data, from 0.03 (beverages) to 0.34 (mushrooms). The results of this study indicated that the application of the MSM, which provided a better estimate of the usual intake, could be worth considering in FFQ validation studies among Korean children. PMID:22938437

  10. The probability of tropical cyclone landfalls in Western North Pacific

    NASA Astrophysics Data System (ADS)

    Bonazzi, A.; Bellone, E.; Khare, S.

    2012-04-01

    The Western North Pacific (WNP) is the most active basin in terms of tropical cyclone and typhoon occurrences. The densely populated countries that form the western boundary of the WNP basin -- e.g. China, Japan and the Philippines -- are exposed to extreme wind gusts, storm surge and fresh-water flooding triggered by tropical cyclone (TC) events. Event-based catastrophe models (hereafter cat models) are extensively used by the insurance industry to manage their exposure against low-frequency/high-consequence events such as natural catastrophes. Cat models provide their users with a realistic set of stochastic events that expands the scope of a historical catalogue. Confidence in a cat model's ability to extrapolate peril and loss statistics beyond the period covered by observational data requires good agreement between stochastic and historical peril characteristics at shorter return periods. In the WNP, risk management practitioners are faced with highly uncertain data on which to base their decisions. Although four national agencies maintain best-track catalogues, the data are generally based on satellite imagery with very limited central pressure (CP) and maximum velocity (VMAX) measurements -- regular flight reconnaissance missions stopped in 1987. As a result, differences of up to 20 knots are found in estimates of VMAX from different agencies, as documented in experiment IOP-10 during Typhoon Megi in 2010. In this work we present a comprehensive analysis of CP and VMAX probability distributions at landfall across the WNP basin along a set of 150 gates (100 km coast segments), based on best-track catalogues from the Japan Meteorological Agency, the Joint Typhoon Warning Center, the China Meteorological Agency and the Hong Kong Observatory. Landfall distributions are then used to calibrate a random-walk statistical track model. A long simulation of 100,000 years of statistical TC tracks will ultimately constitute the central building block of a basin-wide stochastic catalogue of synthetic TC

  11. Non-local crime density estimation incorporating housing information

    PubMed Central

    Woodworth, J. T.; Mohler, G. O.; Bertozzi, A. L.; Brantingham, P. J.

    2014-01-01

    Given a discrete sample of event locations, we wish to produce a probability density that models the relative probability of events occurring in a spatial domain. Standard density estimation techniques do not incorporate priors informed by spatial data. Such methods can result in assigning significant positive probability to locations where events cannot realistically occur. In particular, when modelling residential burglaries, standard density estimation can predict residential burglaries occurring where there are no residences. Incorporating the spatial data can inform the valid region for the density. When modelling very few events, additional priors can help to correctly fill in the gaps. Learning and enforcing correlation between spatial data and event data can yield better estimates from fewer events. We propose a non-local version of maximum penalized likelihood estimation based on the H1 Sobolev seminorm regularizer that computes non-local weights from spatial data to obtain more spatially accurate density estimates. We evaluate this method in application to a residential burglary dataset from San Fernando Valley with the non-local weights informed by housing data or a satellite image. PMID:25288817
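
    For orientation, a standard (local) form of maximum penalized likelihood estimation with an H1 seminorm penalty can be written in LaTeX as below. This is a generic textbook form rather than the paper's exact functional; the non-local version described above replaces the local gradient with weighted non-local differences whose weights w(x, y) are computed from the housing or satellite data.

      \hat{u} \;=\; \arg\max_{u \ge 0,\ \int_\Omega u = 1}\;
      \sum_{i=1}^{n} \log u(x_i) \;-\; \lambda \int_\Omega \bigl|\nabla \sqrt{u(x)}\bigr|^{2}\, dx ,

      with the non-local variant replacing \; |\nabla v(x)|^{2} \; by \; \int_\Omega \bigl(v(y) - v(x)\bigr)^{2}\, w(x, y)\, dy .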

  12. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  13. Review of Literature for Model Assisted Probability of Detection

    SciTech Connect

    Meyer, Ryan M.; Crawford, Susan L.; Lareau, John P.; Anderson, Michael T.

    2014-09-30

    This is a draft technical letter report for an NRC client documenting a literature review of model assisted probability of detection (MAPOD) for potential application to nuclear power plant components, with the aim of improving field NDE performance estimations.

  14. On the Provenance of Judgments of Conditional Probability

    ERIC Educational Resources Information Center

    Zhao, Jiaying; Shah, Anuj; Osherson, Daniel

    2009-01-01

    In standard treatments of probability, Pr(A|B) is defined as the ratio of Pr(A∩B) to Pr(B), provided that Pr(B) > 0. This account of conditional probability suggests a psychological question, namely, whether estimates of Pr(A|B) arise in the mind via implicit calculation of…

  15. The Probabilities of Conditionals Revisited

    ERIC Educational Resources Information Center

    Douven, Igor; Verbrugge, Sara

    2013-01-01

    According to what is now commonly referred to as "the Equation" in the literature on indicative conditionals, the probability of any indicative conditional equals the probability of its consequent given its antecedent. Philosophers widely agree in their assessment that the triviality arguments of…

  16. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied, because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
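
    A small script can make the first information level concrete: rounding each of three state probabilities independently to the nearest tenth yields triples whose sums can be 0.9, 1.0, or 1.1, as stated above. This sketch is purely illustrative and uses a 0.01 grid of true probabilities as an assumption.

      from itertools import product

      # Enumerate probability triples on a 0.01 grid, round each component to the
      # nearest tenth independently, and record the possible sums of the rounded
      # triples.  Illustrates the 0.9 / 1.0 / 1.1 cases described in the abstract.
      grid = [i / 100 for i in range(101)]
      sums = set()
      for p1, p2 in product(grid, repeat=2):
          p3 = round(1.0 - p1 - p2, 2)
          if p3 < 0:
              continue
          rounded = [round(p, 1) for p in (p1, p2, p3)]
          sums.add(round(sum(rounded), 1))

      print(sorted(sums))   # expected: [0.9, 1.0, 1.1]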

  17. Evolution of migration rate in a spatially realistic metapopulation model.

    PubMed

    Heino, M; Hanski, I

    2001-05-01

    We use an individual-based, spatially realistic metapopulation model to study the evolution of migration rate. We first explore the consequences of habitat change in hypothetical patch networks on a regular lattice. If the primary consequence of habitat change is an increase in local extinction risk as a result of decreased local population sizes, migration rate increases. A nonmonotonic response, with migration rate decreasing at high extinction rate, was obtained only by assuming very frequent catastrophes. If the quality of the matrix habitat deteriorates, leading to increased mortality during migration, the evolutionary response is more complex. As long as habitat patch occupancy does not decrease markedly with increased migration mortality, reduced migration rate evolves. However, once mortality becomes so high that empty patches remain uncolonized for a long time, evolution tends to increase migration rate, which may lead to an "evolutionary rescue" in a fragmented landscape. Kin competition has a quantitative effect on the evolution of migration rate in our model, but these patterns in the evolution of migration rate appear to be primarily caused by spatiotemporal variation in fitness and mortality during migration. We apply the model to real habitat patch networks occupied by two checkerspot butterfly (Melitaea) species, for which sufficient data are available to estimate rigorously most of the model parameters. The model-predicted migration rate is not significantly different from the empirically observed one. Regional variation in patch areas and connectivities leads to regional variation in the optimal migration rate, predictions that can be tested empirically. PMID:18707258

  18. Monte Carlo simulation of scenario probability distributions

    SciTech Connect

    Glaser, R.

    1996-10-23

    Suppose a scenario of interest can be represented as a series of events. A final result R may be viewed then as the intersection of three events, A, B, and C. The probability of the result P(R) in this case is the product P(R) = P(A) P(B|A) P(C|A∩B). An expert may be reluctant to estimate P(R) as a whole yet agree to supply his notions of the component probabilities in the form of prior distributions. Each component prior distribution may be viewed as the stochastic characterization of the expert's uncertainty regarding the true value of the component probability. Mathematically, the component probabilities are treated as independent random variables and P(R) as their product; the induced prior distribution for P(R) is determined which characterizes the expert's uncertainty regarding P(R). It may be both convenient and adequate to approximate the desired distribution by Monte Carlo simulation. Software has been written for this task that allows a variety of component priors that experts with good engineering judgment might feel comfortable with. The priors are mostly based on so-called likelihood classes. The software permits an expert to choose for a given component event probability one of six types of prior distributions, and the expert specifies the parameter value(s) for that prior. Each prior is unimodal. The expert essentially decides where the mode is, how the probability is distributed in the vicinity of the mode, and how rapidly it attenuates away. Limiting and degenerate applications allow the expert to be vague or precise.
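
    A minimal sketch of the idea follows: sample each component probability from its prior, multiply the samples, and summarize the induced distribution of P(R). The Beta priors and their parameters are stand-ins chosen for illustration; the software described above offers six unimodal prior families that are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 100_000

      # Stand-in unimodal priors for the three component probabilities.
      p_a = rng.beta(8, 2, n)      # P(A)
      p_b = rng.beta(5, 5, n)      # P(B | A)
      p_c = rng.beta(2, 8, n)      # P(C | A ∩ B)

      p_r = p_a * p_b * p_c        # induced samples of P(R)

      print("mean P(R):   ", p_r.mean())
      print("5th-95th pct:", np.percentile(p_r, [5, 95]))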

  19. RADAR Realistic Animal Model Series for Dose Assessment

    PubMed Central

    Keenan, Mary A.; Stabin, Michael G.; Segars, William P.; Fernald, Michael J.

    2010-01-01

    Rodent species are widely used in the testing and approval of new radiopharmaceuticals, necessitating murine phantom models. As more therapy applications are being tested in animal models, calculating accurate dose estimates for the animals themselves becomes important to explain and control potential radiation toxicity or treatment efficacy. Historically, stylized and mathematically based models have been used for establishing doses to small animals. Recently, a series of anatomically realistic human phantoms was developed using body models based on nonuniform rational B-splines (NURBS). Realistic digital mouse whole-body (MOBY) and rat whole-body (ROBY) phantoms were developed on the basis of the same NURBS technology and were used in this study to facilitate dose calculations in various species of rodents. Methods Voxel-based versions of scaled MOBY and ROBY models were used with the Vanderbilt multinode computing network (Advanced Computing Center for Research and Education), using geometry and tracking radiation transport codes to calculate specific absorbed fractions (SAFs) with internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in all models. Results The SAF results were compared with values from similar studies found in reference literature. Also, the SAFs were used with standardized decay data to develop dose factors to be used in radiation dose calculations. Representative plots were made of photon and electron SAFs, evaluating the traditional assumption that all electron energy is absorbed in the source organs. Conclusion The organ masses in the MOBY and ROBY models are in reasonable agreement with models presented by other investigators noting that considerable variation can occur between reported masses. Results consistent with those found by other investigators show that absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of

  20. The use and limitation of realistic evaluation as a tool for evidence-based practice: a critical realist perspective.

    PubMed

    Porter, Sam; O'Halloran, Peter

    2012-03-01

    In this paper, we assess realistic evaluation's articulation with evidence-based practice (EBP) from the perspective of critical realism. We argue that the adoption by realistic evaluation of a realist causal ontology means that it is better placed to explain complex healthcare interventions than the traditional method used by EBP, the randomized controlled trial (RCT). However, we do not conclude from this that the use of RCTs is without merit, arguing that it is possible to use both methods in combination under the rubric of realist theory. More negatively, we contend that the rejection of critical theory and utopianism by realistic evaluation in favour of the pragmatism of piecemeal social engineering means that it is vulnerable to accusations that it promotes technocratic interpretations of human problems. We conclude that, insofar as realistic evaluation adheres to the ontology of critical realism, it provides a sound contribution to EBP, but insofar as it rejects the critical turn of Bhaskar's realism, it replicates the technocratic tendencies inherent in EBP. PMID:22212367

  1. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  2. Probabilities and Surprises: A Realist Approach to Identifying Linguistic and Social Patterns, with Reference to an Oral History Corpus

    ERIC Educational Resources Information Center

    Sealey, Alison

    2010-01-01

    The relationship between language and identity has been explored in a number of ways in applied linguistics, and this article focuses on a particular aspect of it: self-representation in the oral history interview. People from a wide range of backgrounds, currently resident in one large city in England, were asked to reflect on their lives as part…

  3. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  4. Transition probabilities of Br II

    NASA Technical Reports Server (NTRS)

    Bengtson, R. D.; Miller, M. H.

    1976-01-01

    Absolute transition probabilities of the three most prominent visible Br II lines are measured in emission. Results compare well with Coulomb approximations and with line strengths extrapolated from trends in homologous atoms.

  5. VESPA: False positive probabilities calculator

    NASA Astrophysics Data System (ADS)

    Morton, Timothy D.

    2015-03-01

    Validation of Exoplanet Signals using a Probabilistic Algorithm (VESPA) calculates false positive probabilities and statistically validates transiting exoplanets. Written in Python, it uses isochrones [ascl:1503.010] and the package simpledist.

  6. Dinosaurs, Dinosaur Eggs, and Probability.

    ERIC Educational Resources Information Center

    Teppo, Anne R.; Hodgson, Ted

    2001-01-01

    Outlines several recommendations for teaching probability in the secondary school. Offers an activity that employs simulation by hand and using a programmable calculator in which geometry, analytical geometry, and discrete mathematics are explored. (KHR)

  7. Electrofishing capture probability of smallmouth bass in streams

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

    Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth abundance accurately. © Copyright by the American Fisheries Society 2007.
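
    A worked numerical sketch of the adjustment step described above follows: the catch is divided by a predicted cumulative capture probability. The per-pass capture probability, number of passes, and catch are made-up values, and the sketch assumes equal, independent passes, whereas the study found capture probability declining across passes for larger fish.

      # Sketch of the abundance adjustment: estimated abundance = catch / p_cum.
      p_per_pass = 0.45          # predicted per-pass capture probability (made up)
      n_passes = 3               # electrofishing passes
      catch = 62                 # fish actually captured over all passes

      # Cumulative probability of being captured at least once in n passes,
      # assuming (for this sketch) equal, independent passes.
      p_cum = 1.0 - (1.0 - p_per_pass) ** n_passes

      n_hat = catch / p_cum      # adjusted abundance estimate
      print(f"cumulative capture probability: {p_cum:.3f}")
      print(f"estimated abundance:            {n_hat:.1f} fish")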

  8. Joint probabilities and quantum cognition

    SciTech Connect

    Acacio de Barros, J.

    2012-12-18

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  9. Joint probabilities and quantum cognition

    NASA Astrophysics Data System (ADS)

    de Barros, J. Acacio

    2012-12-01

    In this paper we discuss the existence of joint probability distributions for quantumlike response computations in the brain. We do so by focusing on a contextual neural-oscillator model shown to reproduce the main features of behavioral stimulus-response theory. We then exhibit a simple example of contextual random variables not having a joint probability distribution, and describe how such variables can be obtained from neural oscillators, but not from a quantum observable algebra.

  10. Effects of realistic tensor force on nuclear structure

    SciTech Connect

    Nakada, H.

    2012-10-20

    First-order tensor-force effects on nuclear structure are investigated in the self-consistent mean-field and RPA calculations with the M3Y-type semi-realistic interactions, which contain the realistic tensor force. The tensor force plays a key role in Z- or N-dependence of the shell structure, and in transitions involving spin degrees-of-freedom. It is demonstrated that the semi-realistic interactions successfully describe the N-dependence of the shell structure in the proton-magic nuclei (e.g. Ca and Sn), and the magnetic transitions (e.g. M1 transition in {sup 208}Pb).

  11. The role of ANS acuity and numeracy for the calibration and the coherence of subjective probability judgments

    PubMed Central

    Winman, Anders; Juslin, Peter; Lindskog, Marcus; Nilsson, Håkan; Kerimi, Neda

    2014-01-01

    The purpose of the study was to investigate how numeracy and acuity of the approximate number system (ANS) relate to the calibration and coherence of probability judgments. Based on the literature on number cognition, a first hypothesis was that those with lower numeracy would maintain a less linear use of the probability scale, contributing to overconfidence and nonlinear calibration curves. A second hypothesis was that also poorer acuity of the ANS would be associated with overconfidence and non-linearity. A third hypothesis, in line with dual-systems theory (e.g., Kahneman and Frederick, 2002) was that people higher in numeracy should have better access to the normative probability rules, allowing them to decrease the rate of conjunction fallacies. Data from 213 participants sampled from the Swedish population showed that: (i) in line with the first hypothesis, overconfidence and the linearity of the calibration curves were related to numeracy, where people higher in numeracy were well calibrated with zero overconfidence. (ii) ANS was not associated with overconfidence and non-linearity, disconfirming the second hypothesis. (iii) The rate of conjunction fallacies was slightly, but to a statistically significant degree decreased by numeracy, but still high at all numeracy levels. An unexpected finding was that participants with better ANS acuity gave more realistic estimates of their performance relative to others. PMID:25140163

  12. Application of ICA to realistically simulated 1H-MRS data

    PubMed Central

    Kalyanam, Ravi; Boutte, David; Hutchison, Kent E; Calhoun, Vince D

    2015-01-01

    Introduction 1H-MRS signals from brain tissues capture information on in vivo brain metabolism and neuronal biomarkers. This study aims to advance the use of independent component analysis (ICA) for spectroscopy data by objectively comparing the performance of ICA and LCModel in analyzing realistic data that mimics many of the known properties of in vivo data. Methods This work identifies key features of in vivo 1H-MRS signals and presents methods to simulate realistic data, using a basis set of 12 metabolites typically found in the human brain. The realistic simulations provide a much needed ground truth to evaluate performances of various MRS analysis methods. ICA is applied to collectively analyze multiple realistic spectra and independent components identified with our generative model to obtain ICA estimates. These same data are also analyzed using LCModel and the comparisons between the ground-truth and the analysis estimates are presented. The study also investigates the potential impact of modeling inaccuracies by incorporating two sets of model resonances in simulations. Results The simulated fid signals incorporating line broadening, noise, and residual water signal closely resemble the in vivo signals. Simulation analyses show that the resolution performances of both LCModel and ICA are not consistent across metabolites and that while ICA resolution can be improved for certain resonances, ICA is as effective as, or better than, LCModel in resolving most model resonances. Conclusion The results show that ICA can be an effective tool in comparing multiple spectra and complements existing approaches for providing quantified estimates. PMID:26221570
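
    The toy sketch below shows the general pattern only (simulate spectra as mixtures of known resonance shapes plus noise and a broad baseline, then recover components with ICA); it is not the authors' pipeline. The peak positions, line widths, baseline, and the use of scikit-learn's FastICA are all illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(2)
      ppm = np.linspace(0.5, 4.5, 512)                 # chemical-shift axis

      def peak(center, width):
          """Gaussian stand-in for a metabolite resonance line."""
          return np.exp(-0.5 * ((ppm - center) / width) ** 2)

      # Three toy "metabolite" basis spectra (positions/widths are illustrative).
      basis = np.vstack([peak(2.0, 0.05), peak(3.0, 0.06), peak(3.2, 0.05)])

      # Simulate 100 subjects: random nonnegative concentrations, plus noise and
      # a broad hump mimicking residual water / baseline contributions.
      conc = rng.gamma(shape=2.0, scale=1.0, size=(100, 3))
      baseline = 0.3 * peak(4.4, 1.0)
      spectra = conc @ basis + baseline + 0.02 * rng.standard_normal((100, ppm.size))

      # ICA across subjects: rows are observations (spectra), columns are bins.
      ica = FastICA(n_components=3, random_state=0, max_iter=1000)
      scores = ica.fit_transform(spectra)              # per-subject component weights
      components = ica.components_                     # estimated spectral sources
      print(scores.shape, components.shape)            # (100, 3) (3, 512)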

  13. Joint probability distributions for projection probabilities of random orthonormal states

    NASA Astrophysics Data System (ADS)

    Alonso, L.; Gorin, T.

    2016-04-01

    The quantum chaos conjecture applied to a finite dimensional quantum system implies that such a system has eigenstates that show similar statistical properties as the column vectors of random orthogonal or unitary matrices. Here, we consider the different probabilities for obtaining a specific outcome in a projective measurement, provided the system is in one of its eigenstates. We then give analytic expressions for the joint probability density for these probabilities, with respect to the ensemble of random matrices. In the case of the unitary group, our results can be applied, also, to the phenomenon of universal conductance fluctuations, where the same mathematical quantities describe partial conductances in a two-terminal mesoscopic scattering problem with a finite number of modes in each terminal.

  14. From data to probability densities without histograms

    NASA Astrophysics Data System (ADS)

    Berg, Bernd A.; Harris, Robert C.

    2008-09-01

    When one deals with data drawn from continuous variables, a histogram is often inadequate to display their probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density. Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method, which overcomes this problem. Error bars on the estimated probability density are calculated using a jackknife method. We give several examples and provide computer code reproducing them. You may want to look at the corresponding figures 4 to 9 first. Program summary: Program title: cdf_to_pd Catalogue identifier: AEBC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 2758 No. of bytes in distributed program, including test data, etc.: 18 594 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any capable of compiling and executing Fortran code Operating system: Any capable of compiling and executing Fortran code Classification: 4.14, 9 Nature of problem: When one deals with data drawn from continuous variables, a histogram is often inadequate to display the probability density. It deals inefficiently with statistical noise, and binsizes are free parameters. In contrast to that, the empirical cumulative distribution function (obtained after sorting the data) is parameter free. But it is a step function, so that its differentiation does not give a smooth probability density. Solution method: Based on Fourier series expansion and Kolmogorov tests, we introduce a simple method, which
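
    A simplified Python illustration of the idea follows (the published program is Fortran 77 and is not reproduced here): sort the data to obtain the parameter-free empirical CDF, expand its difference from a reference CDF in a short Fourier sine series, and differentiate the smooth fit to obtain a density. The number of terms is fixed by hand in this sketch, whereas the paper selects it with Kolmogorov tests and attaches jackknife error bars.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.sort(rng.normal(0.0, 1.0, 1000))          # data, sorted for the ECDF
      n = x.size
      F_emp = (np.arange(1, n + 1) - 0.5) / n          # empirical CDF at the data

      # Map data to [0, 1] and expand F_emp(t) - t in a sine series, which
      # vanishes at both endpoints by construction.
      t = (x - x[0]) / (x[-1] - x[0])
      resid = F_emp - t
      m_terms = 8                                      # chosen by hand in this sketch
      k = np.arange(1, m_terms + 1)
      coeff = 2.0 * np.trapz(resid[None, :] * np.sin(np.pi * k[:, None] * t[None, :]),
                             t, axis=1)

      # Smooth CDF and its derivative (the estimated density) on a fine grid.
      tg = np.linspace(0.0, 1.0, 400)
      F_smooth = tg + np.sum(coeff[:, None] * np.sin(np.pi * k[:, None] * tg[None, :]), axis=0)
      f_smooth = 1.0 + np.sum(coeff[:, None] * np.pi * k[:, None]
                              * np.cos(np.pi * k[:, None] * tg[None, :]), axis=0)
      f_smooth /= (x[-1] - x[0])                       # back to the original data scale
      xg = x[0] + tg * (x[-1] - x[0])
      print(xg[np.argmax(f_smooth)])                   # density peak, near 0 for this sample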

  15. Toward a realistic low-field SSC lattice

    SciTech Connect

    Heifets, S.

    1985-10-01

    Three six-fold lattices for a 3 T superferric SSC have been generated at TAC. A program based on the first-order canonical transformation was used to compare lattices. On this basis, realistic race-track lattices were generated.

  16. Student Work Experience: A Realistic Approach to Merchandising Education.

    ERIC Educational Resources Information Center

    Horridge, Patricia; And Others

    1980-01-01

    Relevant and realistic experiences are needed to prepare the student for a future career. Addresses the results of a survey of colleges and universities in the United States in regard to their student work experience (SWE) in fashion merchandising. (Author)

  17. Estimating Controller Intervention Probabilities for Optimized Profile Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.; Erzberger, Heinz; Huynh, Phu V.

    2011-01-01

    Simulations of arrival traffic at Dallas/Fort-Worth and Denver airports were conducted to evaluate incorporating scheduling and separation constraints into advisories that define continuous descent approaches. The goal was to reduce the number of controller interventions required to ensure flights maintain minimum separation distances of 5 nmi horizontally and 1000 ft vertically. It was shown that simply incorporating arrival meter fix crossing-time constraints into the advisory generation could eliminate over half of all predicted separation violations and more than 80% of the predicted violations between two arrival flights. Predicted separation violations between arrivals and non-arrivals were 32% of all predicted separation violations at Denver and 41% at Dallas/Fort-Worth. A probabilistic analysis of meter fix crossing-time errors is included which shows that some controller interventions will still be required even when the predicted crossing-times of the advisories are set to add a 1 or 2 nmi buffer above the minimum in-trail separation of 5 nmi. The 2 nmi buffer was shown to increase average flight delays by up to 30 sec when compared to the 1 nmi buffer, but it only resulted in a maximum decrease in average arrival throughput of one flight per hour.

  18. Realistic three-dimensional radiative transfer simulations of observed precipitation

    NASA Astrophysics Data System (ADS)

    Adams, I. S.; Bettenhausen, M. H.

    2013-12-01

    Remote sensing observations of precipitation typically utilize a number of instruments on various platforms. Ground validation campaigns incorporate ground-based and airborne measurements to characterize and study precipitating clouds, while the precipitation measurement constellation envisioned by the Global Precipitation Measurement (GPM) mission includes measurements from differing space-borne instruments. In addition to disparities such as frequency channel selection and bandwidth, measurement geometry and resolution differences between observing platforms result in inherent inconsistencies between data products. In order to harmonize measurements from multiple passive radiometers, a framework is required that addresses these differences. To accomplish this, we have implemented a flexible three-dimensional radiative transfer model. At its core, the radiative transfer model uses the Atmospheric Radiative Transfer Simulator (ARTS) version 2 to solve the radiative transfer equation in three dimensions using Monte Carlo integration. Gaseous absorption is computed with MonoRTM and formatted into look-up tables for rapid processing. Likewise, scattering properties are pre-computed using a number of publicly available codes, such as T-Matrix and DDSCAT. If necessary, a melting layer model can be applied to the input profiles. Gaussian antenna beams estimate the spatial resolutions of the passive measurements, and realistic bandpass characteristics can be included to properly account for the spectral response of the simulated instrument. This work presents three-dimensional simulations of WindSat brightness temperatures for an oceanic rain event sampled by the Tropical Rainfall Measuring Mission (TRMM) satellite. The 2B-31 combined Precipitation Radar / TRMM Microwave Imager (TMI) retrievals provide profiles that are the input to the radiative transfer model. TMI brightness temperatures are also simulated. Comparisons between monochromatic, pencil beam simulations and

  19. Snell Envelope with Small Probability Criteria

    SciTech Connect

    Del Moral, Pierre Hu, Peng; Oudjane, Nadia

    2012-12-15

    We present a new algorithm to compute the Snell envelope in the specific case where the criterion to optimize is associated with a small probability or a rare event. This new approach combines the Stochastic Mesh approach of Broadie and Glasserman with a particle approximation scheme based on a specific change of measure designed to concentrate the computational effort in regions pointed out by the criterion. The theoretical analysis of this new algorithm provides non-asymptotic convergence estimates. Finally, the numerical tests confirm the practical interest of this approach.

  20. Measure and probability in cosmology

    NASA Astrophysics Data System (ADS)

    Schiffrin, Joshua S.; Wald, Robert M.

    2012-07-01

    General relativity has a Hamiltonian formulation, which formally provides a canonical (Liouville) measure on the space of solutions. In ordinary statistical physics, the Liouville measure is used to compute probabilities of macrostates, and it would seem natural to use the similar measure arising in general relativity to compute probabilities in cosmology, such as the probability that the Universe underwent an era of inflation. Indeed, a number of authors have used the restriction of this measure to the space of homogeneous and isotropic universes with scalar field matter (minisuperspace)—namely, the Gibbons-Hawking-Stewart measure—to make arguments about the likelihood of inflation. We argue here that there are at least four major difficulties with using the measure of general relativity to make probability arguments in cosmology: (1) Equilibration does not occur on cosmological length scales. (2) Even in the minisuperspace case, the measure of phase space is infinite and the computation of probabilities depends very strongly on how the infinity is regulated. (3) The inhomogeneous degrees of freedom must be taken into account (we illustrate how) even if one is interested only in universes that are very nearly homogeneous. The measure depends upon how the infinite number of degrees of freedom are truncated, and how one defines “nearly homogeneous.” (4) In a Universe where the second law of thermodynamics holds, one cannot make use of our knowledge of the present state of the Universe to retrodict the likelihood of past conditions.

  1. Interference of probabilities in dynamics

    SciTech Connect

    Zak, Michail

    2014-08-15

    A new class of dynamical systems with a preset type of interference of probabilities is introduced. It is obtained from the extension of the Madelung equation by replacing the quantum potential with a specially selected feedback from the Liouville equation. It has been proved that these systems are different from both Newtonian and quantum systems, but they can be useful for modeling spontaneous collective novelty phenomena when emerging outputs are qualitatively different from the weighted sum of individual inputs. Formation of language and fast decision-making process as potential applications of the probability interference is discussed.
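
    For context only, the standard Madelung (hydrodynamic) form of the single-particle Schrödinger equation is reproduced below in LaTeX; per the abstract, the systems studied here replace the quantum potential Q with a feedback term computed from the Liouville (continuity) equation, whose specific form is given in the paper and is not reproduced here.

      \frac{\partial \rho}{\partial t} + \nabla \cdot \Bigl(\rho\,\frac{\nabla S}{m}\Bigr) = 0 , \qquad
      \frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + V + Q = 0 , \qquad
      Q = -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2}\sqrt{\rho}}{\sqrt{\rho}} .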

  2. Probability as a Physical Motive

    NASA Astrophysics Data System (ADS)

    Martin, Peter

    2007-06-01

    Recent theoretical progress in nonequilibrium thermodynamics, linking the physical principle of Maximum Entropy Production (“MEP”) to the information-theoretical “MaxEnt” principle of scientific inference, together with conjectures from theoretical physics that there may be no fundamental causal laws but only probabilities for physical processes, and from evolutionary theory that biological systems expand “the adjacent possible” as rapidly as possible, all lend credence to the proposition that probability should be recognized as a fundamental physical motive. It is further proposed that spatial order and temporal order are two aspects of the same thing, and that this is the essence of the second law of thermodynamics.

  3. Knowledge typology for imprecise probabilities.

    SciTech Connect

    Wilson, G. D.; Zucker, L. J.

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.
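
    As a concrete example of one of the non-probabilistic formalisms mentioned, the sketch below applies Dempster's rule of combination to two basic probability assignments over a tiny frame of discernment. The expert masses are invented for illustration and do not come from the elicitation work described.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozensets to
    masses) with Dempster's rule; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical experts assessing whether a subsystem is 'ok' or 'fail'
m_expert1 = {frozenset({"ok"}): 0.6, frozenset({"ok", "fail"}): 0.4}
m_expert2 = {frozenset({"fail"}): 0.3, frozenset({"ok", "fail"}): 0.7}
print(dempster_combine(m_expert1, m_expert2))
```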

  4. Uncertainty estimations for seismic source inversions

    NASA Astrophysics Data System (ADS)

    Duputel, Zacharie; Rivera, Luis; Fukahata, Yukitoshi; Kanamori, Hiroo

    2012-08-01

    Source inversion is a widely used practice in seismology. Magnitudes, moment tensors, and slip distributions are now routinely calculated and disseminated whenever an earthquake occurs. The accuracy of such models depends on many aspects, such as the event magnitude, the data coverage and the data quality (instrument response, isolation, timing, etc.). Here, as in any observational problem, the error estimation should be part of the solution. It is however very rare to find a source inversion algorithm which includes realistic error analyses, and the solutions are often given without any estimates of uncertainties. Our goal here is to stress the importance of such estimation and to explore different techniques aimed at achieving such analyses. In this perspective, we use the W phase source inversion algorithm recently developed to provide fast CMT estimations for large earthquakes. We focus in particular on the linear inverse problem of estimating the moment tensor components at a given source location. We assume that the initial probability densities can be modelled by Gaussian distributions. Formally, we can separate two sources of error which generally contribute to the model parameter uncertainties. The first source of uncertainty is the error introduced by the more or less imperfect data. This is carried by the covariance matrix for the data (Cd). The second source of uncertainty, often overlooked, is associated with modelling error or mismodelling. This is represented by the covariance matrix on the theory, CT. Among the different sources of mismodelling, we focus here on the modelling error associated with the mislocation of the centroid position. Both Cd and CT describe probability densities in the data space and it is well known that it is in fact CD = Cd + CT that should be included in the error propagation process. In source inversion problems, as in many other fields of geophysics, the data covariance (CD) is often considered as diagonal or even proportional
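
    The role of the combined covariance CD = Cd + CT can be illustrated with a toy Gaussian linear inverse problem, sketched below. The design matrix, data, and covariances are synthetic placeholders rather than W-phase quantities; only the algebra (maximum-likelihood solution and posterior model covariance) follows the standard formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear forward problem d = G m + noise (stand-in for the
# W-phase design matrix and the six moment-tensor components).
n_data, n_model = 50, 6
G = rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)

Cd = 0.05 * np.eye(n_data)                       # data (measurement) covariance
# Simple stand-in for the theory covariance CT: correlated mismodelling error
lags = np.abs(np.subtract.outer(np.arange(n_data), np.arange(n_data)))
CT = 0.10 * np.exp(-lags / 5.0)
CD = Cd + CT                                     # total covariance used in the inversion

d = G @ m_true + rng.multivariate_normal(np.zeros(n_data), CD)

# Gaussian maximum-likelihood solution and posterior model covariance
CD_inv = np.linalg.inv(CD)
Cm_post = np.linalg.inv(G.T @ CD_inv @ G)        # model parameter covariance
m_hat = Cm_post @ G.T @ CD_inv @ d
sigma_m = np.sqrt(np.diag(Cm_post))              # 1-sigma parameter uncertainties
print(m_hat, sigma_m)
```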

  5. The visualizable, the representable and the inconceivable: realist and non-realist mathematical models in physics and beyond.

    PubMed

    Plotnitsky, Arkady

    2016-01-13

    The project of this article is twofold. First, it aims to offer a new perspective on, and a new argument concerning, realist and non-realist mathematical models, and differences and affinities between them, using physics as a paradigmatic field of mathematical modelling in science. Most of the article is devoted to this topic. Second, the article aims to explore the implications of this argument for mathematical modelling in other fields, in particular in cognitive psychology and economics. PMID:26621990

  6. Estimating Thermoelectric Water Use

    NASA Astrophysics Data System (ADS)

    Hutson, S. S.

    2012-12-01

    In 2009, the Government Accountability Office recommended that the U.S. Geological Survey (USGS) and Department of Energy-Energy Information Administration (DOE-EIA) jointly improve their thermoelectric water-use estimates. Since then, the annual mandatory reporting forms returned by powerplant operators to DOE-EIA have been revised twice to improve the water data. At the same time, the USGS began improving estimation of withdrawal and consumption. Because of the variation in amount and quality of water-use data across powerplants, the USGS adopted a hierarchy of methods for estimating water withdrawal and consumptive use for the approximately 1,300 water-using powerplants in the thermoelectric sector. About 800 of these powerplants have generation and cooling data, and the remaining 500 have generation data only, or sparse data. The preferred method is to accept DOE-EIA data following validation. This is the traditional USGS method and the best method if all operators follow best practices for measurement and reporting. However, in 2010, fewer than 200 powerplants reported thermodynamically realistic values of both withdrawal and consumption. Secondly, water use was estimated using linked heat and water budgets for the first group of 800 plants, and for some of the other 500 powerplants where data were sufficient for at least partial modeling using plant characteristics, electric generation, and fuel use. Thermodynamics, environmental conditions, and characteristics of the plant and cooling system constrain both the amount of heat discharged to the environment and the share of this heat that drives evaporation. Heat and water budgets were used to define reasonable estimates of withdrawal and consumption, including likely upper and lower thermodynamic limits. These results were used to validate the reported values at the 800 plants with water-use data, and reported values were replaced by budget estimates at most of these plants. Thirdly, at plants without valid
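
    A deliberately simplified heat-budget calculation of the kind alluded to above is sketched below for a hypothetical wet-cooled plant. The stack-loss fraction, evaporative share, and latent-heat value are illustrative assumptions, not USGS or DOE-EIA coefficients.

```python
# Toy linked heat-and-water budget for one month at a hypothetical wet-cooled plant.
fuel_heat_mwh   = 500_000.0     # heat input from fuel (thermal MWh), assumed
generation_mwh  = 180_000.0     # net electric generation (MWh), assumed
stack_loss_frac = 0.10          # share of fuel heat lost up the stack (assumed)
evap_share      = 0.85          # share of condenser heat removed by evaporation (assumed)
latent_heat_mj_per_m3 = 2257.0  # ~2.257 MJ/kg x 1000 kg/m3 of evaporated water

# Heat rejected to the cooling system = fuel heat minus stack losses minus generation
heat_to_cooling_mwh = fuel_heat_mwh * (1 - stack_loss_frac) - generation_mwh
heat_to_cooling_mj = heat_to_cooling_mwh * 3600.0          # 1 MWh = 3600 MJ

# Evaporative (consumptive) water use implied by the heat budget
consumption_m3 = evap_share * heat_to_cooling_mj / latent_heat_mj_per_m3
print(f"Estimated consumptive use: {consumption_m3:,.0f} m3/month")
```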

  7. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
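
    The sketch below shows a textbook Wald sequential probability ratio test whose thresholds follow directly from chosen false-alarm and missed-detection risks. The Gaussian miss-distance likelihoods and all numbers are simplifying assumptions of this illustration; the document's actual formulation uses a bank of constrained epoch-state Kalman filters rather than i.i.d. observations.

```python
import math

def sprt_decision(samples, mu0, mu1, sigma, alpha=0.001, beta=0.01):
    """Wald SPRT between H0: mean mu0 (safe) and H1: mean mu1 (conjunction)
    for i.i.d. Gaussian observations with known sigma.
    alpha = false-alarm risk, beta = missed-detection risk."""
    upper = math.log((1 - beta) / alpha)   # accept H1 (maneuver) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (no maneuver) below this
    llr = 0.0
    for k, x in enumerate(samples, start=1):
        # log-likelihood ratio increment for a single Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "maneuver", k
        if llr <= lower:
            return "no maneuver", k
    return "continue tracking", len(samples)

# Example: predicted miss distances (km) drifting toward the hard-body radius
obs = [1.9, 1.6, 1.4, 1.1, 0.9, 0.7]
print(sprt_decision(obs, mu0=2.0, mu1=0.5, sigma=0.4))
```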

  8. The Probability Distribution of Daily Streamflow

    NASA Astrophysics Data System (ADS)

    Blum, A.; Vogel, R. M.

    2015-12-01

    Flow duration curves (FDCs) are a graphical illustration of the cumulative distribution of streamflow. Daily streamflows often range over many orders of magnitude, making it extremely challenging to find a probability distribution function (pdf) which can mimic the steady-state or period-of-record FDC (POR-FDC). Median annual FDCs (MA-FDCs) describe the pdf of daily streamflow in a typical year. For POR- and MA-FDCs, L-moment diagrams, visual assessments of FDCs and Quantile-Quantile probability plot correlation coefficients are used to evaluate goodness of fit (GOF) of candidate probability distributions. FDCs reveal that both four-parameter kappa (KAP) and three-parameter generalized Pareto (GP3) models result in very high GOF for the MA-FDC and a relatively lower GOF for POR-FDCs at over 500 rivers across the coterminous U.S. Physical basin characteristics, such as baseflow index, as well as hydroclimatic indices such as the aridity index and the runoff ratio, are found to be correlated with one of the shape parameters (kappa) of the KAP and GP3 pdfs. Our work also reveals several important areas for future research, including improved parameter estimators for the KAP pdf, as well as increasing our understanding of the conditions which give rise to improved GOF of analytical pdfs to large samples of daily streamflows.
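
    A minimal sketch of the two ingredients discussed, an empirical flow duration curve and a fitted three-parameter generalized Pareto distribution, is given below. It uses synthetic flows and scipy's maximum-likelihood fit; the study itself relies on L-moment diagrams and probability-plot correlation coefficients, so this is only an illustration of the general workflow.

```python
import numpy as np
from scipy import stats

# Synthetic "daily flows" spanning several orders of magnitude (10 years)
flows = stats.lognorm.rvs(s=1.2, scale=30.0, size=3650, random_state=42)

# Empirical flow duration curve: exceedance probability vs. sorted flow
q_sorted = np.sort(flows)[::-1]
n = q_sorted.size
exceedance = np.arange(1, n + 1) / (n + 1)       # Weibull plotting positions

# Fit a 3-parameter generalized Pareto (shape, location, scale) by MLE
shape, loc, scale = stats.genpareto.fit(flows)
q_model = stats.genpareto.ppf(1 - exceedance, shape, loc=loc, scale=scale)

# Crude goodness-of-fit summary: correlation of empirical vs. fitted quantiles
print(np.corrcoef(q_sorted, q_model)[0, 1])
```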

  9. Estimation of density of mongooses with capture-recapture and distance sampling

    USGS Publications Warehouse

    Corn, J.L.; Conroy, M.J.

    1998-01-01

    We captured mongooses (Herpestes javanicus) in live traps arranged in trapping webs in Antigua, West Indies, and used capture-recapture and distance sampling to estimate density. Distance estimation and program DISTANCE were used to provide estimates of density from the trapping-web data. Mean density based on trapping webs was 9.5 mongooses/ha (range, 5.9-10.2/ha); estimates had coefficients of variation ranging from 29.82-31.58% (x̄ = 30.46%). Mark-recapture models were used to estimate abundance, which was converted to density using estimates of effective trap area. Tests of model assumptions provided by CAPTURE indicated pronounced heterogeneity in capture probabilities and some indication of behavioral response and variation over time. Mean estimated density was 1.80 mongooses/ha (range, 1.37-2.15/ha) with estimated coefficients of variation of 4.68-11.92% (x̄ = 7.46%). Estimates of density based on mark-recapture data depended heavily on assumptions about animal home ranges; variances of densities also may be underestimated, leading to unrealistically narrow confidence intervals. Estimates based on trap webs require fewer assumptions, and estimated variances may be a more realistic representation of sampling variation. Because trap webs are established easily and provide adequate data for estimation in a few sample occasions, the method should be efficient and reliable for estimating densities of mongooses.
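
    To make the abundance-to-density conversion concrete, the sketch below applies Chapman's bias-corrected Lincoln-Petersen estimator to invented two-occasion capture data and divides by an effective trap area that includes an assumed boundary strip. The actual study used programs CAPTURE and DISTANCE, so this is only a schematic of how home-range assumptions enter the density estimate.

```python
import math

def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate with its
    approximate variance (n1 marked, n2 captured on occasion 2, m2 recaptures)."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var

# Invented capture data: 40 marked on occasion 1, 35 caught on occasion 2, 22 recaptures
n_hat, var = lincoln_petersen(40, 35, 22)

# Effective trap area: grid area plus a boundary strip of assumed width
grid_side_m = 300.0          # assumed 300 m x 300 m trapping grid
strip_m = 60.0               # assumed boundary-strip width (half mean home-range width)
effective_area_ha = (grid_side_m + 2 * strip_m) ** 2 / 10_000.0

density = n_hat / effective_area_ha
cv = math.sqrt(var) / n_hat
print(f"N = {n_hat:.1f}, density = {density:.2f}/ha, CV = {100 * cv:.1f}%")
```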

  10. Stretching Probability Explorations with Geoboards

    ERIC Educational Resources Information Center

    Wheeler, Ann; Champion, Joe

    2016-01-01

    Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

  11. Probability Simulation in Middle School.

    ERIC Educational Resources Information Center

    Lappan, Glenda; Winter, M. J.

    1980-01-01

    Two simulations designed to teach probability to middle-school-age pupils are presented. The first simulates the one-on-one foul-shot situation in basketball; the second deals with collecting a set of six cereal box prizes by buying boxes containing one toy each. (MP)
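
    The article describes its simulations only in general terms. Reading the foul-shot activity as the standard one-and-one bonus rule (a second free throw is attempted only if the first is made), a minimal Monte Carlo version might look like the sketch below; the 60% shooting percentage is chosen purely for illustration.

```python
import random
from collections import Counter

def one_and_one_points(p_make):
    """Points scored in a one-and-one trip: the second free throw is
    attempted only if the first one is made."""
    if random.random() > p_make:
        return 0                                 # miss the front end: 0 points
    return 1 + (random.random() <= p_make)       # 1 or 2 points

def simulate(p_make=0.6, trials=100_000):
    counts = Counter(one_and_one_points(p_make) for _ in range(trials))
    return {pts: counts[pts] / trials for pts in (0, 1, 2)}

# A 60% shooter scores 0 points more often than 2 (0.40 vs. 0.36), which is the
# counter-intuitive result a classroom simulation like this is designed to reveal.
print(simulate(0.6))
```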

  12. Some Surprising Probabilities from Bingo.

    ERIC Educational Resources Information Center

    Mercer, Joseph O.

    1993-01-01

    Investigates the probability of winning the largest prize at Bingo through a series of five simpler problems. Investigations are conducted with the aid of either BASIC computer programs, spreadsheets, or a computer algebra system such as Mathematica. Provides sample data tables to illustrate findings. (MDH)
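
    The investigations in the article use BASIC programs, spreadsheets, or Mathematica; an equivalent Monte Carlo estimate of how many calls a single standard 75-ball card needs before completing a line could be sketched in Python as follows. The card layout and trial count are assumptions of this illustration, not the article's own problems.

```python
import random

def random_card():
    """Standard 75-ball bingo card: columns drawn from 1-15, 16-30, ..., 61-75,
    with a free centre square (represented as 0)."""
    cols = [random.sample(range(15 * c + 1, 15 * c + 16), 5) for c in range(5)]
    cols[2][2] = 0                      # free space
    return cols                         # card[col][row]

def calls_to_win(card):
    """Number of calls until any row, column, or diagonal is fully marked."""
    lines = ([set(card[c]) for c in range(5)] +                    # columns
             [{card[c][r] for c in range(5)} for r in range(5)] +  # rows
             [{card[i][i] for i in range(5)},                      # diagonals
              {card[i][4 - i] for i in range(5)}])
    marked = {0}                        # the free space is always marked
    for n_calls, ball in enumerate(random.sample(range(1, 76), 75), start=1):
        marked.add(ball)
        if any(line <= marked for line in lines):
            return n_calls
    return 75

trials = 20_000
mean_calls = sum(calls_to_win(random_card()) for _ in range(trials)) / trials
print(f"average calls for a single card to bingo: {mean_calls:.1f}")
```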

  13. GPS: Geometry, Probability, and Statistics

    ERIC Educational Resources Information Center

    Field, Mike

    2012-01-01

    It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…

  14. Conditional Independence in Applied Probability.

    ERIC Educational Resources Information Center

    Pfeiffer, Paul E.

    This material assumes the user has the background provided by a good undergraduate course in applied probability. It is felt that introductory courses in calculus, linear algebra, and perhaps some differential equations should provide the requisite experience and proficiency with mathematical concepts, notation, and argument. The document is…

  15. Earthquake probabilities: theoretical assessments and reality

    NASA Astrophysics Data System (ADS)

    Kossobokov, V. G.

    2013-12-01

    It is common knowledge that earthquakes are complex phenomena whose classification and sizing remain serious problems of contemporary seismology. In general, their frequency-magnitude distributions exhibit power-law scaling. This scaling differs significantly when different time and/or space domains are considered. At the scale of a particular earthquake rupture zone, the frequency of similar-size events is usually estimated to be about once in several hundred years. Evidently, contemporary seismology does not possess enough reported instrumental data for any reliable quantification of earthquake probability at the location of an expected event. Regretfully, most state-of-the-art theoretical approaches to assessing the probability of seismic events are based on trivial (e.g. Poisson, periodic, etc.) or, conversely, delicately designed (e.g. STEP, ETAS, etc.) models of earthquake sequences. Some of these models are evidently erroneous, some can be rejected by the existing statistics, and some are hardly testable in our lifetime. Nevertheless, such probabilistic accounts, including seismic hazard assessment and earthquake forecasting, when used in practice eventually lead to scientifically groundless advice communicated to decision makers and to inappropriate decisions. As a result, the population of seismic regions continues to face unexpected risks and losses. The international project Global Earthquake Model (GEM) is on the wrong track if it continues to base seismic risk estimates on the standard, mainly probabilistic, methodology for assessing seismic hazard. It is generally accepted that earthquakes are infrequent, low-probability events. However, they keep occurring in earthquake-prone areas with 100% certainty. Given the expectation of a seismic event once per hundred years, the daily probability of occurrence on a certain date may range from 0 to 100% depending on the choice of probability space (which is yet unknown and, therefore, made by a subjective lucky chance
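
    The point about the choice of probability space can be made concrete with the simplest of the "trivial" models the abstract criticizes: under a homogeneous Poisson model with one characteristic event per century, the daily probability is vanishingly small while the multi-century probability approaches certainty. The sketch below is only that illustration, not an endorsement of the Poisson assumption.

```python
import math

RATE_PER_YEAR = 1.0 / 100.0          # one characteristic event per century (assumed)

def prob_at_least_one(rate_per_year, years):
    """P(at least one event in the window) under a homogeneous Poisson model."""
    return 1.0 - math.exp(-rate_per_year * years)

print(f"one day  : {prob_at_least_one(RATE_PER_YEAR, 1 / 365.25):.2e}")
print(f"30 years : {prob_at_least_one(RATE_PER_YEAR, 30):.2%}")
print(f"300 years: {prob_at_least_one(RATE_PER_YEAR, 300):.2%}")
```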

  16. Precision robotic control of agricultural vehicles on realistic farm trajectories

    NASA Astrophysics Data System (ADS)

    Bell, Thomas

    High-precision "autofarming", or precise agricultural vehicle guidance, is rapidly becoming a reality thanks to increasing computing power and carrier-phase differential GPS ("CPDGPS") position and attitude sensors. Realistic farm trajectories will include not only rows but also arcs created by smoothly joining rows or path-planning algorithms, spirals for farming center-pivot irrigated fields, and curved trajectories dictated by nonlinear field boundaries. In addition, fields are often sloped, and accurate control may be required either on linear trajectories or on curved contours. A three-dimensional vehicle model which adapts to changing vehicle and ground conditions was created, and a low-order model for controller synthesis was extracted based on nominal conditions. The model was extended to include a towed implement. Experimentation showed that an extended Kalman filter could identify the vehicle's state in real time. An approximation was derived for the additional positional uncertainty introduced by the noisy "lever-arm correction" necessary to translate the GPS position measurement at the roof antenna to the vehicle's control point on the ground; this approximation was then used to support the assertion that attitude measurement accuracy was as important to control point position measurement as the original position measurement accuracy at the GPS antenna. The low-order vehicle control model was transformed to polar coordinates for control on arcs and spirals. Experimental data showed that the tractor's control point tracked an arc to within a -0.3 cm mean and a 3.4 cm standard deviation and a spiral to within a -0.2 cm mean and a 5.3 cm standard deviation. Cubic splines were used to describe curve trajectories, and a general expression for the time-rate-of-change of curve-related parameters was derived. Four vehicle control algorithms were derived for curve tracking: linear local-error control based on linearizing the vehicle about the curve's radius of
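
    The lever-arm correction mentioned above can be sketched as follows: the roof-antenna position is translated to the ground control point through the body-to-navigation rotation built from roll, pitch, and yaw. The frame convention (Z-Y-X Euler angles into a local NED frame), the lever-arm vector, and the attitude values are assumptions of this illustration, not the thesis's actual configuration.

```python
import numpy as np

def rotation_body_to_nav(roll, pitch, yaw):
    """Z-Y-X (yaw-pitch-roll) rotation matrix from body frame to local NED frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    return np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr               ],
    ])

# Antenna position in local NED coordinates (m) and lever arm from antenna to the
# ground control point expressed in the body frame (all values assumed).
p_antenna_ned = np.array([10.0, 5.0, -3.0])
lever_arm_body = np.array([-1.2, 0.0, 2.8])      # 1.2 m aft, 2.8 m down

roll, pitch, yaw = np.radians([4.0, -2.0, 30.0])
R = rotation_body_to_nav(roll, pitch, yaw)
p_control_ned = p_antenna_ned + R @ lever_arm_body

# Attitude errors map into position error roughly as |lever arm| x angle error,
# which is why attitude accuracy matters as much as raw antenna positioning.
print(p_control_ned)
```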

  17. Most probable paths in temporal weighted networks: An application to ocean transport

    NASA Astrophysics Data System (ADS)

    Ser-Giacomi, Enrico; Vasile, Ruggero; Hernández-García, Emilio; López, Cristóbal

    2015-07-01

    We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths and to a subset of highly probable paths that incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean Sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high probability paths (or even a single most probable one) is enough to characterize global connectivity properties of the network.
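
    A minimal sketch of the core idea is given below: if each directed edge carries a transition probability, the most probable path maximizes the product of edge probabilities, which is equivalent to a shortest path under weights -log p. The toy network and probabilities are invented; the paper's temporal formalism and betweenness definitions are not reproduced here.

```python
import math
import networkx as nx

# Toy aggregated transport network: edge attribute 'p' is the transition
# probability between nodes (values invented for illustration).
edges = [("A", "B", 0.5), ("A", "C", 0.4), ("B", "D", 0.6),
         ("C", "D", 0.7), ("B", "C", 0.2), ("D", "E", 0.9), ("C", "E", 0.1)]

G = nx.DiGraph()
for u, v, p in edges:
    G.add_edge(u, v, p=p, nlp=-math.log(p))   # -log(p) turns products into sums

# Most probable path = shortest path under the -log(p) weights
path = nx.shortest_path(G, "A", "E", weight="nlp")
prob = math.exp(-nx.shortest_path_length(G, "A", "E", weight="nlp"))
print(path, prob)                             # ['A', 'B', 'D', 'E'], 0.27
```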

  18. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081
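
    As generic background for the estimate-pooling models (not the specific models tested in the paper), the sketch below contrasts a linear and a logarithmic opinion pool on conflicting versus merely ambiguous estimates. Both collapse to similar midpoints, which illustrates why pooled "best estimates" alone cannot separate conflict from ambiguity.

```python
import math

def linear_pool(probs, weights=None):
    """Weighted arithmetic average of probability estimates."""
    weights = weights or [1.0 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))

def log_pool(probs, weights=None):
    """Weighted geometric pooling, renormalized over a binary outcome."""
    weights = weights or [1.0 / len(probs)] * len(probs)
    num = math.prod(p ** w for p, w in zip(probs, weights))
    den = num + math.prod((1 - p) ** w for p, w in zip(probs, weights))
    return num / den

# Conflicting sources: one says the event is likely, the other that it is unlikely
print(linear_pool([0.80, 0.20]), log_pool([0.80, 0.20]))   # both 0.5
# Ambiguous but agreeing sources pool to essentially the same midpoint
print(linear_pool([0.55, 0.45]), log_pool([0.55, 0.45]))   # both 0.5
```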

  19. Realistic real-time outdoor rendering in augmented reality.

    PubMed

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic for the last two decades, judging by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and both virtual and real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour is generated with respect to the position of the sun. Secondly, a shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps), is applied. Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480

  20. Using polychromatic X-radiography to examine realistic imitation firearms.

    PubMed

    Austin, J C; Day, C R; Kearon, A T; Valussi, S; Haycock, P W

    2008-10-25

    Sections 36-41 of the Violent Crime Reduction Act 2006, which came into force in England and Wales on 1st October 2007, have placed significant restrictions on the sale and possession of 'realistic imitation firearms'. This legislation attempts to produce a definition of a 'realistic imitation' which clearly differentiates these items from other imitation firearms (which are not covered by the legislation). This paper goes a stage further by demonstrating techniques by which blank-firing realistic imitation firearms that may be suitable for illegal conversion to fire live rounds can be differentiated from other, less 'suitable' (but visually identical) realistic imitations. The article reports on the use of X-radiography, utilizing the bremsstrahlung of a commercial broad-spectrum X-ray source, to identify the differences between alloys constituting the barrels of distinct replica and/or blank-firing handguns. The resulting pseudo-signatures are transmission spectra over a range from 20 to 75 kV, taken at 1 kV intervals, which are extracted from stacks of registered, field-flattened images. It is shown that it is possible to quantify differences between transmission spectra for components of different realistic imitation firearms, and apply the results to determine the suitability of particular gun barrels from blank-firing imitation firearms for illegal conversion to fire live rounds, or related illegal modifications. PMID:18842365
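
    A toy version of comparing two transmission pseudo-signatures over the 20-75 kV range is sketched below using a Beer-Lambert attenuation model. The attenuation coefficients, their assumed energy dependence, and the thicknesses are invented for illustration and do not represent the alloys examined in the paper.

```python
import numpy as np

kv = np.arange(20, 76)                     # tube voltages, 1 kV steps (as in the paper)

def transmission(mu_at_50kv, thickness_mm, kv):
    """Toy Beer-Lambert transmission with attenuation falling off with energy.
    The mu values and the (50/kv)^3 energy dependence are illustrative only."""
    mu = mu_at_50kv * (50.0 / kv) ** 3      # crude energy dependence of mu (1/mm)
    return np.exp(-mu * thickness_mm)

# Two hypothetical barrel materials of equal thickness
t_zinc_alloy = transmission(mu_at_50kv=0.9, thickness_mm=3.0, kv=kv)
t_steel      = transmission(mu_at_50kv=2.4, thickness_mm=3.0, kv=kv)

# A simple scalar measure of how separable the two pseudo-signatures are
rms_difference = np.sqrt(np.mean((t_zinc_alloy - t_steel) ** 2))
print(f"RMS transmission difference over 20-75 kV: {rms_difference:.3f}")
```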