Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before reaching a prescribed count level, taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
ERIC Educational Resources Information Center
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has drawbacks: for Monte Carlo simulation the main issue is computational time; for first-order analysis the main issues are accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen, using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation while using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
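As a rough illustration of the contrast drawn above, the following Python sketch applies the plain mean-value first-order step (linearization at the means) to the Streeter-Phelps critical dissolved-oxygen deficit and checks it against Monte Carlo. Parameter means, standard deviations and the target deficit are illustrative assumptions, and the paper's advanced variant (relinearizing at the sought output level) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def critical_deficit(kd, ka, L0, D0):
    """Streeter-Phelps critical dissolved-oxygen deficit (mg/L)."""
    tc = (1.0 / (ka - kd)) * np.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0)))
    return (kd * L0 / ka) * np.exp(-kd * tc)

# Hypothetical means and standard deviations of the uncertain inputs
means = {"kd": 0.35, "ka": 0.70, "L0": 15.0, "D0": 1.0}   # 1/day, 1/day, mg/L, mg/L
sds   = {"kd": 0.04, "ka": 0.07, "L0": 2.0,  "D0": 0.3}

def f(p):
    return critical_deficit(p["kd"], p["ka"], p["L0"], p["D0"])

# Mean-value first-order analysis: linearize at the means with central differences
Dc_mean = f(means)
var = 0.0
for name, sd in sds.items():
    hi, lo = dict(means), dict(means)
    h = 1e-4 * means[name]
    hi[name] += h
    lo[name] -= h
    sens = (f(hi) - f(lo)) / (2.0 * h)        # dDc/dtheta at the means
    var += (sens * sd) ** 2
Dc_sd = np.sqrt(var)

# Exceedance probability of a target deficit, assuming Dc ~ Normal (first-order assumption)
target = 5.0
p_fo = 1.0 - norm.cdf(target, Dc_mean, Dc_sd)
print(f"E[Dc]={Dc_mean:.2f} mg/L, sd={Dc_sd:.2f}, first-order P(Dc > {target}) = {p_fo:.3f}")

# Monte Carlo cross-check with independent normal inputs
rng = np.random.default_rng(1)
samples = {k: rng.normal(means[k], sds[k], 20000) for k in means}
Dc_mc = critical_deficit(samples["kd"], samples["ka"], samples["L0"], samples["D0"])
print("Monte Carlo P(exceed):", np.mean(Dc_mc > target))
```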
Application of the string method to the study of critical nuclei in capillary condensation.
Qiu, Chunyin; Qian, Tiezheng; Ren, Weiqing
2008-10-21
We adopt a continuum description of the liquid-vapor phase transition in the framework of mean-field theory and use the string method to numerically investigate the critical nuclei for capillary condensation in a slit pore. This numerical approach allows us to determine the critical nuclei corresponding to saddle points of the grand potential, with the chemical potential specified at the outset. The string method locates the minimal energy path (MEP), which is the most probable transition pathway connecting two metastable/stable states in configuration space. From the MEP, the saddle point is determined and the corresponding energy barrier (in the grand potential) is also obtained. Moreover, the MEP shows how the new phase (liquid) grows out of the old phase (vapor) along the most probable transition pathway, from the birth of a critical nucleus to its subsequent expansion. Our calculations run from partial wetting to complete wetting by varying the strength of the attractive wall potential. In the latter case, the string method presents a unified way of computing the critical nuclei, from film formation at the solid surface to bulk condensation via a liquid bridge. The present application of the string method to the numerical study of capillary condensation shows the great power of this method in evaluating the critical nuclei in various liquid-vapor phase transitions.
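The grand-potential functional of the paper is not reproduced here, but the string method itself can be illustrated on a toy two-dimensional double-well energy surface. The sketch below alternates a steepest-descent step with equal-arc-length reparameterization and reads the saddle point and barrier off the converged path; the potential, image count and step size are illustrative choices.

```python
import numpy as np

# Toy 2D energy surface with two minima near (-1, 0) and (1, 0) and a saddle at (0, 0)
def V(x, y):
    return (x**2 - 1.0)**2 + 2.0 * y**2

def gradV(x, y):
    return np.array([4.0 * x * (x**2 - 1.0), 4.0 * y])

n_images, dt, n_steps = 30, 2e-3, 5000
# Initial string: a slightly bent path between the two minima
s = np.linspace(0.0, 1.0, n_images)
string = np.stack([-1.0 + 2.0 * s, 0.3 * np.sin(np.pi * s)], axis=1)

for _ in range(n_steps):
    # 1) Steepest-descent step on every image
    grads = np.array([gradV(x, y) for x, y in string])
    string = string - dt * grads
    # 2) Reparameterize the images to equal arc length along the string
    seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    arc /= arc[-1]
    new_s = np.linspace(0.0, 1.0, n_images)
    string = np.stack([np.interp(new_s, arc, string[:, d]) for d in range(2)], axis=1)

energies = np.array([V(x, y) for x, y in string])
k = int(np.argmax(energies))
print("approximate saddle point:", string[k], " barrier:", energies[k] - energies[0])
```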
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which NASA uses for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value of the probability of passing the demonstration (PPD) and an acceptable value of the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
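A minimal sketch of the binomial point-estimate bookkeeping described above, assuming the common pass criterion of 29 detections out of 29 (an assumption; other success/sample combinations exist):

```python
from scipy.stats import binom, beta

n = 29              # flaws of (nominally) the same size in the demonstration set
misses_allowed = 0  # assumed pass criterion: 29 detections out of 29

def prob_pass_demo(true_pod, n=n, misses_allowed=misses_allowed):
    """Probability of passing the demonstration (PPD) for a given true POD:
    P(number of misses <= misses_allowed) with miss probability 1 - true_pod."""
    return binom.cdf(misses_allowed, n, 1.0 - true_pod)

for pod in (0.90, 0.95, 0.98):
    print(f"true POD {pod:.2f}: PPD = {prob_pass_demo(pod):.3f}")

# Lower 95% confidence bound on POD after observing 29/29 detections
# (Clopper-Pearson / beta relation), which is why 29-of-29 is tied to "90/95"
lcb = beta.ppf(0.05, 29, 1)
print("95% lower confidence bound on POD after 29/29:", round(lcb, 3))
```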
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10^-6, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths and with the centers of the initial cracks spaced uniformly apart. The data used were chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model is developed that can accurately and efficiently grow a collinear array of arbitrary-length cracks from initial size until failure. An algorithm is developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed based upon the probability of the largest crack exceeding a critical crack size. The critical crack size is based on the initial crack size that will grow across the ligament when the neighboring crack has zero length. The probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of initial cracks being greater than a critical crack size. A weakest link sampling approach is developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks to link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes using a Hewlett-Packard workstation.
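The extreme-value lower bound described above reduces to the distribution of the largest of n initial cracks. A hedged sketch with an invented initial-crack-size distribution and critical size (not the study's data):

```python
from scipy.stats import lognorm

def lower_bound_pof(n_cracks, crack_dist, a_crit):
    """Lower bound on the probability of failure: the probability that the largest
    of n_cracks independent initial cracks exceeds the critical size a_crit,
    P(max a_i > a_crit) = 1 - F(a_crit)^n  (an extreme-value calculation)."""
    return 1.0 - crack_dist.cdf(a_crit) ** n_cracks

# Illustrative initial-crack-size distribution and critical size (inches)
crack_dist = lognorm(s=0.5, scale=0.02)     # median initial crack 0.02 in
a_crit = 0.12                               # assumed critical initial size for the ligament
for n in (2, 10, 100):
    print(f"n = {n:3d}: lower-bound P(failure) = {lower_bound_pof(n, crack_dist, a_crit):.2e}")
```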
Local Directed Percolation Probability in Two Dimensions
NASA Astrophysics Data System (ADS)
Inui, Norio; Konno, Norio; Komatsu, Genichi; Kameoka, Koichi
1998-01-01
Using the series expansion method and Monte Carlo simulation, we study the directed percolation probability on the square lattice V_n^0 = {(x, y) in Z^2 : x + y even, 0 <= y <= n, -y <= x <= y}. We calculate the local percolation probability P_n^l, defined as the connection probability between the origin and the site (0, n). The critical behavior of P_inf^l is clearly different from that of the global percolation probability P_inf^g, which is characterized by a critical exponent beta^g. An analysis based on Padé approximants shows beta^l = 2 beta^g. In addition, we find that the series expansion of P_2n^l can be expressed as a function of P_n^g.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for analysis of three-dimensional spaces in which the parameters, such as the trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, thus yielding high accuracy in critical regions, such as the area of the low-probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem at a manageable level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electro-magnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for accurate assessment of rare, very low probability events.
NASA Astrophysics Data System (ADS)
Dávila, Alán; Escudero, Christian; López, Jorge, Dr.
2004-10-01
Several methods have been developed to study phase transitions in nuclear fragmentation. The one used in this research is percolation. This method allows us to fit the resulting data to heavy-ion collision experiments. In systems such as atomic nuclei or molecules, energy is put into the system and the particles move away from each other until their links are broken, while some particles remain linked. The fragment distribution is found to be a power law, so we are witnessing a critical phenomenon. In our model the particles are represented as occupied sites in a cubic array. Each particle has a bond to each of its 6 neighbors. Each bond can be active, if the two particles are linked, or inactive if they are not. When two or more particles are linked, a fragment is formed. The probability for a specific link to be broken cannot be calculated directly, so the probability for a bond to be active is used as the parameter when fitting the data. For a given probability p, several arrays are generated and the fragments are counted. The fragment distribution is then fitted to a power law. The probability that yields the best fit is taken as the critical probability indicating a phase transition. The best fit is found by seeking the fragment distribution that gives the minimal chi-squared when compared to a power law. As additional evidence of criticality, the entropy and normalized variance of the mass are also calculated for each probability.
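A minimal sketch of the described procedure (fully occupied cubic array, bonds active with probability p, fragment counting, and a power-law misfit scan over p), using a small lattice, a union-find cluster labeling, and illustrative fit choices rather than the authors' setup:

```python
import numpy as np

def cluster_sizes(L, p, rng):
    """Bond percolation on an L^3 cubic array: every site occupied, each of the
    nearest-neighbor bonds active with probability p; returns fragment sizes."""
    n = L ** 3
    parent = np.arange(n)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    idx = lambda x, y, z: (x * L + y) * L + z
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                # bonds in the +x, +y, +z directions, so each bond is drawn once
                if x + 1 < L and rng.random() < p: union(i, idx(x + 1, y, z))
                if y + 1 < L and rng.random() < p: union(i, idx(x, y + 1, z))
                if z + 1 < L and rng.random() < p: union(i, idx(x, y, z + 1))

    roots = np.array([find(i) for i in range(n)])
    _, sizes = np.unique(roots, return_counts=True)
    return sizes

def power_law_misfit(sizes):
    """Chi-squared-like misfit of the fragment-size distribution to a power law."""
    s, counts = np.unique(sizes, return_counts=True)
    logs, logc = np.log(s.astype(float)), np.log(counts.astype(float))
    coef = np.polyfit(logs, logc, 1)                  # linear fit in log-log space
    resid = logc - np.polyval(coef, logs)
    return np.sum(resid ** 2) / max(len(logs) - 2, 1), -coef[0]

rng = np.random.default_rng(0)
L = 12                                                # small lattice for speed
for p in np.arange(0.15, 0.36, 0.05):
    sizes = np.concatenate([cluster_sizes(L, p, rng) for _ in range(5)])
    misfit, tau = power_law_misfit(sizes)
    print(f"p={p:.2f}  misfit={misfit:.3f}  apparent exponent={tau:.2f}")
# The p with the smallest misfit is taken as the (apparent) critical probability;
# for bond percolation on the simple cubic lattice, p_c is known to be near 0.249.
```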
On Equivalence between Critical Probabilities of Dynamic Gossip Protocol and Static Site Percolation
NASA Astrophysics Data System (ADS)
Ishikawa, Tetsuya; Hayakawa, Tomohisa
The relationship between the critical probability of gossip protocol on the square lattice and the critical probability of site percolation on the square lattice is discussed. Specifically, these two critical probabilities are analytically shown to be equal to each other. Furthermore, we present a way of evaluating the critical probability of site percolation by approximating the saturation of gossip protocol. Finally, we provide numerical results which support the theoretical analysis.
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
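A minimal sketch of the SPRT decision rule for an IRT-based classification test, using Wald's boundaries and a 2PL item model; the item parameters, indifference region and error rates are illustrative assumptions, not taken from the study.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL IRT probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def sprt_classify(responses, a, b, cut=0.0, delta=0.3, alpha=0.05, beta=0.05):
    """Wald SPRT around an indifference region [cut - delta, cut + delta].
    Returns 'above', 'below', or 'continue' after the observed responses."""
    lower = np.log(beta / (1.0 - alpha))          # accept "below the cut"
    upper = np.log((1.0 - beta) / alpha)          # accept "above the cut"
    llr = 0.0
    for x, ai, bi in zip(responses, a, b):
        p_hi = p_2pl(cut + delta, ai, bi)
        p_lo = p_2pl(cut - delta, ai, bi)
        llr += x * np.log(p_hi / p_lo) + (1 - x) * np.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "above"
        if llr <= lower:
            return "below"
    return "continue"

# Illustrative item bank and simulated response string
rng = np.random.default_rng(3)
a = rng.uniform(0.8, 2.0, 40)       # discriminations
b = rng.uniform(-1.5, 1.5, 40)      # difficulties
true_theta = 0.6
responses = (rng.random(40) < p_2pl(true_theta, a, b)).astype(int)
print(sprt_classify(responses, a, b))
```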
Establishment probability in newly founded populations.
Gusset, Markus; Müller, Michael S; Grimm, Volker
2012-06-20
Establishment success in newly founded populations relies on reaching the established phase, which is defined by characteristic fluctuations of the population's state variables. Stochastic population models can be used to quantify the establishment probability of newly founded populations; however, so far no simple but robust method for doing so existed. To determine a critical initial number of individuals that need to be released to reach the established phase, we used a novel application of the "Wissel plot", where -ln(1 - P_0(t)) is plotted against time t. This plot is based on the equation P_0(t) = 1 - c_1 e^(-omega_1 t), which relates the probability of extinction by time t, P_0(t), to two constants: c_1 describes the probability of a newly founded population reaching the established phase, whereas omega_1 describes the population's probability of extinction per short time interval once established. For illustration, we applied the method to a previously developed stochastic population model of the endangered African wild dog (Lycaon pictus). A newly founded population reaches the established phase if the intercept of the (extrapolated) linear part of the "Wissel plot" with the y-axis, which is -ln(c_1), is negative. For wild dogs in our model, this is the case if a critical initial number of four packs, consisting of eight individuals each, is released. The method we present to quantify the establishment probability of newly founded populations is generic, and inferences thus are transferable to other systems across the field of conservation biology. In contrast to other methods, our approach disaggregates the components of a population's viability by distinguishing establishment from persistence.
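A small sketch of the "Wissel plot" fit on a synthetic extinction curve (the wild-dog model itself is not reproduced); the fitting window and the synthetic constants are assumptions.

```python
import numpy as np

def establishment_probability(t, P0, fit_from=None):
    """Fit the linear late-time part of the 'Wissel plot' y(t) = -ln(1 - P0(t)).
    Under P0(t) = 1 - c1*exp(-omega1*t), the slope of the linear part is omega1
    and the y-intercept is -ln(c1), so c1 = exp(-intercept)."""
    t = np.asarray(t, dtype=float)
    y = -np.log(1.0 - np.asarray(P0, dtype=float))
    if fit_from is None:
        fit_from = len(t) // 2          # assume the second half is already linear
    slope, intercept = np.polyfit(t[fit_from:], y[fit_from:], 1)
    return np.exp(-intercept), slope, intercept

# Synthetic extinction curve with c1 = 1.3 and omega1 = 0.02 per year (valid for
# large t only), mimicking the output of a stochastic population model, plus noise
rng = np.random.default_rng(0)
t = np.arange(20, 121)
P0 = 1.0 - 1.3 * np.exp(-0.02 * t) + rng.normal(0.0, 0.002, t.size)
c1, omega1, intercept = establishment_probability(t, P0)
print(f"c1 ~ {c1:.2f}, omega1 ~ {omega1:.3f}, y-intercept (-ln c1) ~ {intercept:.2f}")
# Here the intercept is negative (c1 > 1), the criterion quoted above for a
# population that reaches the established phase.
```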
Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe
NASA Technical Reports Server (NTRS)
Isaacson, Jeffrey A.; Canizares, Claude R.
1989-01-01
Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A
2011-01-01
Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.
Predictive probability methods for interim monitoring in clinical trials with longitudinal outcomes.
Zhou, Ming; Tang, Qi; Lang, Lixin; Xing, Jun; Tatsuoka, Kay
2018-04-17
In clinical research and development, interim monitoring is critical for better decision-making and minimizing the risk of exposing patients to possible ineffective therapies. For interim futility or efficacy monitoring, predictive probability methods are widely adopted in practice. Those methods have been well studied for univariate variables. However, for longitudinal studies, predictive probability methods using univariate information from only completers may not be most efficient, and data from on-going subjects can be utilized to improve efficiency. On the other hand, leveraging information from on-going subjects could allow an interim analysis to be potentially conducted once a sufficient number of subjects reach an earlier time point. For longitudinal outcomes, we derive closed-form formulas for predictive probabilities, including Bayesian predictive probability, predictive power, and conditional power and also give closed-form solutions for predictive probability of success in a future trial and the predictive probability of success of the best dose. When predictive probabilities are used for interim monitoring, we study their distributions and discuss their analytical cutoff values or stopping boundaries that have desired operating characteristics. We show that predictive probabilities utilizing all longitudinal information are more efficient for interim monitoring than that using information from completers only. To illustrate their practical application for longitudinal data, we analyze 2 real data examples from clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
Liu, Rui; Chen, Pei; Aihara, Kazuyuki; Chen, Luonan
2015-01-01
Identifying early-warning signals of a critical transition for a complex system is difficult, especially when the target system is constantly perturbed by big noise, which makes the traditional methods fail due to the strong fluctuations of the observed data. In this work, we show that the critical transition is not a traditional state-transition but a probability distribution-transition when the noise is not sufficiently small, which, however, is a ubiquitous case in real systems. We present a model-free computational method to detect the warning signals before such transitions. The key idea is a strategy: "making big noise smaller" by a distribution-embedding scheme, which transforms the data from the observed state-variables with big noise to their distribution-variables with small noise, and thus makes the traditional criteria effective because of the significantly reduced fluctuations. Specifically, by increasing the dimension of the observed data through moment expansion, which changes the system from state-dynamics to probability distribution-dynamics, we derive new data in a higher-dimensional space but with much smaller noise. Then, we develop a criterion based on the dynamical network marker (DNM) to signal the impending critical transition using the transformed higher-dimensional data. We also demonstrate the effectiveness of our method in biological, ecological and financial systems. PMID:26647650
NASA Technical Reports Server (NTRS)
Ussery, Warren; Johnson, Kenneth; Walker, James; Rummel, Ward
2008-01-01
This slide presentation reviews the use of terahertz imaging and backscatter radiography in a probability of detection study of external tank (ET) foam, whose shedding can damage the shuttle orbiter. Non-destructive examination (NDE) is performed as one method of preventing critical foam debris during launch. Conventional NDE methods for inspection of the foam are assessed and their deficiencies are reviewed. Two methods for NDE inspection are reviewed: backscatter radiography (BSX) and terahertz (THz) imaging. The purpose of the probability of detection (POD) study was to assess the performance and reliability of BSX and/or THz as an appropriate NDE method. The study used a test article with inserted defects, with a sample of blanks included to test for false positives. The results of the POD study are reported.
Maximum parsimony, substitution model, and probability phylogenetic trees.
Weng, J F; Thomas, D A; Mareels, I
2011-01-01
The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling, including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
A method to compute SEU fault probabilities in memory arrays with error correction
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory locations are read randomly, according to a distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
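The paper's analytical solution is not reproduced here, but the mishap scenario can be sketched with a Monte Carlo model under stated assumptions (Poisson upset and read processes, a uniform read distribution, and illustrative rates):

```python
import numpy as np

def mishap_probability(M, seu_rate, read_rate, T_hours, trials=1000, seed=0):
    """Monte Carlo estimate of the probability that some word accumulates a second
    SEU before it is read (and its single error corrected) during a mission of
    T_hours.  Assumptions: SEUs hit uniformly random words as a Poisson process with
    total rate seu_rate (upsets/hour over the whole array); reads hit uniformly
    random words as a Poisson process with rate read_rate (reads/hour)."""
    rng = np.random.default_rng(seed)
    mishaps = 0
    for _ in range(trials):
        n_seu = rng.poisson(seu_rate * T_hours)
        n_read = rng.poisson(read_rate * T_hours)
        events = np.concatenate([
            np.column_stack([rng.uniform(0, T_hours, n_seu), np.zeros(n_seu)]),   # 0 = upset
            np.column_stack([rng.uniform(0, T_hours, n_read), np.ones(n_read)])]) # 1 = read
        words = np.concatenate([rng.integers(0, M, n_seu), rng.integers(0, M, n_read)])
        order = np.argsort(events[:, 0])
        events, words = events[order], words[order]
        dirty = np.zeros(M, dtype=bool)          # word currently holds one correctable upset
        for (_t, kind), w in zip(events, words):
            if kind == 0:                        # upset
                if dirty[w]:
                    mishaps += 1                 # second upset before a read: mishap
                    break
                dirty[w] = True
            else:                                # read corrects a single error in that word
                dirty[w] = False
    return mishaps / trials

# Illustrative numbers only (not from the paper)
print(mishap_probability(M=4096, seu_rate=0.1, read_rate=2.0, T_hours=500.0))
```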
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.
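A hedged numerical sketch of the integration described above: the error ellipse is converted to a covariance by treating the semi-axes as 1-sigma values (an assumption; rescale first if the ellipse is quoted at another confidence level), and the bivariate Gaussian is integrated over the disk on a polar grid. All numbers are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def prob_stroke_within_radius(stroke_xy, smaj, smin, azimuth_deg, poi_xy, radius,
                              n_r=300, n_theta=720):
    """Probability that the true stroke location lies within `radius` of the point
    of interest, by integrating the bivariate Gaussian implied by the reported
    location error ellipse over the disk.  The azimuth is taken as the bearing of
    the semi-major axis, clockwise from north (an assumption)."""
    az = np.deg2rad(azimuth_deg)
    u = np.array([np.sin(az), np.cos(az)])      # semi-major axis direction (east, north)
    v = np.array([np.cos(az), -np.sin(az)])     # semi-minor axis direction
    cov = smaj**2 * np.outer(u, u) + smin**2 * np.outer(v, v)
    pdf = multivariate_normal(mean=stroke_xy, cov=cov)

    # Midpoint rule on a polar grid centered on the point of interest
    dr, dth = radius / n_r, 2.0 * np.pi / n_theta
    r = (np.arange(n_r) + 0.5) * dr
    th = (np.arange(n_theta) + 0.5) * dth
    rr, tt = np.meshgrid(r, th)
    pts = np.stack([poi_xy[0] + rr * np.cos(tt), poi_xy[1] + rr * np.sin(tt)], axis=-1)
    return float(np.sum(pdf.pdf(pts) * rr) * dr * dth)   # includes the Jacobian r dr dtheta

# Example (kilometres): stroke reported 1.2 km east and 0.5 km north of the asset,
# 1.0 x 0.4 km error ellipse at 30 degrees azimuth, 0.9 km radius of concern
p = prob_stroke_within_radius((1.2, 0.5), 1.0, 0.4, 30.0, (0.0, 0.0), 0.9)
print(f"P(stroke within 0.9 km of the asset) = {p:.3f}")
```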
NASA Technical Reports Server (NTRS)
Hughitt, Brian; Generazio, Edward (Principal Investigator); Nichols, Charles; Myers, Mika (Principal Investigator); Spencer, Floyd (Principal Investigator); Waller, Jess (Principal Investigator); Wladyka, Jordan (Principal Investigator); Aldrin, John; Burke, Eric; Cerecerez, Laura;
2016-01-01
NASA-STD-5009 requires that successful flaw detection by NDE methods be statistically qualified for use on fracture critical metallic components, but does not standardize practices. This task works towards standardizing calculations and record retention with a web-based tool, the NNWG POD Standards Library or NPSL. Test methods will also be standardized with an appropriately flexible appendix to -5009 identifying best practices. Additionally, this appendix will describe how specimens used to qualify NDE systems will be cataloged, stored and protected from corrosion, damage, or loss.
Orlandini, Serena; Pasquini, Benedetta; Caprini, Claudia; Del Bubba, Massimo; Pinzauti, Sergio; Furlanetto, Sandra
2015-11-01
A fast and selective CE method for the determination of zolmitriptan (ZOL) and its five potential impurities has been developed applying the analytical Quality by Design principles. Voltage, temperature, buffer concentration, and pH were investigated as critical process parameters that can influence the critical quality attributes, represented by critical resolution values between peak pairs, analysis time, and peak efficiency of ZOL-dimer. A symmetric screening matrix was employed for investigating the knowledge space, and a Box-Behnken design was used to evaluate the main, interaction, and quadratic effects of the critical process parameters on the critical quality attributes. Contour plots were drawn highlighting important interactions between buffer concentration and pH, and the gained information was merged into the sweet spot plots. Design space (DS) was established by the combined use of response surface methodology and Monte Carlo simulations, introducing a probability concept and thus allowing the quality of the analytical performances to be assured in a defined domain. The working conditions (with the interval defining the DS) were as follows: BGE, 138 mM (115-150 mM) phosphate buffer pH 2.74 (2.54-2.94); temperature, 25°C (24-25°C); voltage, 30 kV. A control strategy was planned based on method robustness and system suitability criteria. The main advantages of applying the Quality by Design concept consisted of a great increase of knowledge of the analytical system, obtained throughout multivariate techniques, and of the achievement of analytical assurance of quality, derived by probability-based definition of DS. The developed method was finally validated and applied to the analysis of ZOL tablets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Katsios, CM; Donadini, M; Meade, M; Mehta, S; Hall, R; Granton, J; Kutsiogiannis, J; Dodek, P; Heels-Ansdell, D; McIntyre, L; Vlahakis, N; Muscedere, J; Friedrich, J; Fowler, R; Skrobik, Y; Albert, M; Cox, M; Klinger, J; Nates, J; Bersten, A; Doig, C; Zytaruk, N; Crowther, M; Cook, DJ
2014-01-01
BACKGROUND: Prediction scores for pretest probability of pulmonary embolism (PE) validated in outpatient settings are occasionally used in the intensive care unit (ICU). OBJECTIVE: To evaluate the correlation of Geneva and Wells scores with adjudicated categories of PE in ICU patients. METHODS: In a randomized trial of thromboprophylaxis, patients with suspected PE were adjudicated as possible, probable or definite PE. Data were then retrospectively abstracted for the Geneva Diagnostic PE score, Wells, Modified Wells and Simplified Wells Diagnostic scores. The chance-corrected agreement between adjudicated categories and each score was calculated. ANOVA was used to compare values across the three adjudicated PE categories. RESULTS: Among 70 patients with suspected PE, agreement was poor between adjudicated categories and Geneva pretest probabilities (kappa 0.01 [95% CI −0.0643 to 0.0941]) or Wells pretest probabilities (kappa −0.03 [95% CI −0.1462 to 0.0914]). Among four possible, 16 probable and 50 definite PEs, there were no significant differences in Geneva scores (possible = 4.0, probable = 4.7, definite = 4.5; P=0.90), Wells scores (possible = 2.8, probable = 4.9, definite = 4.1; P=0.37), Modified Wells (possible = 2.0, probable = 3.4, definite = 2.9; P=0.34) or Simplified Wells (possible = 1.8, probable = 2.8, definite = 2.4; P=0.30). CONCLUSIONS: Pretest probability scores developed outside the ICU do not correlate with adjudicated PE categories in critically ill patients. Research is needed to develop prediction scores for this population. PMID:24083302
Modeling the probability distribution of peak discharge for infiltrating hillslopes
NASA Astrophysics Data System (ADS)
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low values of probability is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge seems to be small for any probability. For a set of parameters, the derived probability distribution of peak discharge is well fitted by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether rational runoff coefficient tables for use with the rational method can be prepared in advance, and a comparison of peak discharges obtained with the GABS model against those measured in an experimental flume for a loamy-sand soil, were carried out.
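Only the Green-Ampt building block of the GABS chain is sketched below, with illustrative parameters; the moisture deficit stands in for the antecedent soil moisture condition discussed above.

```python
import numpy as np

def green_ampt_cumulative(t_hours, Ks, psi, d_theta, tol=1e-8):
    """Cumulative Green-Ampt infiltration F (mm) at time t under ponded conditions,
    solving F = Ks*t + psi*d_theta*ln(1 + F/(psi*d_theta)) by fixed-point iteration.
    Ks in mm/h, suction head psi in mm, d_theta = soil moisture deficit (-)."""
    S = psi * d_theta
    F = Ks * t_hours + 1e-6                 # initial guess
    for _ in range(200):
        F_new = Ks * t_hours + S * np.log(1.0 + F / S)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def infiltration_rate(F, Ks, psi, d_theta):
    """Green-Ampt infiltration capacity f = Ks*(1 + psi*d_theta/F)."""
    return Ks * (1.0 + psi * d_theta / F)

# Illustrative loamy-sand-like parameters; the moisture deficit encodes the
# antecedent condition (wetter soil -> smaller deficit -> less infiltration)
Ks, psi = 30.0, 60.0                        # mm/h, mm
for d_theta in (0.10, 0.25, 0.40):          # wet ... dry antecedent condition
    F1 = green_ampt_cumulative(1.0, Ks, psi, d_theta)
    f1 = infiltration_rate(F1, Ks, psi, d_theta)
    print(f"d_theta={d_theta:.2f}: F(1 h)={F1:.1f} mm, f(1 h)={f1:.1f} mm/h")
```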
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability estimation, based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground-truth observations and atmospheric sounding data, a probability-of-hail index, which provides a simple estimate of the hail potential, has been trained and adapted to the Naples metropolitan study area. The probability of hail has been calculated starting from four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) the vertically integrated liquid density product, respectively, have been selected from the available literature. The other two techniques are based on combined criteria of the above-mentioned methods: the first one (3) is based on linear discriminant analysis, whereas the other one (4) relies on a fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed through ramp membership functions. The performances of the four methods have been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
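A toy illustration of a ramp-membership fuzzy combination for hail probability; the predictors, breakpoints and weights below are invented for illustration and are not the values calibrated for the Naples X-band radar.

```python
import numpy as np

def ramp(x, lo, hi):
    """Ramp membership function: 0 below lo, 1 above hi, linear in between."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def probability_of_hail(max_refl_dbz, echo_depth_above_freezing_km, vil_density):
    """Toy fuzzy-logic combination of three single-polarization hail indicators."""
    mu = np.array([
        ramp(max_refl_dbz, 45.0, 60.0),                 # strong reflectivity core
        ramp(echo_depth_above_freezing_km, 1.0, 5.0),   # 45-dBZ echo depth above the 0 C level
        ramp(vil_density, 2.5, 4.0),                    # VIL density (g/m^3)
    ])
    weights = np.array([0.4, 0.35, 0.25])               # illustrative weights
    return float(np.dot(weights, mu))                   # weighted aggregation in [0, 1]

print(probability_of_hail(max_refl_dbz=57.0,
                          echo_depth_above_freezing_km=3.2,
                          vil_density=3.4))
```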
Cyber security risk assessment for SCADA and DCS networks.
Ralston, P A S; Graham, J H; Hieb, J L
2007-10-01
The growing dependence of critical infrastructures and industrial automation on interconnected physical and cyber-based control systems has resulted in a growing and previously unforeseen cyber security threat to supervisory control and data acquisition (SCADA) and distributed control systems (DCSs). It is critical that engineers and managers understand these issues and know how to locate the information they need. This paper provides a broad overview of cyber security and risk assessment for SCADA and DCS, introduces the main industry organizations and government groups working in this area, and gives a comprehensive review of the literature to date. Major concepts related to the risk assessment methods are introduced, with references cited for more detail. Included are risk assessment methods such as HHM, IIM, and RFRM, which have been applied successfully to SCADA systems with many interdependencies and have highlighted the need for quantifiable metrics. Presented in broad terms is probabilistic risk analysis (PRA), which includes methods such as FTA, ETA, and FMEA. The paper concludes with a general discussion of two recent methods (one based on compromise graphs and one on augmented vulnerability trees) that quantitatively determine the probability of an attack, the impact of the attack, and the reduction in risk associated with a particular countermeasure.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.
NASA Astrophysics Data System (ADS)
Ertaş, Mehmet; Keskin, Mustafa
2015-03-01
By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters; a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes and exhibit a dynamic tricritical point, double critical end point, critical end point, quadruple point and triple point, as well as reentrant behavior, depending strongly on the values of the system parameters. We compare and discuss these dynamic phase diagrams with the dynamic phase diagrams obtained within Glauber-type stochastic dynamics based on mean-field theory.
A human reliability based usability evaluation method for safety-critical software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, R. L.; Tran, T. Q.; Gertman, D. I.
2006-07-01
Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with the human reliability analysis method SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis for heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach that standardizes the priority weighting of the UEP in an effort to improve the method's reliability. (authors)
Fuzzy Bayesian Network-Bow-Tie Analysis of Gas Leakage during Biomass Gasification
Yan, Fang; Xu, Kaili; Yao, Xiwen; Li, Yang
2016-01-01
Biomass gasification technology has developed rapidly in recent years, but fire and poisoning accidents caused by gas leakage restrict the development and promotion of biomass gasification. Therefore, probabilistic safety assessment (PSA) is necessary for biomass gasification systems. Accordingly, Bayesian network-bow-tie (BN-bow-tie) analysis was proposed by mapping bow-tie analysis into a Bayesian network (BN). Causes of gas leakage and the accidents triggered by gas leakage can be obtained by bow-tie analysis, and the BN was used to identify the critical nodes of accidents by introducing three corresponding importance measures. Meanwhile, occurrence probabilities of failure are needed in PSA. In view of the insufficient failure data for biomass gasification, occurrence probabilities of failure that cannot be obtained from standard reliability data sources were determined by fuzzy methods based on expert judgment. An improved approach that considers expert weighting when aggregating fuzzy numbers, including triangular and trapezoidal numbers, was proposed, and the occurrence probabilities of failure were obtained. Finally, safety measures were indicated based on the identified critical nodes. The theoretical occurrence probabilities in one year of gas leakage and the accidents caused by it were reduced to 1/10.3 of their original values by these safety measures. PMID:27463975
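A hedged sketch of aggregating expert opinions given as triangular fuzzy numbers with expert weights, followed by a commonly used possibility-to-probability conversion (Onisawa's function), which may differ from the conversion adopted in this paper; all opinions and weights are invented.

```python
import numpy as np

def aggregate_triangular(opinions, weights):
    """Weighted aggregation of expert opinions given as triangular fuzzy numbers
    (a, m, b); a linear weighted average of each vertex is one common choice."""
    opinions = np.asarray(opinions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ opinions                       # aggregated (a, m, b)

def defuzzify_centroid(tri):
    """Centroid of a triangular fuzzy number (a, m, b)."""
    a, m, b = tri
    return (a + m + b) / 3.0

def fps_to_failure_probability(fps):
    """Onisawa-type conversion from a fuzzy possibility score in (0, 1] to a
    failure probability; commonly used with expert judgement, but not
    necessarily the conversion used in the paper."""
    if fps <= 0.0:
        return 0.0
    k = ((1.0 - fps) / fps) ** (1.0 / 3.0) * 2.301
    return 1.0 / (10.0 ** k)

# Three experts rate the likelihood of a basic event (e.g., a pipe-joint leak)
opinions = [(0.10, 0.20, 0.30), (0.20, 0.30, 0.40), (0.15, 0.25, 0.40)]
weights = [0.5, 0.3, 0.2]     # e.g., derived from experience, title, education scores
agg = aggregate_triangular(opinions, weights)
fps = defuzzify_centroid(agg)
print("aggregated TFN:", agg, " FPS:", round(fps, 3),
      " failure probability:", f"{fps_to_failure_probability(fps):.2e}")
```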
Grigore, Bogdan; Peters, Jaime; Hyde, Christopher; Stein, Ken
2013-11-01
Elicitation is a technique that can be used to obtain probability distribution from experts about unknown quantities. We conducted a methodology review of reports where probability distributions had been elicited from experts to be used in model-based health technology assessments. Databases including MEDLINE, EMBASE and the CRD database were searched from inception to April 2013. Reference lists were checked and citation mapping was also used. Studies describing their approach to the elicitation of probability distributions were included. Data was abstracted on pre-defined aspects of the elicitation technique. Reports were critically appraised on their consideration of the validity, reliability and feasibility of the elicitation exercise. Fourteen articles were included. Across these studies, the most marked features were heterogeneity in elicitation approach and failure to report key aspects of the elicitation method. The most frequently used approaches to elicitation were the histogram technique and the bisection method. Only three papers explicitly considered the validity, reliability and feasibility of the elicitation exercises. Judged by the studies identified in the review, reports of expert elicitation are insufficient in detail and this impacts on the perceived usability of expert-elicited probability distributions. In this context, the wider credibility of elicitation will only be improved by better reporting and greater standardisation of approach. Until then, the advantage of eliciting probability distributions from experts may be lost.
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power law is the proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems such as graph partitioning, graph coloring and spin glasses. In this study, we find that the exponential distributions, or hybrid ones (e.g., power laws with exponential cutoff), popularly used in network science can replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA) and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, based on experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results suggest that, at least for TSP, the power law is not the only proper probability distribution for evolution in EO-like methods; the exponential and hybrid distributions may be other choices.
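The ingredient at issue is only the rank-selection distribution used in EO-style updates; the sketch below compares power-law, exponential and hybrid selection probabilities with illustrative exponents (not the parameter values studied in the paper).

```python
import numpy as np

def rank_selection_probs(n, kind="power", tau=1.4, mu=0.2):
    """Probability of picking rank k (1 = worst component) in EO-style updates.
    'power'       : P(k) ~ k^(-tau)                    (classical tau-EO)
    'exponential' : P(k) ~ exp(-mu * k)
    'hybrid'      : P(k) ~ k^(-tau) * exp(-mu * k)     (power law with cutoff)"""
    k = np.arange(1, n + 1, dtype=float)
    if kind == "power":
        w = k ** (-tau)
    elif kind == "exponential":
        w = np.exp(-mu * k)
    elif kind == "hybrid":
        w = k ** (-tau) * np.exp(-mu * k)
    else:
        raise ValueError(kind)
    return w / w.sum()

rng = np.random.default_rng(0)
n = 100                                   # e.g., number of cities in a TSP instance
for kind in ("power", "exponential", "hybrid"):
    p = rank_selection_probs(n, kind)
    picks = rng.choice(n, size=10000, p=p) + 1
    print(f"{kind:12s} mean selected rank = {picks.mean():6.1f}, P(rank 1) = {p[0]:.3f}")
```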
NASA Technical Reports Server (NTRS)
Brown, Andrew M.
2014-01-01
Numerical and Analytical methods developed to determine damage accumulation in specific engine components when speed variation included. Dither Life Ratio shown to be well over factor of 2 for specific example. Steady-State assumption shown to be accurate for most turbopump cases, allowing rapid calculation of DLR. If hot-fire speed data unknown, Monte Carlo method developed that uses speed statistics for similar engines. Application of techniques allow analyst to reduce both uncertainty and excess conservatism. High values of DLR could allow previously unacceptable part to pass HCF criteria without redesign. Given benefit and ease of implementation, recommend that any finite life turbomachine component analysis adopt these techniques. Probability Values calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new excel macros written to calculate combined load for any specific probability level. Closed form Curve fits generated for widely used 3(sigma) and 2(sigma) probability levels. For design of lightweight aerospace components, obtaining accurate, reproducible, statistically meaningful answer critical.
NASA Astrophysics Data System (ADS)
Petru, Jan; Dolezel, Jiri; Krivda, Vladislav
2017-09-01
In the past, excessive and oversized loads were carried on selected routes that had been adapted to ensure their smooth passage. Over the years this practice was abandoned, and there are currently no earmarked routes adapted for this type of transport. Routes for excessive and oversized loads are now planned so as to ensure passage of the vehicle through the critical points on the road network. Critical points include at-grade and grade-separated road crossings, bridges, toll gates, traffic signs, and electrical and other lines. The article deals with the probabilistic assessment of selected critical points on the route of an excessive load on 1st-class roads, in relation to ensuring passage through a roundabout. The basis for assessing the passage of a vehicle with an excessive load through a roundabout is the long-term results of video analyses of the movement of such transports at similar intersections and the determination of a theoretical probability model of vehicle movement at selected junctions. On the basis of a virtual simulation of the vehicle movement at the junction and the Monte Carlo simulation method, the vehicles' paths are analysed and the probability of the vehicle exiting outside the carriageway at the given junctions is quantified.
Writing for Change: Engaging Juveniles through Alternative Literacy Education
ERIC Educational Resources Information Center
Jacobi, Tobi
2008-01-01
Research on incarceration and educational access continues to reveal the stark reality for many adjudicated youth: without access to educational opportunities recidivism is probable. Yet conventional methods of teaching critical reading, writing, and thinking skills are not always successful for juveniles who have found little success (or hope) in…
Adjoint Fokker-Planck equation and runaway electron dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chang; Brennan, Dylan P.; Bhattacharjee, Amitava
2016-01-15
The adjoint Fokker-Planck equation method is applied to study the runaway probability function and the expected slowing-down time for highly relativistic runaway electrons, including the loss of energy due to synchrotron radiation. In direct correspondence to Monte Carlo simulation methods, the runaway probability function has a smooth transition across the runaway separatrix, which can be attributed to the effect of the pitch-angle scattering term in the kinetic equation. However, for the same numerical accuracy, the adjoint method is more efficient than the Monte Carlo method. The expected slowing-down time gives a novel way to estimate the runaway current decay time in experiments. A new result from this work is that the decay rate of high-energy electrons is very slow when E is close to the critical electric field. This effect contributes further to a hysteresis previously found in the runaway electron population.
Predicting the probability of slip in gait: methodology and distribution study.
Gragg, Jared; Yang, James
2016-01-01
The likelihood of a slip is related to the available and required friction for a certain activity, here gait. Classical slip and fall analysis presumed that a walking surface was safe if the difference between the mean available and required friction coefficients exceeded a certain threshold. Previous research was dedicated to reformulating the classical slip and fall theory to include the stochastic variation of the available and required friction when predicting the probability of slip in gait. However, when predicting the probability of a slip, previous researchers have either ignored the variation in the required friction or assumed the available and required friction to be normally distributed. Also, there are no published results that actually give the probability of slip for various combinations of required and available frictions. This study proposes a modification to the equation for predicting the probability of slip, reducing the previous equation from a double-integral to a more convenient single-integral form. Also, a simple numerical integration technique is provided to predict the probability of slip in gait: the trapezoidal method. The effect of the random variable distributions on the probability of slip is also studied. It is shown that both the required and available friction distributions cannot automatically be assumed as being normally distributed. The proposed methods allow for any combination of distributions for the available and required friction, and numerical results are compared to analytical solutions for an error analysis. The trapezoidal method is shown to be highly accurate and efficient. The probability of slip is also shown to be sensitive to the input distributions of the required and available friction. Lastly, a critical value for the probability of slip is proposed based on the number of steps taken by an average person in a single day.
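A minimal sketch of the single-integral form and the trapezoidal rule described above, with an illustrative normal/lognormal pairing to emphasize that the distributions need not be normal, plus a Monte Carlo cross-check; the friction parameters are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import norm, lognorm

def probability_of_slip(f_required, F_available, lo, hi, n=2001):
    """Single-integral form  P(slip) = P(available < required)
       = integral of f_required(u) * F_available(u) du,
    evaluated with the trapezoidal rule on [lo, hi]."""
    u = np.linspace(lo, hi, n)
    y = f_required(u) * F_available(u)
    h = u[1] - u[0]
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])   # trapezoidal rule

# Example: normally distributed required friction, lognormal available friction
req = norm(loc=0.22, scale=0.05)                  # required friction coefficient in gait
avail = lognorm(s=0.25, scale=0.28)               # available (surface) friction coefficient
p_slip = probability_of_slip(req.pdf, avail.cdf, lo=0.0, hi=1.0)
print(f"P(slip per step) = {p_slip:.4f}")

# Sanity check against a direct Monte Carlo estimate
rng = np.random.default_rng(0)
mc = np.mean(avail.rvs(200000, random_state=rng) < req.rvs(200000, random_state=rng))
print(f"Monte Carlo      = {mc:.4f}")
```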
How Inhomogeneous Site Percolation Works on Bethe Lattices: Theory and Application
NASA Astrophysics Data System (ADS)
Ren, Jingli; Zhang, Liying; Siegmund, Stefan
2016-03-01
Inhomogeneous percolation, because of its closer relationship with real-life systems, can be more useful and reasonable than homogeneous percolation for illustrating the critical phenomena and dynamical behaviour of complex networks. However, due to its intricacy, the theoretical framework of inhomogeneous percolation is far from complete and many challenging problems are still open. In this paper, we first investigate inhomogeneous site percolation on Bethe lattices with two occupation probabilities, and then extend the result to percolation with m occupation probabilities. The critical behaviour of this inhomogeneous percolation is shown clearly by formulating the percolation probability for a given occupation probability p, the critical occupation probability, and the average cluster size as a function of p. Moreover, using this theory, we discuss in detail the diffusion behaviour of an infectious disease (SARS) and present specific disease-control strategies that take into account groups with different infection probabilities.
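For orientation only, the textbook homogeneous Bethe-lattice results (a single occupation probability p, coordination number z) are summarized below; these background formulas are not the two-probability or m-probability generalizations derived in the paper.

```latex
% Textbook homogeneous site percolation on a Bethe lattice of coordination
% number z (background only; not the paper's inhomogeneous generalization).
\begin{align*}
  Q &= 1 - p + p\,Q^{z-1}
      && \text{$Q$: probability a given branch does not reach infinity},\\
  P_\infty(p) &= p\,\bigl(1 - Q^{z}\bigr)
      && \text{percolation probability},\\
  p_c &= \frac{1}{z-1}
      && \text{critical occupation probability},\\
  S(p) &= \frac{1+p}{1-(z-1)\,p}, \quad p < p_c
      && \text{mean cluster size (root occupied)}.
\end{align*}
```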
A quantile-based Time at Risk: A new approach for assessing risk in financial markets
NASA Astrophysics Data System (ADS)
Bolgorian, Meysam; Raei, Reza
2013-11-01
In this paper, we provide a new measure for the evaluation of risk in financial markets. This measure is based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model like Value at Risk (VaR). Just as VaR, for a given financial asset, probability level and time horizon, gives a critical value such that the likelihood that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using a probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and report the results.
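A small sketch of the TaR idea on synthetic returns: a critical event is taken to be a one-period loss beyond a threshold, and TaR is the corresponding quantile of the empirical return-interval distribution. The threshold, return model and probability levels are illustrative assumptions, and the TEPIX data are not reproduced.

```python
import numpy as np

def time_at_risk(returns, loss_threshold, prob_level):
    """Empirical Time at Risk: the critical time T such that
    P(return interval of a critical event > T) = prob_level,
    where a critical event is a one-period return below -loss_threshold."""
    events = np.flatnonzero(np.asarray(returns) < -loss_threshold)
    if len(events) < 2:
        raise ValueError("not enough critical events to form return intervals")
    intervals = np.diff(events)                     # waiting times between events
    # P(tau > T) = prob_level  <=>  T is the (1 - prob_level) quantile of tau
    return np.quantile(intervals, 1.0 - prob_level)

# Synthetic heavy-tailed daily returns standing in for an index series
rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=5000) * 0.01
for q in (0.10, 0.05, 0.01):
    print(f"TaR at level {q:.2f}: {time_at_risk(returns, 0.03, q):.0f} days")
```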
Moro, Marilyn; Westover, M. Brandon; Kelly, Jessica; Bianchi, Matt T.
2016-01-01
Study Objectives: Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. Methods: We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. Results: The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Conclusion: Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. Citation: Moro M, Westover MB, Kelly J, Bianchi MT. Decision modeling in sleep apnea: the critical roles of pretest probability, cost of untreated obstructive sleep apnea, and time horizon. J Clin Sleep Med 2016;12(3):409–418. PMID:26518699
Stochastic mechanics of loose boundary particle transport in turbulent flow
NASA Astrophysics Data System (ADS)
Dey, Subhasish; Ali, Sk Zeeshan
2017-05-01
In a turbulent wall shear flow, we explore, for the first time, the stochastic mechanics of loose boundary particle transport, having variable particle protrusions due to various cohesionless particle packing densities. The mean transport probabilities in contact and detachment modes are obtained. The mean transport probabilities in these modes as a function of Shields number (nondimensional fluid induced shear stress at the boundary) for different relative particle sizes (ratio of boundary roughness height to target particle diameter) and shear Reynolds numbers (ratio of fluid inertia to viscous damping) are presented. The transport probability in contact mode increases with an increase in Shields number attaining a peak and then decreases, while that in detachment mode increases monotonically. For the hydraulically transitional and rough flow regimes, the transport probability curves in contact mode for a given relative particle size of greater than or equal to unity attain their peaks corresponding to the averaged critical Shields numbers, from where the transport probability curves in detachment mode initiate. At an inception of particle transport, the mean probabilities in both the modes increase feebly with an increase in shear Reynolds number. Further, for a given particle size, the mean probability in contact mode increases with a decrease in critical Shields number attaining a critical value and then increases. However, the mean probability in detachment mode increases with a decrease in critical Shields number.
Adjoint method and runaway electron avalanche
Liu, Chang; Brennan, Dylan P.; Boozer, Allen H.; ...
2016-12-16
The adjoint method for the study of runaway electron dynamics in momentum space [Liu et al., 2016, Phys. Plasmas 23, 010702] is rederived using the Green's function method, for both the runaway probability function (RPF) and the expected loss time (ELT). The RPF and ELT obtained using the adjoint method are presented, both with and without the synchrotron radiation reaction force. The adjoint method is then applied to study the runaway electron avalanche, and both the critical electric field and the growth rate of the avalanche are calculated using this fast and novel approach.
NASA Astrophysics Data System (ADS)
Merdan, Ziya; Karakuş, Özlem
2016-11-01
The six-dimensional Ising model with nearest-neighbor pair interactions has been simulated and verified numerically on the Creutz cellular automaton using five-bit demons near the infinite-lattice critical temperature, with linear dimensions L = 4, 6, 8, 10. The order-parameter probability distribution for the six-dimensional Ising model has been calculated at the critical temperature. The constants of the analytical function have been estimated by fitting to the probability function obtained numerically at the finite-size critical point.
Probability Forecasting Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Duncan, M.; Frisbee, J.; Wysack, J.
2014-09-01
Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft, so new, more sophisticated collision-threat characterization methods are needed. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high-risk and potentially high-risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
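The core Monte Carlo step implied here can be sketched as follows; the encounter-plane covariance, nominal miss vector, and hard-body radius are invented values used only to show the mechanics, not data from the paper.

```python
# Minimal sketch of the core Monte Carlo step assumed by this kind of forecast:
# sample the relative miss vector in the encounter plane from the combined
# position-error covariance and count the fraction of trials that fall inside
# the combined hard-body radius. The covariance, miss distance, and radius
# below are made-up illustrative values, not data from the paper.
import numpy as np

rng = np.random.default_rng(1)

mean_miss = np.array([120.0, 80.0])            # nominal miss vector [m]
combined_cov = np.array([[250.0**2, 0.3 * 250.0 * 90.0],
                         [0.3 * 250.0 * 90.0, 90.0**2]])  # combined covariance [m^2]
hard_body_radius = 20.0                        # combined object radius [m]

n_trials = 200_000
samples = rng.multivariate_normal(mean_miss, combined_cov, size=n_trials)
hits = np.linalg.norm(samples, axis=1) < hard_body_radius
pc_estimate = hits.mean()
pc_std_err = hits.std(ddof=1) / np.sqrt(n_trials)
print(f"Pc ~ {pc_estimate:.2e} +/- {pc_std_err:.1e}")
```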
Dynamic Sensor Tasking for Space Situational Awareness via Reinforcement Learning
NASA Astrophysics Data System (ADS)
Linares, R.; Furfaro, R.
2016-09-01
This paper studies the Sensor Management (SM) problem for optical Space Object (SO) tracking. The tasking problem is formulated as a Markov Decision Process (MDP) and solved using Reinforcement Learning (RL). The RL problem is solved using the actor-critic policy gradient approach. The actor provides a policy which is random over actions and given by a parametric probability density function (pdf). The critic evaluates the policy by calculating the estimated total reward or the value function for the problem. The parameters of the policy action pdf are optimized using gradients with respect to the reward function. Both the critic and the actor are modeled using deep neural networks (multi-layer neural networks). The policy neural network takes the current state as input and outputs probabilities for each possible action. This policy is random, and can be evaluated by sampling random actions using the probabilities determined by the policy neural network's outputs. The critic approximates the total reward using a neural network. The estimated total reward is used to approximate the gradient of the policy network with respect to the network parameters. This approach is used to find the non-myopic optimal policy for tasking optical sensors to estimate SO orbits. The reward function is based on reducing the uncertainty for the overall catalog to below a user specified uncertainty threshold. This work uses a 30 km total position error for the uncertainty threshold. This work provides the RL method with a negative reward as long as any SO has a total position error above the uncertainty threshold. This penalizes policies that take longer to achieve the desired accuracy. A positive reward is provided when all SOs are below the catalog uncertainty threshold. An optimal policy is sought that takes actions to achieve the desired catalog uncertainty in minimum time. This work trains the policy in simulation by letting it task a single sensor to "learn" from its performance. The proposed approach for the SM problem is tested in simulation and good performance is found using the actor-critic policy gradient method.
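A toy version of the actor-critic policy-gradient update described above is sketched below, with a linear softmax actor and a tabular critic on a made-up three-state MDP standing in for the sensor-tasking environment and deep networks of the paper.

```python
# Toy actor-critic policy-gradient sketch (linear function approximation rather
# than the deep networks used in the paper). The 3-state, 2-action MDP and its
# rewards are arbitrary stand-ins for the sensor-tasking environment.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.95
# Hypothetical dynamics P[s, a, s'] and rewards R[s, a]
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

theta = np.zeros((n_states, n_actions))   # actor parameters (softmax policy)
w = np.zeros(n_states)                    # critic parameters (state values)
alpha_actor, alpha_critic = 0.05, 0.1

def policy(s):
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

s = 0
for step in range(20_000):
    probs = policy(s)
    a = rng.choice(n_actions, p=probs)
    s_next = rng.choice(n_states, p=P[s, a])
    r = R[s, a]
    td_error = r + gamma * w[s_next] - w[s]        # critic evaluates the policy
    w[s] += alpha_critic * td_error
    grad_log = -probs                              # d log pi(a|s) / d theta[s, :]
    grad_log[a] += 1.0
    theta[s] += alpha_actor * td_error * grad_log  # actor follows the gradient
    s = s_next

print("learned action probabilities per state:")
print(np.array([policy(s) for s in range(n_states)]).round(3))
```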
Probabilistic DHP adaptive critic for nonlinear stochastic control systems.
Herzallah, Randa
2013-06-01
Following the recently developed algorithms for fully probabilistic control design for general dynamic stochastic systems (Herzallah & Kárný, 2011; Kárný, 1996), this paper presents the solution to the probabilistic dual heuristic programming (DHP) adaptive critic method (Herzallah & Kárný, 2011) and a randomized control algorithm for stochastic nonlinear dynamical systems. The purpose of the randomized control input design is to make the joint probability density function of the closed-loop system as close as possible to a predetermined ideal joint probability density function. This paper completes the previous work (Herzallah & Kárný, 2011; Kárný, 1996) by formulating and solving the fully probabilistic control design problem for the more general case of nonlinear stochastic discrete-time systems. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained.
New designs and characterization techniques for thin-film solar cells
NASA Astrophysics Data System (ADS)
Pang, Yutong
This thesis presents a fundamentally new thin-film photovoltaic design and develops several novel characterization techniques that improve the accuracy of thin-film solar cell computational models by improving the accuracy of the input data. We first demonstrate a novel organic photovoltaic (OPV) design, termed a "Slot OPV", in which the active layer is less than 50 nm thick; we apply the principles of slot waveguides to confine light within the active layer. According to our calculations, the guided-mode absorption for a 10 nm thick active layer is equal to the absorption at normal incidence on an OPV with a 100 nm thick active layer. These results, together with the expected improvement in charge extraction for ultrathin layers, suggest that slot OPVs can be designed with greater power conversion efficiency than today's state-of-the-art OPV architectures if practical challenges, such as the efficient coupling of light into these modes, can be overcome. The charge collection probability, i.e., the probability that charges generated by absorption of a photon are successfully collected as current, is a critical feature for all kinds of solar cells. While the electron-beam-induced current (EBIC) method has been used in the past to successfully reconstruct the charge collection probability, this approach is destructive and requires time-consuming sample preparation. We demonstrate a new nondestructive optoelectronic method to reconstruct the charge collection probability by analyzing internal quantum efficiency (IQE) data measured on copper indium gallium diselenide (CIGS) thin-film solar cells. We further improve the method with a parameter-independent regularization approach. We then introduce the Self-Constrained Ill-Posed Inverse Problem (SCIIP) method, which improves the signal-to-noise ratio of the solution by combining the regularization method with system constraints and optimization via an evolutionary algorithm. For a thin-film solar cell optical model to be an accurate representation of reality, the measured refractive index profile of the solar cell used as input to the model must also be accurate. We describe a new method for reconstructing the depth-dependent refractive-index profile with high spatial resolution in thin photoactive layers. This novel technique applies to any thin film, including the photoactive layers of a broad range of thin-film photovoltaics. Together, these methods improve the measurement accuracy of the depth profiles of optical and electronic properties within thin-film photovoltaics, such as refractive index and charge collection probability, which is critical to the understanding, modeling, and optimization of these devices.
Developing a probability-based model of aquifer vulnerability in an agricultural region
NASA Astrophysics Data System (ADS)
Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei
2013-04-01
Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potential based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment gave excellent insight into how parameter uncertainty due to limited observation data propagates through the model. To examine the capacity of the developed probability-based DRASTIC model to predict pollution, the medium, high, and very high risk categories of contamination potential were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, which indicate anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and of characterizing parameter uncertainty via the probability estimation processes.
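As a rough illustration of the expected-value and maximum-probability rating rules mentioned above, the sketch below combines probabilistically characterized ratings into a DRASTIC-style index; the weights follow the conventional DRASTIC scheme and the category probabilities are invented, so this is not the study's calibrated model.

```python
# Illustrative calculation (not the study's calibrated model): combine
# probabilistically characterized parameter ratings into a DRASTIC-style
# vulnerability index, using either the most probable rating or the expected
# rating at a grid cell. Weights follow the conventional DRASTIC scheme; the
# category probabilities below are invented for demonstration.
weights = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

# For each parameter: {rating: estimated probability at this grid cell}
cell_probs = {
    "D": {10: 0.6, 7: 0.4},
    "R": {8: 0.5, 6: 0.5},
    "A": {8: 0.7, 6: 0.3},
    "S": {9: 0.2, 6: 0.8},
    "T": {10: 0.9, 9: 0.1},
    "I": {8: 0.4, 5: 0.6},
    "C": {10: 0.3, 6: 0.7},
}

def index(probs, rule="expected"):
    total = 0.0
    for param, dist in probs.items():
        if rule == "expected":            # expected-value rating
            rating = sum(r * p for r, p in dist.items())
        else:                             # maximum-probability rating
            rating = max(dist, key=dist.get)
        total += weights[param] * rating
    return total

print("expected-value index:", round(index(cell_probs, "expected"), 1))
print("max-probability index:", index(cell_probs, "max"))
```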
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
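The binomial calculation underlying a 90/95 demonstration can be sketched with a one-sided Clopper-Pearson lower bound, as below; this is the standard binomial bound only, not the sequential DOEPOD procedure itself.

```python
# A sketch of the binomial calculation behind a 90/95 POD demonstration: the
# one-sided 95% Clopper-Pearson lower confidence bound on detection probability
# from hit/miss data at a flaw size. This is the standard binomial bound, not
# the full DOEPOD sequential procedure described in the abstract.
from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    """One-sided lower confidence bound on POD from `hits` out of `trials`."""
    if hits == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

for hits, trials in [(28, 29), (29, 29), (45, 46)]:
    lb = pod_lower_bound(hits, trials)
    verdict = "meets" if lb >= 0.90 else "fails"
    print(f"{hits}/{trials} hits -> 95% lower bound on POD = {lb:.3f} ({verdict} 90/95)")
```

With these inputs, 29 consecutive hits (or 45 hits out of 46) are the familiar minimum demonstrations that clear the 90/95 requirement.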
Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks
NASA Astrophysics Data System (ADS)
Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo
2014-12-01
Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have been investigated separately for a long time, but in the real world these two dynamics sometimes interact with each other. In this paper, we explore a model that combines the SIR epidemic spreading model with a local load-sharing cascading failure model. In this model there exists a critical value of the tolerance parameter at which an epidemic with high infection probability can spread out and infect a fraction of the network. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off an abundance of paths and locally blocks the spreading of the epidemic. When the tolerance parameter is larger than the critical value, the epidemic spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method on uncorrelated configuration model (UCM) scale-free networks.
Lifetimes in 124Te: Examining critical-point symmetry in the Te nuclei
Hicks, S. F.; Vanhoy, J. R.; Burkett, P. G.; ...
2017-03-27
The Doppler-shift attenuation method following inelastic neutron scattering was used to determine the lifetimes of nuclear levels up to 3.3-MeV excitation in 124Te. Level energies and spins, γ-ray energies and branching ratios, and multipole-mixing ratios were deduced from measured γ-ray angular distributions at incident neutron energies of 2.40 and 3.30 MeV, γ-ray excitation functions, and γγ coincidence measurements. The newly obtained reduced transition probabilities and level energies for 124Te were compared to critical-point symmetry model predictions. The E(5) and β4 potential critical-point symmetries were also investigated in 122Te and 126Te.
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Alexakis, Alexandros
2018-02-01
The fluctuations of turbulence intensity in a pipe flow around the critical Reynolds number are difficult to study but important because they are related to turbulent-laminar transitions. We propose a rare-event sampling method to study such fluctuations in order to measure the time scale of the transition efficiently. The method is composed of two parts: (i) the measurement of typical fluctuations (the bulk part of an accumulative probability function) and (ii) the measurement of rare fluctuations (the tail part of the probability function) by employing dynamics in which a feedback control of the Reynolds number is implemented. We apply this method to a chaotic model of turbulent puffs proposed by Barkley and confirm that the time scale of turbulence decay increases super-exponentially even for high Reynolds numbers up to Re = 2500, where obtaining sufficient statistics by brute-force calculations is difficult. The method uses a simple procedure of changing the Reynolds number that can be applied even to experiments.
On the validity of Freud's dream interpretations.
Michael, Michael
2008-03-01
In this article I defend Freud's method of dream interpretation against those who criticize it as involving a fallacy-namely, the reverse causal fallacy-and those who criticize it as permitting many interpretations, indeed any that the interpreter wants to put on the dream. The first criticism misconstrues the logic of the interpretative process: it does not involve an unjustified reversal of causal relations, but rather a legitimate attempt at an inference to the best explanation. The judgement of whether or not a particular interpretation is the best explanation depends on the details of the case in question. I outline the kinds of probabilities involved in making the judgement. My account also helps to cash out the metaphors of the jigsaw and crossword puzzles that Freudians have used in response to the 'many interpretations' objection. However, in defending Freud's method of dream interpretation, I do not thereby defend his theory of dreams, which cannot be justified by his interpretations alone.
NASA Technical Reports Server (NTRS)
Mckenzie, R. L.
1974-01-01
The semiclassical approximation is applied to anharmonic diatomic oscillators in excited initial states. Multistate numerical solutions giving the vibrational transition probabilities for collinear collisions with an inert atom are compared with equivalent, exact quantum-mechanical calculations. Several symmetrization methods are shown to correlate accurately the predictions of both theories for all initial states, transitions, and molecular types tested, but only if coupling of the oscillator motion and the classical trajectory of the incident particle is considered. In anharmonic heteronuclear molecules, the customary semiclassical method of computing the classical trajectory independently leads to transition probabilities with anomalous low-energy resonances. Proper accounting of the effects of oscillator compression and recoil on the incident particle trajectory removes the anomalies and restores the applicability of the semiclassical approximation.
Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education
Masel, J.; Humphrey, P. T.; Blackburn, B.; Levine, J. A.
2015-01-01
Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students’ intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes’ theorem and science-and-society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. PMID:26582236
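A small example of the kind of biomedical Bayes'-theorem reasoning the course builds on is sketched below; the prevalence, sensitivity, and specificity are illustrative numbers, not values from the article.

```python
# A small example of the biomedical Bayes'-theorem reasoning described above:
# converting a test's sensitivity and specificity plus disease prevalence into
# the probability of disease given a positive or negative result. The numbers
# are illustrative, not taken from the article.
def post_test_probability(prevalence, sensitivity, specificity, positive=True):
    if positive:
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = prevalence * (1 - sensitivity)
    true_neg = (1 - prevalence) * specificity
    return false_neg / (false_neg + true_neg)

# Rare disease, fairly accurate test: a positive result is still far from certain.
p_pos = post_test_probability(prevalence=0.01, sensitivity=0.90, specificity=0.95)
p_neg = post_test_probability(prevalence=0.01, sensitivity=0.90, specificity=0.95,
                              positive=False)
print(f"P(disease | positive test) = {p_pos:.3f}")   # ~0.154
print(f"P(disease | negative test) = {p_neg:.4f}")   # ~0.0011
```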
The Heuristic Value of p in Inductive Statistical Inference
Krueger, Joachim I.; Heck, Patrick R.
2017-01-01
Many statistical methods yield the probability of the observed data – or data more extreme – under the assumption that a particular hypothesis is true. This probability is commonly known as ‘the’ p-value. (Null Hypothesis) Significance Testing ([NH]ST) is the most prominent of these methods. The p-value has been subjected to much speculation, analysis, and criticism. We explore how well the p-value predicts what researchers presumably seek: the probability of the hypothesis being true given the evidence, and the probability of reproducing significant results. We also explore the effect of sample size on inferential accuracy, bias, and error. In a series of simulation experiments, we find that the p-value performs quite well as a heuristic cue in inductive inference, although there are identifiable limits to its usefulness. We conclude that despite its general usefulness, the p-value cannot bear the full burden of inductive inference; it is but one of several heuristic cues available to the data analyst. Depending on the inferential challenge at hand, investigators may supplement their reports with effect size estimates, Bayes factors, or other suitable statistics, to communicate what they think the data say. PMID:28649206
NASA Technical Reports Server (NTRS)
Cruse, T. A.
1987-01-01
The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells
NASA Technical Reports Server (NTRS)
Arbocz, Johann; Hilburger, Mark W.
2003-01-01
A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.
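The two ways of evaluating the probability integral mentioned above can be contrasted on a toy limit state, as in the sketch below; the buckling-load function, random variables, and design load are invented for illustration and are not the shell database used in the paper.

```python
# Toy comparison (invented limit state, not the shell database in the paper) of
# the two probability-integral evaluations mentioned above: an approximate
# First-Order Second-Moment (FOSM) estimate versus brute-force Monte Carlo for
# P(buckling load < design load).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical limit state: normalized buckling load as a function of an
# imperfection amplitude w (mean 1.0, CoV 30%) and a stiffness factor e
# (mean 1.0, CoV 5%). The functional form is illustrative only.
def buckling_load(w, e):
    return e * (1.0 - 0.4 * np.sqrt(np.abs(w)))

mu = np.array([1.0, 1.0])
sigma = np.array([0.30, 0.05])
design_load = 0.55

# FOSM: linearize g = load - design_load at the mean point.
eps = 1e-6
grad = np.array([
    (buckling_load(mu[0] + eps, mu[1]) - buckling_load(mu[0] - eps, mu[1])) / (2 * eps),
    (buckling_load(mu[0], mu[1] + eps) - buckling_load(mu[0], mu[1] - eps)) / (2 * eps),
])
g_mean = buckling_load(*mu) - design_load
g_std = np.sqrt(np.sum((grad * sigma) ** 2))
p_fosm = norm.cdf(-g_mean / g_std)          # P(g < 0) under the linearization

# Monte Carlo reference.
w = rng.normal(mu[0], sigma[0], 500_000)
e = rng.normal(mu[1], sigma[1], 500_000)
p_mc = np.mean(buckling_load(w, e) < design_load)

print(f"FOSM estimate:        {p_fosm:.4f}")
print(f"Monte Carlo estimate: {p_mc:.4f}")
```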
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e., GDT-TS scores) of protein structural models, and uses these errors to estimate a probability density distribution for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-distribution-based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The Qprob web server and software are freely available at http://calla.rnet.missouri.edu/qprob/.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second, lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
Probabilistic models for neural populations that naturally capture global coupling and criticality
2017-01-01
Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells which outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system whose distribution we can infer from data. Our method is independent of the underlying system’s state space; hence, it can be applied to other systems such as natural scenes or amino acid sequences of proteins which are also known to exhibit criticality. PMID:28926564
NASA Astrophysics Data System (ADS)
Batac, Rene C.; Paguirigan, Antonino A., Jr.; Tarun, Anjali B.; Longjas, Anthony G.
2017-04-01
We propose a cellular automata model for earthquake occurrences patterned after the sandpile model of self-organized criticality (SOC). By incorporating a single parameter describing the probability to target the most susceptible site, the model successfully reproduces the statistical signatures of seismicity. The energy distributions closely follow power-law probability density functions (PDFs) with a scaling exponent of around -1.6, consistent with the expectations of the Gutenberg-Richter (GR) law, for a wide range of targeted triggering probability values. Additionally, for targeted triggering probabilities within the range 0.004-0.007, we observe spatiotemporal distributions that show bimodal behavior, which is not observed for the original sandpile. For this critical range of probability values, the model statistics compare remarkably well with long-period empirical data from earthquakes in different seismogenic regions. The proposed model has key advantages, the foremost of which is that it simultaneously captures the energy, space, and time statistics of earthquakes by introducing just a single parameter, while introducing minimal parameters into the simple rules of the sandpile. We believe that the critical targeting probability parameterizes the memory that is inherently present in earthquake-generating regions.
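A minimal version of a sandpile with a single targeting parameter is sketched below; the grid size, threshold, and toppling rule follow the standard two-dimensional sandpile and may differ in detail from the model in the paper.

```python
# Minimal sandpile sketch with the single targeting parameter described above:
# with probability p_target the next grain is dropped on the currently most
# loaded ("most susceptible") site, otherwise on a random site. Grid size,
# threshold, and number of grains are illustrative; the toppling rule is the
# standard 2D sandpile, which may differ in detail from the paper's model.
import numpy as np

rng = np.random.default_rng(7)
L, threshold, n_grains, p_target = 32, 4, 20_000, 0.005
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(n_grains):
    if rng.random() < p_target:
        i, j = np.unravel_index(np.argmax(grid), grid.shape)  # most susceptible site
    else:
        i, j = rng.integers(L), rng.integers(L)
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= threshold)
        if len(unstable) == 0:
            break
        for x, y in unstable:
            grid[x, y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    grid[nx, ny] += 1   # grains leaving the boundary are lost
    if size:
        avalanche_sizes.append(size)

sizes = np.array(avalanche_sizes)
print(f"{len(sizes)} avalanches, max size {sizes.max()}, mean size {sizes.mean():.1f}")
```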
NASA Astrophysics Data System (ADS)
Rong, Ying; Wen, Huiying
2018-05-01
In this paper, the appearing probability of trucks is introduced and an extended car-following model is presented to analyze traffic flow under a honk environment, based on the consideration of driver's characteristics. The stability condition of the proposed model is obtained through linear stability analysis. In order to study the evolution properties of the traffic wave near the critical point, the mKdV equation is derived by the reductive perturbation method. The results show that the traffic flow becomes more disordered for a larger appearing probability of trucks. Besides, the appearance of a leading truck affects not only the stability of traffic flow, but also the influence of other factors on traffic flow, such as the driver's reaction and the honk effect. These effects are closely correlated with the appearing probability of trucks. Finally, numerical simulations under the periodic boundary condition are carried out to verify the proposed model, and they are consistent with the theoretical findings.
Multifractality and Network Analysis of Phase Transition
Li, Wei; Yang, Chunbin; Han, Jihui; Su, Zhu; Zou, Yijiang
2017-01-01
Many models and real complex systems possess critical thresholds at which the systems shift dramatically from one state to another. The discovery of early warnings in the vicinity of critical points is of great importance for estimating how far the systems are from the critical states. Multifractal detrended fluctuation analysis (MF-DFA) and the visibility graph method have been employed to investigate the multifractal and geometrical properties of the magnetization time series of the two-dimensional Ising model. Multifractality of the time series near the critical point has been uncovered from the generalized Hurst exponents and the singularity spectrum. Both long-term correlation and a broad probability density function are identified as the sources of multifractality. The heterogeneous nature of the networks constructed from the magnetization time series validates the fractal properties. The evolution of the topological quantities of the visibility graph, along with the variation of multifractality, serves as a new early warning of phase transition. These methods and results may provide new insights into the analysis of phase transition problems and can be used as early warnings for a variety of complex systems. PMID:28107414
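For readers unfamiliar with MF-DFA, the sketch below shows the mechanics of extracting generalized Hurst exponents h(q) with first-order detrending; it runs on synthetic white noise rather than Ising magnetization data, so it only illustrates the method.

```python
# A compact MF-DFA sketch (first-order detrending) of the kind used above to
# extract generalized Hurst exponents h(q) from a time series. The input here
# is synthetic noise rather than Ising magnetization data.
import numpy as np

def mfdfa(x, scales, q_values, order=1):
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    n = len(profile)
    hq = {}
    log_s = np.log(scales)
    fq = np.zeros((len(q_values), len(scales)))
    for si, s in enumerate(scales):
        n_seg = n // s
        # segment the profile from both ends so that all data are used
        segs = [profile[v * s:(v + 1) * s] for v in range(n_seg)]
        segs += [profile[n - (v + 1) * s:n - v * s] for v in range(n_seg)]
        f2 = []
        t = np.arange(s)
        for seg in segs:
            coeffs = np.polyfit(t, seg, order)           # local polynomial trend
            residual = seg - np.polyval(coeffs, t)
            f2.append(np.mean(residual ** 2))
        f2 = np.array(f2)
        for qi, q in enumerate(q_values):
            if q == 0:
                fq[qi, si] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                fq[qi, si] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    for qi, q in enumerate(q_values):
        hq[q] = np.polyfit(log_s, np.log(fq[qi]), 1)[0]  # generalized Hurst exponent
    return hq

rng = np.random.default_rng(0)
series = rng.standard_normal(10_000)                     # white noise: h(q) ~ 0.5
scales = np.unique(np.logspace(np.log10(16), np.log10(1000), 12).astype(int))
print(mfdfa(series, scales, q_values=[-4, -2, 0, 2, 4]))
```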
NASA Astrophysics Data System (ADS)
Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.
2018-03-01
Over the years, mask and wafer defect dispositioning has become an increasingly challenging and time-consuming task. With design rules getting smaller, OPC getting more complex, and scanner illumination taking on free-form shapes, the probability of a user performing accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (< 30 nm) were reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex-shape patterns, and is benchmarked against an existing qualified measurement methodology.
Survival of Near-Critical Branching Brownian Motion
NASA Astrophysics Data System (ADS)
Berestycki, Julien; Berestycki, Nathanaël; Schweinsberg, Jason
2011-06-01
Consider a system of particles performing branching Brownian motion with negative drift μ = √(2 − ε) and killed upon hitting zero. Initially there is one particle at x > 0. Kesten (Stoch. Process. Appl. 7:9-47, 1978) showed that the process survives with positive probability if and only if ε > 0. Here we are interested in the asymptotics as ε → 0 of the survival probability Q_μ(x). It is proved that if L = π/√ε then for all x ∈ ℝ, lim_{ε→0} Q_μ(L + x) = θ(x) ∈ (0,1) exists and is a traveling wave solution of the Fisher-KPP equation. Furthermore, we obtain sharp asymptotics of the survival probability when x < L and L − x → ∞. The proofs rely on probabilistic methods developed by the authors in (Berestycki et al., arXiv:1001.2337, 2010). This completes earlier work by Harris, Harris and Kyprianou (Ann. Inst. Henri Poincaré Probab. Stat. 42:125-145, 2006) and confirms predictions made by Derrida and Simon (Europhys. Lett. 78:60006, 2007), which were obtained using nonrigorous PDE methods.
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
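The weighted-sampling idea behind such methods can be illustrated on a much simpler problem than ABSIS addresses: estimating a rare hitting probability for a birth-death jump chain by biasing the birth probability and correcting with likelihood-ratio weights, checked against the exact gambler's-ruin formula. The sketch below is only that illustration, not the ABSIS algorithm.

```python
# A stripped-down illustration of the weighted (importance-sampling) stochastic
# simulation idea discussed above, applied to a simple birth-death jump chain
# rather than the ABSIS algorithm itself: we bias the birth probability upward
# so that the rare event "reach n_high before extinction" is sampled often, and
# correct each trajectory with its likelihood-ratio weight. The exact
# gambler's-ruin formula provides a ground-truth check.
import numpy as np

rng = np.random.default_rng(11)

p_birth, n0, n_high = 0.40, 5, 40        # true per-jump birth probability
p_bias = 0.55                            # biased birth probability used for sampling

def one_trial():
    n, weight = n0, 1.0
    while 0 < n < n_high:
        if rng.random() < p_bias:        # sample a birth under the biased dynamics
            n += 1
            weight *= p_birth / p_bias
        else:                            # sample a death
            n -= 1
            weight *= (1 - p_birth) / (1 - p_bias)
    return weight if n == n_high else 0.0

n_trials = 50_000
estimates = np.array([one_trial() for _ in range(n_trials)])
p_is = estimates.mean()
stderr = estimates.std(ddof=1) / np.sqrt(n_trials)

r = (1 - p_birth) / p_birth              # exact gambler's-ruin probability
p_exact = (1 - r**n0) / (1 - r**n_high)
print(f"importance sampling: {p_is:.3e} +/- {stderr:.1e}")
print(f"exact value:         {p_exact:.3e}")
```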
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
Resource-efficient generation of linear cluster states by linear optics with postselection
Uskov, D. B.; Alsing, P. M.; Fanto, M. L.; ...
2015-01-30
Here we report on theoretical research in photonic cluster-state computing. Finding optimal schemes for generating non-classical photonic states is of critical importance for this field, as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons, with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from optimal. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^(n-1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^(m-1).
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high-speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, with the two matrices combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information is available for only one of the two objects, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
Formation of ZnS nanostructures by a simple way of thermal evaporation
NASA Astrophysics Data System (ADS)
Yuan, H. J.; Xie, S. S.; Liu, D. F.; Yan, X. Q.; Zhou, Z. P.; Ci, L. J.; Wang, J. X.; Gao, Y.; Song, L.; Liu, L. F.; Zhou, W. Y.; Wang, G.
2003-11-01
The mass synthesis of ZnS nanobelts, nanowires, and nanoparticles has been achieved by a simple method of thermal evaporation of ZnS powders onto silicon substrates in the presence of Au catalyst. The temperature of the substrates and the concentration of ZnS vapor were the critical experimental parameters for the formation of different morphologies of ZnS nanostructures. Scanning electron microscopy and transmission electron microscopy show that the diameters of as-prepared nanowires were 30-70 nm. The UV emission at 374 nm is probably related to the exciton emission, while the mechanism of blue emission at 443 nm is probably mainly due to the presence of various surface states.
NASA Astrophysics Data System (ADS)
Ding, Jian; Li, Li
2018-05-01
We initiate the study on chemical distances of percolation clusters for level sets of two-dimensional discrete Gaussian free fields as well as loop clusters generated by two-dimensional random walk loop soups. One of our results states that the chemical distance between two macroscopic annuli away from the boundary for the random walk loop soup at the critical intensity is of dimension 1 with positive probability. Our proof method is based on an interesting combination of a theorem of Makarov, isomorphism theory, and an entropic repulsion estimate for Gaussian free fields in the presence of a hard wall.
The stability of portfolio investment in stock crashes
NASA Astrophysics Data System (ADS)
Li, Yun-Xian; Qian, Zhen-Wei; Li, Jiang-Cheng; Tang, Nian-Sheng; Mei, Dong-Cheng
2016-08-01
The stability of portfolio investment during stock market crashes with a Markowitz portfolio is investigated by means of theoretical analysis and empirical simulation. From numerical simulation of the mean escape time (MET), we conclude that: (i) an increasing number (Np) of stocks in the Markowitz portfolio induces a maximum in the curve of MET versus the initial position; (ii) a critical value of Np in the behavior of MET versus the long-run variance or amplitude of volatility fluctuations maximally enhances the stability of portfolio investment. When Np takes values below the critical value, increasing Np enhances the stability of portfolio investment, but it restrains stability when Np takes values above the critical value. In addition, good agreement of both the MET and the probability density functions of returns is found between real data and theoretical results.
A new analysis of the effects of the Asian crisis of 1997 on emergent markets
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Liu, Y.
2007-07-01
This work is devoted to the study of the Asian crisis of 1997, and its consequences on emerging markets. We have done so by means of a phase transition model. We have analyzed the crashes on leading indices of Hong Kong (HSI), Turkey (XU100), Mexico (MMX), Brazil (BOVESPA) and Argentina (MERVAL). We were able to obtain optimum values for the critical date, corresponding to the most probable date of the crash. The estimation of the critical date was excellent except for the MERVAL index; this improvement is due to a previous analysis of the parameters involved. We only used data from before the true crash date in order to obtain the predicted critical date. This article's conclusions are largely obtained via ad hoc empirical methods.
A Probabilistic, Facility-Centric Approach to Lightning Strike Location
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William p.; Merceret, Francis J.
2012-01-01
A new probabilistic facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on, or even within, the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced-current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
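The core computation can be sketched as a Monte Carlo integration of a bivariate Gaussian over a disk centered on a facility offset from the ellipse center; the ellipse parameters, facility offset, and radius below are illustrative values only, not operational data.

```python
# A hedged sketch of the core computation described above: integrate a
# bivariate Gaussian position density (from a lightning location error ellipse)
# over a disk of radius R centered on a facility that need not coincide with
# the ellipse center. Monte Carlo integration is used here for simplicity; the
# ellipse parameters and facility offset are illustrative values only.
import numpy as np

rng = np.random.default_rng(42)

# Error ellipse: semi-major/semi-minor axes (1-sigma, km) and orientation.
sigma_major, sigma_minor, theta = 1.2, 0.5, np.radians(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
cov = rot @ np.diag([sigma_major**2, sigma_minor**2]) @ rot.T

stroke_location = np.array([0.0, 0.0])     # most likely stroke position (ellipse center)
facility = np.array([1.5, -0.8])           # facility of interest (km, hypothetical)
radius = 1.0                               # radius of concern around the facility (km)

n = 1_000_000
samples = rng.multivariate_normal(stroke_location, cov, size=n)
inside = np.linalg.norm(samples - facility, axis=1) < radius
p = inside.mean()
print(f"P(stroke within {radius} km of facility) ~ {p:.4f} "
      f"+/- {inside.std(ddof=1)/np.sqrt(n):.4f}")
```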
Moreau, Gaétan; Michaud, J-P
2017-01-01
LaMotte and Wells re-analyzed and criticized one of our articles in which we proposed a novel statistical test for predicting postmortem interval from insect succession data. Using simple mathematical examples, we demonstrate that LaMotte and Wells erred because their analyses are based on an erroneous interpretation of the nature of probabilities that disregards more than 300 years of scientific literature on probability combination. We also argue that the methods presented in our article, more specifically the use of degree-day-based logistic regression analysis to model succession, was a positive contribution to the fields of forensic entomology and carrion ecology, which LaMotte and Wells forgot to mention by instead focusing on issues that were either trivial or did not exist.
Review of Literature on Probability of Detection for Liquid Penetrant Nondestructive Testing
2011-11-01
increased maintenance costs, or catastrophic failure of safety-critical structure. Knowledge of the reliability achieved by NDT methods, including ... representative components to gather data for statistical analysis, which can be prohibitively expensive. To account for sampling variability inherent in any ... Sioux City and Pensacola. (Those recommendations were discussed in Section 3.4.) Drury et al. report on a factorial experiment aimed at identifying the
Conceptual, Methodological, and Ethical Problems in Communicating Uncertainty in Clinical Evidence
Han, Paul K. J.
2014-01-01
The communication of uncertainty in clinical evidence is an important endeavor that poses difficult conceptual, methodological, and ethical problems. Conceptual problems include logical paradoxes in the meaning of probability and “ambiguity”— second-order uncertainty arising from the lack of reliability, credibility, or adequacy of probability information. Methodological problems include questions about optimal methods for representing fundamental uncertainties and for communicating these uncertainties in clinical practice. Ethical problems include questions about whether communicating uncertainty enhances or diminishes patient autonomy and produces net benefits or harms. This article reviews the limited but growing literature on these problems and efforts to address them and identifies key areas of focus for future research. It is argued that the critical need moving forward is for greater conceptual clarity and consistent representational methods that make the meaning of various uncertainties understandable, and for clinical interventions to support patients in coping with uncertainty in decision making. PMID:23132891
Using the case study teaching method to promote college students' critical thinking skills
NASA Astrophysics Data System (ADS)
Terry, David Richard
2007-12-01
The purpose of this study was to examine general and domain-specific critical thinking skills in college students, particularly ways in which these skills might be increased through the use of the case study method of teaching. General critical thinking skills were measured using the Watson-Glaser Critical Thinking Appraisal (WGCTA) Short Form, a forty-item paper-and-pencil test designed to measure important abilities involved in critical thinking, including inference, recognition of assumptions, deduction, interpretation, and evaluation of arguments. The ability to identify claims and support those claims with evidence is also an important aspect of critical thinking. I developed a new instrument, the Claim and Evidence Assessment Tool (CEAT), to measure these skills in a domain-specific manner. Forty undergraduate students in a general science course for non-science majors at a small two-year college in the northeastern United States experienced positive changes in general critical thinking according to results obtained using the Watson-Glaser Critical Thinking Appraisal (WGCTA). In addition, the students showed cumulative improvement in their ability to identify claims and evidence, as measured by the Claim and Evidence Assessment Tool (CEAT). Mean score on the WGCTA improved from 22.15 +/- 4.59 to 23.48 +/- 4.24 (out of 40), and the mean CEAT score increased from 14.98 +/- 3.28 to 16.20 +/- 3.08 (out of 24). These increases were modest but statistically and educationally significant. No differences in claim and evidence identification were found between students who learned about specific biology topics using the case study method of instruction and those who were engaged in more traditional instruction, and the students' ability to identify claims and evidence and their factual knowledge showed little if any correlation. The results of this research were inconclusive regarding whether or not the case study teaching method promotes college students' general or domain-specific critical thinking skills, and future research addressing this issue should probably utilize larger sample sizes and a pretest-posttest randomized experimental design.
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes its implementation in detail. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
Critical spreading dynamics of parity conserving annihilating random walks with power-law branching
NASA Astrophysics Data System (ADS)
Laise, T.; dos Anjos, F. C.; Argolo, C.; Lyra, M. L.
2018-09-01
We investigate the critical spreading of the parity conserving annihilating random walks model with Lévy-like branching. The random walks are considered to perform normal diffusion with probability p on the sites of a one-dimensional lattice, annihilating in pairs by contact. With probability 1 - p, each particle can also produce two offspring which are placed at a distance r from the original site following a power-law Lévy-like distribution P(r) ∝ 1/r^α. We perform numerical simulations starting from a single particle. A finite-time scaling analysis is employed to locate the critical diffusion probability p_c below which a finite density of particles develops in the long-time limit. Further, we estimate the spreading dynamical exponents related to the increase of the average number of particles at the critical point and its respective fluctuations. The critical exponents deviate from those of the counterpart model with short-range branching for small values of α. The numerical data suggest that continuously varying spreading exponents set in while the branching process still results in diffusive-like spreading.
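As a rough illustration of the model described above, the sketch below simulates a parity-conserving annihilating random walk with Lévy-like branching on a small one-dimensional lattice, starting from a single particle. The lattice size, time horizon, number of runs, offspring-placement sampler, and parameter values are illustrative assumptions, not the authors' simulation setup.

```python
import numpy as np

def levy_branching_arw(p=0.5, alpha=2.0, L=10001, steps=2000, rng=None):
    """Toy simulation of parity-conserving annihilating random walks with
    power-law (Levy-like) branching on a 1D ring, started from one particle.
    Returns the number of surviving particles after each update."""
    rng = np.random.default_rng() if rng is None else rng
    occ = np.zeros(L, dtype=bool)
    occ[L // 2] = True
    n_t = []
    for _ in range(steps):
        sites = np.flatnonzero(occ)
        if sites.size == 0:
            n_t.append(0)
            continue
        s = rng.choice(sites)                           # pick a particle at random
        if rng.random() < p:                            # diffusion step
            target = (s + rng.choice((-1, 1))) % L
            occ[s] = False
            occ[target] = ~occ[target]                  # pairwise annihilation on contact
        else:                                           # branching: two offspring at distance r
            r = max(1, int(rng.pareto(alpha - 1) + 1))  # tail ~ r^(-alpha), illustrative sampler
            for target in ((s - r) % L, (s + r) % L):
                occ[target] = ~occ[target]              # annihilate if the site was occupied
        n_t.append(int(occ.sum()))
    return np.array(n_t)

# Average N(t) over independent runs to inspect the spreading behaviour near p_c.
runs = np.array([levy_branching_arw(p=0.6, alpha=2.2, steps=1000) for _ in range(50)])
print(runs.mean(axis=0)[-5:])
```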
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
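The core of the approach just described, repeated counts at the same sites under imperfect detection, can be illustrated with a basic binomial N-mixture likelihood. The sketch below is a minimal version with a single Poisson abundance parameter and a constant detection probability, not the authors' spatial point process model; the simulated values of lambda and p are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, binom

def nmixture_negloglik(params, counts, n_max=200):
    """Negative log-likelihood of a basic binomial N-mixture model:
    N_i ~ Poisson(lambda), y_ij | N_i ~ Binomial(N_i, p).
    `counts` is a (sites x replicates) array of counts from repeated images."""
    lam = np.exp(params[0])                  # abundance intensity, log link
    p = 1.0 / (1.0 + np.exp(-params[1]))     # detection probability, logit link
    n_grid = np.arange(n_max + 1)
    prior = poisson.pmf(n_grid, lam)         # marginalize over the latent abundance N
    ll = 0.0
    for y in counts:
        lik_n = prior * np.prod(binom.pmf(y[:, None], n_grid[None, :], p), axis=0)
        ll += np.log(lik_n.sum() + 1e-300)
    return -ll

# Simulated data with assumed true lambda = 5 and p = 0.76 (illustration only).
rng = np.random.default_rng(1)
N = rng.poisson(5, size=100)                             # latent abundance at 100 sites
counts = rng.binomial(N[:, None], 0.76, size=(100, 3))   # 3 overlapping images per site
fit = minimize(nmixture_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)
```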
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
Nucleation Rate Analysis of Methane Hydrate from Molecular Dynamics Simulations
Yuhara, Daisuke; Barnes, Brian C.; Suh, Donguk; ...
2015-01-06
Clathrate hydrates are solid crystalline structures most commonly formed from solutions that have nucleated to form a mixed solid composed of water and gas. Understanding the mechanism of clathrate hydrate nucleation is essential to grasp the fundamental chemistry of these complex structures and their applications. Molecular dynamics (MD) simulation is an ideal method to study nucleation at the molecular level because the size of the critical nucleus and formation rate occur on the nano scale. Moreover, various analysis methods for nucleation have been developed through MD to analyze nucleation. In particular, the mean first-passage time (MFPT) and survival probability (SP) methods have proven to be effective in procuring the nucleation rate and critical nucleus size for monatomic systems. This study assesses the MFPT and SP methods, previously used for monatomic systems, when applied to analyzing clathrate hydrate nucleation. Because clathrate hydrate nucleation is relatively difficult to observe in MD simulations (due to its high free energy barrier), these methods have yet to be applied to clathrate hydrate systems. In this study, we have analyzed the nucleation rate and critical nucleus size of methane hydrate using MFPT and SP methods from data generated by MD simulations at 255 K and 50 MPa. MFPT was modified for clathrate hydrate from the original version by adding the maximum likelihood estimate and growth effect term. The nucleation rates calculated by the MFPT and SP methods agree within 5%; the critical nucleus size estimated by the MFPT method was 50% higher than values obtained through other more rigorous but computationally expensive estimates. These methods can also be extended to the analysis of other clathrate hydrates.
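The MFPT analysis mentioned above fits the mean first-passage time as a function of the largest-nucleus size to a sigmoidal (error-function) form whose parameters yield the nucleation rate and critical nucleus size. The sketch below fits that Wedekind-type form to synthetic data; the data values, box volume, and noise level are placeholders, not the paper's MD results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def mfpt_curve(n, tau_J, n_star, c):
    """Sigmoidal mean first-passage time vs. largest-nucleus size n
    (Wedekind-type form): tau(n) = tau_J/2 * (1 + erf(c*(n - n_star)));
    tau_J is the inverse of the nucleation rate times the system volume."""
    return 0.5 * tau_J * (1.0 + erf(c * (n - n_star)))

# Illustrative data: mean first-passage times (ns) for reaching nucleus size n,
# roughly mimicking a hydrate-like sigmoid; not the paper's actual MD data.
n = np.arange(5, 200, 5)
tau_obs = mfpt_curve(n, tau_J=800.0, n_star=60.0, c=0.05)
tau_obs += np.random.default_rng(0).normal(0, 10, n.size)

popt, _ = curve_fit(mfpt_curve, n, tau_obs, p0=[500.0, 50.0, 0.1])
tau_J, n_star, c = popt
volume_nm3 = 100.0                      # assumed simulation-box volume
rate = 1.0 / (tau_J * volume_nm3)       # nucleation rate per unit time and volume
print(f"critical nucleus size ~ {n_star:.0f}, nucleation rate ~ {rate:.2e} / ns / nm^3")
```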
NASA Astrophysics Data System (ADS)
Mulyana, Cukup; Muhammad, Fajar; Saad, Aswad H.; Mariah, Riveli, Nowo
2017-03-01
The storage tank is the most critical component in an LNG regasification terminal. It carries a risk of failure and accident with impacts on human health and the environment. Risk assessment is conducted to detect and reduce the risk of storage tank failure. The aim of this research is to determine and calculate the probability of failure in an LNG regasification unit, where failure is taken to be a Boiling Liquid Expanding Vapor Explosion (BLEVE) or a jet fire in the LNG storage tank component. The failure probability is determined using Fault Tree Analysis (FTA), and the impact of the heat radiation generated is also calculated. Fault trees for BLEVE and jet fire on the storage tank component were constructed, yielding failure probabilities of 5.63 × 10^-19 for BLEVE and 9.57 × 10^-3 for jet fire. The jet fire failure probability is high enough that it needs to be reduced by customizing the PID scheme of the LNG regasification unit in pipeline number 1312 and unit 1; after customization, the failure probability is reduced to 4.22 × 10^-6.
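For readers unfamiliar with FTA arithmetic, the sketch below shows how a top-event probability is combined from basic events through OR and AND gates under an independence assumption. The basic-event probabilities and gate structure are hypothetical, not the fault tree from this study.

```python
def or_gate(*probs):
    """Probability that at least one of several independent basic events occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """Probability that all independent basic events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Hypothetical basic-event probabilities (not the paper's data): a jet fire
# requires a leak AND an ignition source; the leak can arise from several causes.
p_leak = or_gate(1e-3, 5e-4, 2e-4)      # flange failure, valve failure, pipe crack
p_ignition = 1e-2
p_jet_fire = and_gate(p_leak, p_ignition)
print(f"top-event probability ~ {p_jet_fire:.2e}")
```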
NASA Astrophysics Data System (ADS)
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure modes can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis, whereas in fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
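As a small numerical illustration of the fuzzy weighted geometric mean step described above, the sketch below combines triangular fuzzy ratings for one failure mode using the common component-wise approximation of the extension principle. The ratings, risk factors, and weights are invented for illustration and are not taken from the paper.

```python
import numpy as np

def fuzzy_wgm(ratings, weights):
    """Weighted geometric mean of triangular fuzzy ratings (l, m, u).
    `ratings` is a list of (l, m, u) tuples, e.g. for severity, occurrence, detection;
    `weights` are crisp importance weights summing to 1."""
    ratings = np.asarray(ratings, dtype=float)       # shape (k, 3)
    weights = np.asarray(weights, dtype=float)[:, None]
    return np.prod(ratings ** weights, axis=0)       # component-wise on (l, m, u)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Hypothetical fuzzy ratings for one failure mode (1-10 scale) and illustrative weights.
frpn = fuzzy_wgm([(6, 7, 8), (3, 4, 5), (5, 6, 7)], [0.4, 0.3, 0.3])
print(frpn, defuzzify(frpn))
```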
Zhuang, Jiancang; Ogata, Yosihiko
2006-04-01
The space-time epidemic-type aftershock sequence model is a stochastic branching process in which earthquake activity is classified into background and clustering components and each earthquake triggers other earthquakes independently according to certain rules. This paper gives the probability distributions associated with the largest event in a cluster and their properties for all three cases when the process is subcritical, critical, and supercritical. One of the direct uses of these probability distributions is to evaluate the probability of an earthquake to be a foreshock, and magnitude distributions of foreshocks and nonforeshock earthquakes. To verify these theoretical results, the Japan Meteorological Agency earthquake catalog is analyzed. The proportion of events that have 1 or more larger descendants in total events is found to be as high as about 15%. When the differences between background events and triggered event in the behavior of triggering children are considered, a background event has a probability about 8% to be a foreshock. This probability decreases when the magnitude of the background event increases. These results, obtained from a complicated clustering model, where the characteristics of background events and triggered events are different, are consistent with the results obtained in [Ogata, Geophys. J. Int. 127, 17 (1996)] by using the conventional single-linked cluster declustering method.
Critical Two-Point Function for Long-Range O(n) Models Below the Upper Critical Dimension
NASA Astrophysics Data System (ADS)
Lohmann, Martin; Slade, Gordon; Wallace, Benjamin C.
2017-12-01
We consider the n-component |φ|^4 lattice spin model (n ≥ 1) and the weakly self-avoiding walk (n = 0) on Z^d, in dimensions d = 1, 2, 3. We study long-range models based on the fractional Laplacian, with spin-spin interactions or walk step probabilities decaying with distance r as r^{-(d+α)} with α ∈ (0, 2). The upper critical dimension is d_c = 2α. For ε > 0 and α = (d+ε)/2, the dimension d = d_c − ε is below the upper critical dimension. For small ε, weak coupling, and all integers n ≥ 0, we prove that the two-point function at the critical point decays with distance as r^{-(d-α)}. This "sticking" of the critical exponent at its mean-field value was first predicted in the physics literature in 1972. Our proof is based on a rigorous renormalisation group method. The treatment of observables differs from that used in recent work on the nearest-neighbour 4-dimensional case, via our use of a cluster expansion.
Generic Degraded Configuration Probability Analysis for DOE Codisposal Waste Package
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.F.A. Deng; M. Saglam; L.J. Gratton
2001-05-23
In accordance with the technical work plan, ''Technical Work Plan For: Department of Energy Spent Nuclear Fuel Work Packages'' (CRWMS M&O 2000c), this Analysis/Model Report (AMR) is developed for the purpose of screening out degraded configurations for U.S. Department of Energy (DOE) spent nuclear fuel (SNF) types. It performs the degraded configuration parameter and probability evaluations of the overall methodology specified in the ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2000, Section 3) to qualifying configurations. Degradation analyses are performed to assess realizable parameter ranges and physical regimes for configurations. Probability calculations are then performed for configurations characterized by k_eff in excess of the Critical Limit (CL). The scope of this document is to develop a generic set of screening criteria or models to screen out degraded configurations having potential for exceeding a criticality limit. The developed screening criteria include arguments based on physical/chemical processes and probability calculations and apply to DOE SNF types when codisposed with the high-level waste (HLW) glass inside a waste package. The degradation takes place inside the waste package and occurs long after repository licensing has expired. The emphasis of this AMR is on degraded configuration screening, and the probability analysis is one of the approaches used for screening. The intended use of the model is to apply the developed screening criteria to each DOE SNF type following the completion of the degraded mode criticality analysis internal to the waste package.
The Safety Course Design and Operations of Composite Overwrapped Pressure Vessels (COPV)
NASA Technical Reports Server (NTRS)
Saulsberry, Regor; Prosser, William
2015-01-01
Following a Commercial Launch Vehicle On-Pad COPV (Composite Overwrapped Pressure Vessel) failure, a request was received by the NESC (NASA Engineering and Safety Center) on June 14, 2014. An assessment was approved on July 10, 2014, to develop and assess the capability of scanning eddy current (EC) nondestructive evaluation (NDE) methods for mapping thickness and inspecting for flaws. Current methods could not identify thickness reduction from necking, and critical flaw detection was not possible with conventional dye penetrant (PT) methods, so sensitive EC scanning techniques were needed. Developmental methods existed, but had not been fully developed, nor had the requisite capability assessment (i.e., a POD (Probability of Detection) study) been performed.
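A POD study of the kind referenced above is commonly summarized by a probability-of-detection curve fitted to hit/miss inspection data. The sketch below fits a standard log-logistic POD model and reads off an approximate a90 flaw size; the flaw depths and outcomes are fabricated for illustration, and a real study would also report confidence bounds (e.g., a90/95).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical hit/miss eddy-current inspection data: flaw depth (mm) and whether
# the flaw was detected. Values are fabricated purely to illustrate the POD fit.
depth = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.50, 0.60, 0.80, 1.00, 1.20])
hit   = np.array([0,    0,    0,    1,    0,    1,    1,    1,    1,    1,    1,    1])

# Classic log-logistic POD model: P(detect | a) = logistic(b0 + b1 * log(a)).
X = sm.add_constant(np.log(depth))
pod_fit = sm.GLM(hit, X, family=sm.families.Binomial()).fit()

a_grid = np.linspace(0.1, 1.5, 200)
pod = pod_fit.predict(sm.add_constant(np.log(a_grid)))
idx = int(np.searchsorted(pod, 0.90))                 # POD is monotone increasing here
a90 = a_grid[min(idx, a_grid.size - 1)]               # smallest flaw with ~90% detection
print(f"a90 ~ {a90:.2f} mm (no confidence bound; a full study would report a90/95)")
```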
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, W.C.; Graham, A.J.; Department of Physics and Astronomy, Appalachian State University, Boone, North Carolina 28608
We obtain the nucleation rate of critical droplets for an elastic string moving in a φ^6 local potential and subject to noise and damping forces. The critical droplet is a bound soliton-antisoliton pair that carries a section of the string out of the metastable central minimum into one of the stable side minima. The frequencies of small oscillations about the critical droplet are obtained from a Heun equation. We solve the Fokker-Planck equation for the phase-space probability density by projecting it onto the eigenfunction basis obtained from the Heun equation. We employ Farkas' 'flux-overpopulation' method to obtain boundary conditions for solving the Fokker-Planck equation; these restrict the validity of our solution to the moderate to heavy damping regime. We present results for the rate as a function of temperature, well depth, and damping.
Kaddoura, Mahmoud A
2010-09-01
It is essential for nurses to develop critical thinking skills to ensure their ability to provide safe and effective care to patients with complex and variable needs in ever-changing clinical environments. To date, very few studies have been conducted to examine how nursing orientation programs develop the critical thinking skills of novice critical care nurses. Strikingly, no research studies could be found about the American Association of Critical Care Nurses Essentials of Critical Care Orientation (ECCO) program and specifically its effect on the development of nurses' critical thinking skills. This study explored the perceptions of new graduate nurses regarding factors that helped to develop their critical thinking skills throughout their 6-month orientation program in the intensive care unit. A convenient non-probability sample of eight new graduates was selected from a hospital that used the ECCO program. Data were collected with demographic questionnaires and semi-structured interviews. An exploratory qualitative research method with content analysis was used to analyze the data. The study findings showed that new graduate nurses perceived that they developed critical thinking skills that improved throughout the orientation period, although there were some challenges in the ECCO program. This study provides data that could influence the development and implementation of future nursing orientation programs. Copyright 2010, SLACK Incorporated.
ERIC Educational Resources Information Center
Pimentel, Eduarda; Albuquerque, Pedro B.
2013-01-01
The Deese/Roediger-McDermott (DRM) paradigm comprises the study of lists in which words (e.g., bed, pillow, etc.) are all associates of a single nonstudied critical item (e.g., sleep). The probability of falsely recalling or recognising nonstudied critical items is often similar to (or sometimes higher than) the probability of correctly recalling…
Identification of land degradation evidences in an organic farm using probability maps (Croatia)
NASA Astrophysics Data System (ADS)
Pereira, Paulo; Bogunovic, Igor; Estebaranz, Ferran
2017-04-01
Land degradation is a biophysical process with important impacts on society, economy and policy. Areas affected by land degradation do not provide services of the quality and capacity needed to fulfil the needs of the communities that depend on them (Amaya-Romero et al., 2015; Beyene, 2015; Lanckriet et al., 2015). Agricultural activities are one of the main causes of land degradation (Kraaijvanger and Veldkamp, 2015), especially when they decrease soil organic matter (SOM), a crucial element for soil fertility. In temperate areas, the critical level of SOM concentration in agricultural soils is 3.4%. Below this level there is a potential decrease of soil quality (Loveland and Webb, 2003). However, no previous work was carried out in other environments, such as the Mediterranean. The spatial distribution of potentially degraded land is important to identify and map, in order to locate the areas that need restoration (Brevik et al., 2016; Pereira et al., 2017). The aim of this work is to assess the spatial distribution of areas with evidence of land degradation (SOM below 3.4%) using probability maps in an organic farm located in Croatia. In order to find the best method, we compared several probability methods, namely Ordinary Kriging (OK), Simple Kriging (SK), Universal Kriging (UK), Indicator Kriging (IK), Probability Kriging (PK) and Disjunctive Kriging (DK). The study area is located on the Istria peninsula (45°3' N; 14°2' E), with a total area of 182 ha. One hundred eighty-two soil samples (0-30 cm) were collected during July of 2015 and SOM was assessed using the wet combustion procedure. The assessment of the best probability method was carried out using the leave-one-out cross-validation method. The probability method with the lowest Root Mean Squared Error (RMSE) was considered the most accurate. The results showed that the best method to predict the probability of potential land degradation was SK with an RMSE of 0.635, followed by DK (RMSE=0.636), UK (RMSE=0.660), OK (RMSE=0.660), IK (RMSE=0.722) and PK (RMSE=1.661). According to the most accurate method, the majority of the study area has a high probability of being degraded. Measures are needed to restore this area. References Amaya-Romero, M., Abd-Elmabod, S., Munoz-Rojas, M., Castellano, G., Ceacero, C., Alvarez, S., Mendez, M., De la Rosa, D. (2015) Evaluating soil threats under climate change scenarios in the Andalusia region, Southern Spain. Land Degradation and Development, 26, 441-449. Beyene, F. (2015) Incentives and challenges in community based rangeland management: Evidence from Eastern Ethiopia. Land Degradation and Development, 26, 502-509. Brevik, E., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Baumgarten, A., Jordán, A. (2016) Historical perspectives and future needs in soil mapping, classification and pedological modelling, Geoderma, 264, Part B, 256-274. Kraaijvanger, R., Veldkamp, T. (2015) Grain productivity, fertilizer response and nutrient balance of farming systems in Tigray, Ethiopia: A multi-perspective view in relation to soil fertility degradation. Land Degradation and Development, 26, 701-710. Lanckriet, S., Derudder, B., Naudts, J., Bauer, H., Deckers, J., Haile, M., Nyssen, J. (2015) A political ecology perspective of land degradation in the North Ethiopian Highlands. Land Degradation and Development, 26, 521-530. Loveland, P., Webb, J. (2003) Is there a critical level of organic matter in the agricultural soils of temperate regions: a review. Soil & Tillage Research, 70, 1-18.
Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B., Smetanova, A., Depellegrin, D., Misiune, I., Novara, A., Cerda, A. Soil mapping and process modelling for sustainable land management. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006
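The method-selection step described above, ranking interpolation approaches by leave-one-out cross-validation RMSE and then mapping the probability that SOM falls below the 3.4% threshold, can be sketched generically. The example below uses scikit-learn's GaussianProcessRegressor as a stand-in for kriging (the study compared geostatistical kriging variants) and entirely synthetic SOM samples.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import LeaveOneOut

# Hypothetical SOM samples: coordinates (m) and SOM content (%), for illustration only.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 1000, size=(60, 2))
som = 3.0 + 0.002 * coords[:, 0] + rng.normal(0, 0.4, 60)

def loo_rmse(make_model, X, y):
    """Leave-one-out cross-validation RMSE, the criterion used to rank methods."""
    errs = []
    for train, test in LeaveOneOut().split(X):
        m = make_model().fit(X[train], y[train])
        errs.append(float((y[test] - m.predict(X[test]))[0]))
    return np.sqrt(np.mean(np.square(errs)))

gp = lambda: GaussianProcessRegressor(kernel=RBF(200.0) + WhiteKernel(0.1), normalize_y=True)
print(f"LOO RMSE = {loo_rmse(gp, coords, som):.3f}")

# Probability map of degradation (SOM < 3.4%) from the fitted predictive distribution.
fit = gp().fit(coords, som)
mu, sd = fit.predict(coords, return_std=True)
p_degraded = norm.cdf((3.4 - mu) / np.maximum(sd, 1e-9))
print(p_degraded[:5])
```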
Statistics of the work done on a quantum critical system by quenching a control parameter.
Silva, Alessandro
2008-09-19
We study the statistics of the work done on a quantum critical system by quenching a control parameter in the Hamiltonian. We elucidate the relation between the probability distribution of the work and the Loschmidt echo, a quantity usually emerging in the context of dephasing. Using this connection we characterize the statistics of the work done on a quantum Ising chain by quenching the transverse field locally or globally. We show that for local quenches starting at criticality the probability distribution of the work displays an interesting edge singularity.
Critical behavior of the contact process on small-world networks
NASA Astrophysics Data System (ADS)
Ferreira, Ronan S.; Ferreira, Silvio C.
2013-11-01
We investigate the role of clustering on the critical behavior of the contact process (CP) on small-world networks using the Watts-Strogatz (WS) network model with an edge rewiring probability p. The critical point is well predicted by a homogeneous cluster approximation in the limit of vanishing clustering (p → 1). The critical exponents and dimensionless moment ratios of the CP are in agreement with those predicted by the mean-field theory for any p > 0. This independence from network clustering shows that the small-world property is a sufficient condition for the mean-field theory to correctly predict the universality of the model. Moreover, we compare the CP dynamics on WS networks with rewiring probability p = 1 and on random regular networks and show that the weak heterogeneity of the WS network slightly changes the critical point but does not alter other critical quantities of the model.
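A minimal way to explore this kind of question numerically is to run the contact process on a Watts-Strogatz graph and scan the infection rate for the onset of a surviving active phase. The sketch below uses a simplified asynchronous update and a small network, so it illustrates the setup rather than providing a calibrated estimate of the critical point; the network size, degree, rates, and step count are arbitrary choices.

```python
import numpy as np
import networkx as nx

def contact_process(G, lam, steps=200000, rng=None):
    """Simplified contact process on graph G: an infected node heals at rate 1
    or passes the infection to one randomly chosen neighbor at rate lam.
    Returns the final density of infected nodes."""
    rng = np.random.default_rng() if rng is None else rng
    nodes = list(G.nodes)
    infected = set(rng.choice(nodes, size=len(nodes) // 2, replace=False).tolist())
    for _ in range(steps):
        if not infected:
            return 0.0
        i = rng.choice(list(infected))
        if rng.random() < 1.0 / (1.0 + lam):       # healing event
            infected.discard(i)
        else:                                      # infection attempt on a random neighbor
            infected.add(rng.choice(list(G.neighbors(i))))
    return len(infected) / len(nodes)

# Watts-Strogatz network with rewiring probability p; scan the infection rate lambda.
G = nx.watts_strogatz_graph(n=2000, k=4, p=0.5, seed=0)
for lam in (0.8, 1.0, 1.2, 1.4):
    print(lam, contact_process(G, lam, rng=np.random.default_rng(1)))
```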
NASA Astrophysics Data System (ADS)
Klügel, J.
2006-12-01
Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures such as dams, transport infrastructures, chemical plants and nuclear power plants. For many applications beyond the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis. A new method for probabilistic scenario-based seismic risk analysis has been developed, based on a probabilistic extension of proven deterministic methods such as the MCE methodology. The input data required for the method are entirely based on the information necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantage of the method in comparison with traditional PSHA consists in (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion for formulating different risk goals. The method was applied to evaluate the risk of production interruption losses of a nuclear power plant during its residual lifetime.
Applications of conformal field theory to problems in 2D percolation
NASA Astrophysics Data System (ADS)
Simmons, Jacob Joseph Harris
This thesis explores critical two-dimensional percolation in bounded regions in the continuum limit. The main method which we employ is conformal field theory (CFT). Our specific results follow from the null-vector structure of the c = 0 CFT that applies to critical two-dimensional percolation. We also make use of the duality symmetry obeyed at the percolation point, and the fact that percolation may be understood as the q-state Potts model in the limit q → 1. Our first results describe the correlations between points in the bulk and boundary intervals or points, i.e. the probability that the various points or intervals are in the same percolation cluster. These quantities correspond to order-parameter profiles under the given conditions, or cluster connection probabilities. We consider two specific cases: an anchoring interval, and two anchoring points. We derive results for these and related geometries using the CFT null-vectors for the corresponding boundary condition changing (bcc) operators. In addition, we exhibit several exact relationships between these probabilities. These relations between the various bulk-boundary connection probabilities involve parameters of the CFT called operator product expansion (OPE) coefficients. We then compute several of these OPE coefficients, including those arising in our new probability relations. Beginning with the familiar CFT operator φ1,2, which corresponds to a free-fixed spin boundary change in the q-state Potts model, we then develop physical interpretations of the bcc operators. We argue that, when properly normalized, higher-order bcc operators correspond to successive fusions of multiple φ1,2, operators. Finally, by identifying the derivative of φ1,2 with the operator φ1,4, we derive several new quantities called first crossing densities. These new results are then combined and integrated to obtain the three previously known crossing quantities in a rectangle: the probability of a horizontal crossing cluster, the probability of a cluster crossing both horizontally and vertically, and the expected number of horizontal crossing clusters. These three results were known to be solutions to a certain fifth-order differential equation, but until now no physically meaningful explanation had appeared. This differential equation arises naturally in our derivation.
NASA Technical Reports Server (NTRS)
Nemeth, Noel
2013-01-01
Models that predict the failure probability of monolithic glass and ceramic components under multiaxial loading have been developed by authors such as Batdorf, Evans, and Matsuo. These "unit-sphere" failure models assume that the strength-controlling flaws are randomly oriented, noninteracting planar microcracks of specified geometry but of variable size. This report develops a formulation to describe the probability density distribution of the orientation of critical strength-controlling flaws that results from an applied load. This distribution is a function of the multiaxial stress state, the shear sensitivity of the flaws, the Weibull modulus, and the strength anisotropy. Examples are provided showing the predicted response on the unit sphere for various stress states for isotropic and transversely isotropic (anisotropic) materials, including the most probable orientation of critical flaws for offset uniaxial loads with strength anisotropy. The author anticipates that this information could be used to determine anisotropic stiffness degradation or anisotropic damage evolution for individual brittle (or quasi-brittle) composite material constituents within finite element or micromechanics-based software.
A method for mapping flood hazard along roads.
Kalantari, Zahra; Nickman, Alireza; Lyon, Steve W; Olofsson, Bo; Folkeson, Lennart
2014-01-15
A method was developed for estimating and mapping flood hazard probability along roads using road and catchment characteristics as physical catchment descriptors (PCDs). The method uses a Geographic Information System (GIS) to derive candidate PCDs and then identifies those PCDs that significantly predict road flooding using a statistical modelling approach. The method thus allows flood hazards to be estimated and also provides insights into the relative roles of landscape characteristics in determining road-related flood hazards. The method was applied to an area in western Sweden where severe road flooding had occurred during an intense rain event, as a case study to demonstrate its utility. The results suggest that for this case study area three categories of PCDs are useful for prediction of critical spots prone to flooding along roads: i) topography, ii) soil type, and iii) land use. The main drivers among the PCDs considered were a topographical wetness index, road density in the catchment, soil properties in the catchment (mainly the amount of gravel substrate) and local channel slope at the site of a road-stream intersection. These can be proposed as strong indicators for predicting the flood probability in ungauged river basins in this region, but some care is needed in generalising the case study results because other potential factors are also likely to influence the flood hazard probability. Overall, the method proposed represents a straightforward and consistent way to estimate flooding hazards to inform both the planning of future roadways and the maintenance of existing roadways. Copyright © 2013 Elsevier Ltd. All rights reserved.
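The statistical step in the workflow above, relating candidate PCDs to observed flooding at road-stream intersections, is essentially a binary classification problem. The sketch below fits a logistic regression to synthetic PCD data and flags high-probability sites; the descriptors, coefficients, and threshold are invented for illustration and do not come from the Swedish case study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical physical catchment descriptors (PCDs) for road-stream intersections:
# topographic wetness index, road density, fraction of gravel soils, local channel slope.
rng = np.random.default_rng(7)
X = np.column_stack([
    rng.normal(8, 2, 500),      # wetness index
    rng.gamma(2, 0.5, 500),     # road density (km/km^2)
    rng.uniform(0, 1, 500),     # gravel fraction
    rng.gamma(2, 0.01, 500),    # channel slope (m/m)
])
logit = -9 + 0.8 * X[:, 0] + 1.0 * X[:, 1] + 1.5 * X[:, 2] - 20 * X[:, 3]
flooded = rng.random(500) < 1 / (1 + np.exp(-logit))   # synthetic flooding outcomes

model = LogisticRegression(max_iter=1000).fit(X, flooded)
p_flood = model.predict_proba(X)[:, 1]                 # flood hazard probability per site
critical_spots = np.flatnonzero(p_flood > 0.5)
print(len(critical_spots), "high-hazard road-stream intersections (synthetic data)")
```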
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
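Of the four GDMs compared above, basal area correction is the simplest to state: it converts ring-width series into basal area increments so that the geometric effect of a growing stem radius does not masquerade as a growth trend. The sketch below shows that conversion for a hypothetical ring-width series, assuming circular stems measured from the pith.

```python
import numpy as np

def basal_area_increment(ring_widths_mm):
    """Basal area correction (BAC): convert annual ring widths into basal area
    increments, assuming circular stems and measurements starting at the pith.
    BAI_t = pi * (r_t^2 - r_{t-1}^2), with r_t the cumulative radius in year t."""
    radii = np.cumsum(np.asarray(ring_widths_mm, dtype=float))
    radii = np.concatenate([[0.0], radii])
    return np.pi * (radii[1:] ** 2 - radii[:-1] ** 2)   # mm^2 per year

# Constant ring widths translate into increasing basal area increments, which is
# why BAC removes the purely geometric part of an apparent growth decline.
print(basal_area_increment([2.0] * 5))
```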
Emergence of cooperation with self-organized criticality
NASA Astrophysics Data System (ADS)
Park, Sangmin; Jeong, Hyeong-Chai
2012-02-01
Cooperation and self-organized criticality are two main keywords in current studies of evolution. We propose a generalized Bak-Sneppen model and provide a natural mechanism which accounts for both phenomena simultaneously. We use the prisoner's dilemma games to mimic the interactions among the members in the population. Each member is identified by its cooperation probability, and its fitness is given by the payoffs from neighbors. The least fit member with the minimum payoff is replaced by a new member with a random cooperation probability. When the neighbors of the least fit one are also replaced with a non-zero probability, a strong cooperation emerges. The Bak-Sneppen process builds a self-organized structure so that the cooperation can emerge even in the parameter region where a uniform or random population decreases the number of cooperators. The emergence of cooperation is due to the same dynamical correlation that leads to self-organized criticality in replacement activities.
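The replacement rule described above is easy to prototype. The sketch below implements a toy version of a generalized Bak-Sneppen update on a ring, with each member's fitness given by expected prisoner's dilemma payoffs against its two neighbors; the payoff values, system size, and replacement scheme are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np

def bak_sneppen_pd(n=200, b=1.5, p_neighbor=1.0, steps=20000, rng=None):
    """Generalized Bak-Sneppen dynamics with prisoner's dilemma payoffs on a ring.
    Each agent i has a cooperation probability c[i]; its fitness is its expected
    payoff against its two neighbors (assumed payoffs T=b, R=1, P=S=0).
    The least-fit agent is replaced by a random newcomer, and each of its
    neighbors is also replaced with probability p_neighbor.
    Returns the mean cooperation probability at the end of the run."""
    rng = np.random.default_rng() if rng is None else rng
    c = rng.random(n)
    for _ in range(steps):
        left, right = np.roll(c, 1), np.roll(c, -1)
        payoff = c * (left + right) + (1 - c) * (left + right) * b   # R=1, T=b
        worst = int(np.argmin(payoff))
        c[worst] = rng.random()
        for nb in ((worst - 1) % n, (worst + 1) % n):
            if rng.random() < p_neighbor:
                c[nb] = rng.random()
    return float(c.mean())

# Compare mean cooperation with and without neighbor replacement (illustrative only).
print(bak_sneppen_pd(p_neighbor=0.0), bak_sneppen_pd(p_neighbor=1.0))
```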
Anomaly-based intrusion detection for SCADA systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D.; Usynin, A.; Hines, J. W.
2006-07-01
Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security attention, and there are concerns that they could be the target of international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimate that malicious online actions may cause $75 billion in damages as of 2007. One of the interesting countermeasures for enhancing information system security is intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT), applied to a simulated SCADA system. The results show that these methods can be generally used to detect a variety of common attacks. (authors)
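The SPRT monitoring step referenced above can be illustrated compactly: residuals between measured and AAKR-predicted signals are accumulated into a log-likelihood ratio that raises an alarm when it crosses a threshold. The sketch below implements a basic Gaussian mean-shift SPRT on a synthetic residual stream; the noise level, drift size, and error rates are placeholder values.

```python
import numpy as np

def sprt(residuals, sigma, shift, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on model residuals (observation minus
    AAKR-style prediction): H0 residual ~ N(0, sigma^2), H1 residual ~ N(shift, sigma^2).
    Returns the sample index at which an anomaly alarm is raised, or None."""
    upper = np.log((1 - beta) / alpha)       # decide for H1 (alarm)
    lower = np.log(beta / (1 - alpha))       # decide for H0 (reset)
    llr = 0.0
    for t, r in enumerate(residuals):
        llr += (shift / sigma**2) * (r - shift / 2.0)   # Gaussian log-likelihood ratio increment
        if llr >= upper:
            return t
        if llr <= lower:
            llr = 0.0                        # restart the test after accepting H0
    return None

# Synthetic SCADA-like residual stream: nominal noise, then a small sensor drift at t = 300.
rng = np.random.default_rng(0)
res = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)])
print("alarm at sample", sprt(res, sigma=1.0, shift=1.5))
```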
Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...
2014-11-01
This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede's (1991) cultural factors and Davis' (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
A short course on measure and probability theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre
2004-02-01
This brief Introduction to Measure Theory, and its applications to Probabilities, corresponds to the lecture notes of a seminar series given at Sandia National Laboratories in Livermore, during the spring of 2003. The goal of these seminars was to provide a minimal background to Computational Combustion scientists interested in using more advanced stochastic concepts and methods, e.g., in the context of uncertainty quantification. Indeed, most mechanical engineering curricula do not provide students with formal training in the field of probability, and even less in measure theory. However, stochastic methods have been used more and more extensively in the past decade, and have provided more successful computational tools. Scientists at the Combustion Research Facility of Sandia National Laboratories have been using computational stochastic methods for years. Addressing more and more complex applications, and facing difficult problems that arose in applications, showed the need for a better understanding of theoretical foundations. This is why the seminar series was launched, and these notes summarize most of the concepts which have been discussed. The goal of the seminars was to bring a group of mechanical engineers and computational combustion scientists to a full understanding of N. Wiener's polynomial chaos theory. Therefore, these lecture notes are built along those lines, and are not intended to be exhaustive. In particular, the author welcomes any comments or criticisms.
Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie
2017-01-01
Purpose The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived Modified Barium Swallow Impairment Profile (MBSImP™©; Martin-Harris et al., 2008) Overall Impression (OI; worst) scores using generalized estimating equations. The range of probabilities across swallowing tasks was calculated to discern which swallowing task(s) yielded the worst performance. Results Large-volume, thin-liquid swallowing tasks had the highest probabilities of yielding the OI scores for oral containment and airway protection. The cookie swallowing task was most likely to yield OI scores for oral clearance. Several swallowing tasks had nearly equal probabilities (≤ .20) of yielding the OI score. Conclusions The MBSS must represent impairment while requiring boluses that challenge the swallowing system. No single swallowing task had a sufficiently high probability to yield the identification of the worst score for each physiological component. Omission of swallowing tasks will likely fail to capture the most severe impairment for physiological components critical for safe and efficient swallowing. Results provide further support for standardized, well-tested protocols during MBSS. PMID:28614846
Decision theory for computing variable and value ordering decisions for scheduling problems
NASA Technical Reports Server (NTRS)
Linden, Theodore A.
1993-01-01
Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
Rahman, Ziyaur; Xu, Xiaoming; Katragadda, Usha; Krishnaiah, Yellela S R; Yu, Lawrence; Khan, Mansoor A
2014-03-03
Restasis is an ophthalmic cyclosporine emulsion used for the treatment of dry eye syndrome. There are no generic products for this product, probably because of the limitations on establishing in vivo bioequivalence methods and lack of alternative in vitro bioequivalence testing methods. The present investigation was carried out to understand and identify the appropriate in vitro methods that can discriminate the effect of formulation and process variables on critical quality attributes (CQA) of cyclosporine microemulsion formulations having the same qualitative (Q1) and quantitative (Q2) composition as that of Restasis. Quality by design (QbD) approach was used to understand the effect of formulation and process variables on critical quality attributes (CQA) of cyclosporine microemulsion. The formulation variables chosen were mixing order method, phase volume ratio, and pH adjustment method, while the process variables were temperature of primary and raw emulsion formation, microfluidizer pressure, and number of pressure cycles. The responses selected were particle size, turbidity, zeta potential, viscosity, osmolality, surface tension, contact angle, pH, and drug diffusion. The selected independent variables showed statistically significant (p < 0.05) effect on droplet size, zeta potential, viscosity, turbidity, and osmolality. However, the surface tension, contact angle, pH, and drug diffusion were not significantly affected by independent variables. In summary, in vitro methods can detect formulation and manufacturing changes and would thus be important for quality control or sameness of cyclosporine ophthalmic products.
NASA Astrophysics Data System (ADS)
Smith, Leonard A.
2010-05-01
This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given the fact we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether of not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events. We ask whether or not probabilities based on imperfect models can be expected to yield probabilistic odds which are sustainable. Evidence is provided that suggest this is not the case. Even with very good models (good in an Root-Mean-Square sense), the risk of ruin of probabilistic odds is significantly higher than might be expected. Methods for constructing model-based non-probabilistic odds which are sustainable are discussed. The aim here is to be relevant to real world decision support, and so unrealistic assumptions of equal knowledge, equal compute power, or equal access to information are to be avoided. Finally, the use of non-probabilistic odds as a method for communicating deep uncertainty (uncertainty in a probability forecast itself) is discussed in the context of other methods, such as stating one's subjective probability that the models will prove inadequate in each particular instance (that is, the Probability of a "Big Surprise").
Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2013-01-01
The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
Advances in local anesthesia in dentistry.
Ogle, Orrett E; Mahjoubi, Ghazal
2011-07-01
Local pain management is the most critical aspect of patient care in dentistry. The improvements in agents and techniques for local anesthesia are probably the most significant advances that have occurred in dental science. This article provides an update on the most recently introduced local anesthetic agents along with new technologies used to deliver local anesthetics. Safety devices are also discussed, along with an innovative method for reducing the annoying numbness of the lip and tongue following local anesthesia. Copyright © 2011 Elsevier Inc. All rights reserved.
Katsios, Christina; Donadini, Marco; Meade, Maureen; Mehta, Sangeeta; Hall, Richard; Granton, John; Kutsogiannis, Jim; Dodek, Peter; Heels-Ansdell, Diane; McIntyre, Lauralynn; Vlahakis, Nikolas; Muscedere, John; Friedrich, Jan; Fowler, Robert; Skrobik, Yoanna; Albert, Martin; Cox, Michael; Klinger, James; Nates, Joseph; Bersten, Andrew; Doig, Chip; Zytaruk, Nicole; Crowther, Mark; Cook, Deborah J
2014-01-01
Prediction scores for pretest probability of pulmonary embolism (PE) validated in outpatient settings are occasionally used in the intensive care unit (ICU). To evaluate the correlation of Geneva and Wells scores with adjudicated categories of PE in ICU patients. In a randomized trial of thromboprophylaxis, patients with suspected PE were adjudicated as possible, probable or definite PE. Data were then retrospectively abstracted for the Geneva Diagnostic PE score, Wells, Modified Wells and Simplified Wells Diagnostic scores. The chance-corrected agreement between adjudicated categories and each score was calculated. ANOVA was used to compare values across the three adjudicated PE categories. Among 70 patients with suspected PE, agreement was poor between adjudicated categories and Geneva pretest probabilities (kappa=0.01 [95% CI -0.0643 to 0.0941]) or Wells pretest probabilities (kappa=-0.03 [95% CI -0.1462 to 0.0914]). Among four possible, 16 probable and 50 definite PEs, there were no significant differences in Geneva scores (possible = 4.0, probable = 4.7, definite = 4.5; P=0.90), Wells scores (possible = 2.8, probable = 4.9, definite = 4.1; P=0.37), Modified Wells (possible = 2.0, probable = 3.4, definite = 2.9; P=0.34) or Simplified Wells (possible = 1.8, probable = 2.8, definite = 2.4; P=0.30). Pretest probability scores developed outside the ICU do not correlate with adjudicated PE categories in critically ill patients. Research is needed to develop prediction scores for this population.
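As a rough illustration of the chance-corrected agreement statistic reported above, the sketch below computes Cohen's kappa for two hypothetical sets of category assignments; the patient data are invented and are not taken from the study.

```python
# Hedged sketch (invented ratings, not the study data): Cohen's kappa between
# adjudicated PE categories and score-based pretest categories.

import numpy as np

def cohens_kappa(cat_a, cat_b):
    """Chance-corrected agreement between two categorical ratings."""
    labels = sorted(set(cat_a) | set(cat_b))
    idx = {lab: i for i, lab in enumerate(labels)}
    table = np.zeros((len(labels), len(labels)))
    for a, b in zip(cat_a, cat_b):
        table[idx[a], idx[b]] += 1
    n = table.sum()
    p_observed = np.trace(table) / n
    p_expected = (table.sum(axis=1) @ table.sum(axis=0)) / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

adjudicated = ["definite", "probable", "possible", "definite", "probable"]
score_based = ["probable", "probable", "definite", "possible", "probable"]
print(round(cohens_kappa(adjudicated, score_based), 3))
```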
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Shih-Jung
Dynamic strength of the High Flux Isotope Reactor (HFIR) vessel to resist hypothetical accidents is analyzed by using the method of fracture mechanics. Vessel critical stresses are estimated by applying dynamic pressure pulses of a range of magnitudes and pulse-durations. The pulses versus time functions are assumed to be step functions. The probability of vessel fracture is then calculated by assuming a distribution of possible surface cracks of different crack depths. The probability distribution function for the crack depths is based on the form that is recommended by the Marshall report. The toughness of the vessel steel used in the analysis is based on the projected and embrittled value after 10 effective full power years from 1986. From the study made by Cheverton, Merkle and Nanstad, the weakest point on the vessel for fracture evaluation is known to be located within the region surrounding the tangential beam tube HB3. The increase in the probability of fracture is obtained as an extension of the result from that report for the regular operating condition to include conditions of higher dynamic pressures due to accident loadings. The increase in the probability of vessel fracture is plotted for a range of hoop stresses to indicate the vessel strength against hypothetical accident conditions.
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure that a vehicle can transit the critical points on the road. Critical points are road intersections, bridges, etc. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity of existing bridges on highways and roads using advanced methods of reliability analysis based on Monte Carlo-type simulation techniques in combination with nonlinear finite element analysis. The safety index is considered the main criterion of the reliability level of existing structures and is described in current structural design standards, e.g. ISO and the Eurocodes. As an example, the load-bearing capacity of a single-span slab bridge made of precast prestressed concrete girders, currently 60 years old, is determined for the ultimate limit state and the serviceability limit state. The structure's design load capacity was estimated by a fully probabilistic nonlinear finite element analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with load-bearing capacity levels estimated by deterministic methods for a critical section of the most heavily loaded girders.
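The sketch below illustrates only the Latin Hypercube Sampling step, with a hypothetical girder bending-resistance formula standing in for the nonlinear FEM model; the parameter distributions and the assumed load effect are illustrative assumptions, not values from the study.

```python
# Hedged sketch: LHS of random inputs to a toy resistance model, giving an
# estimated capacity distribution and failure probability against an assumed load.

import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, n_dims):
    """One uniform point per stratum; strata independently permuted per dimension."""
    strata = np.column_stack([rng.permutation(n_samples) for _ in range(n_dims)])
    return (strata + rng.random((n_samples, n_dims))) / n_samples

def toy_resistance(f_y, a_s, depth, f_c=38.0e3, width=0.5):
    """Hypothetical bending resistance [kNm] standing in for the nonlinear FEM model."""
    return a_s * f_y * (depth - a_s * f_y / (2.0 * f_c * width))

n = 5000
u = latin_hypercube(n, 3)
f_y = lognorm.ppf(u[:, 0], s=0.06, scale=500.0e3)   # steel strength [kPa], assumed
a_s = norm.ppf(u[:, 1], loc=3.0e-3, scale=1.0e-4)   # tendon area [m^2], assumed
depth = norm.ppf(u[:, 2], loc=0.55, scale=0.01)     # effective depth [m], assumed

resistance = toy_resistance(f_y, a_s, depth)
load_effect = 650.0                                  # assumed design moment [kNm]
print(f"mean R = {resistance.mean():.0f} kNm, "
      f"P(R < S) ~ {np.mean(resistance < load_effect):.4f}")
```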
A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.
Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S
2018-06-06
Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and recruitment probability density function (PDF) but not derived these rigorously nor evaluated these experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with estimated PDFs. Using these critical straightness parameters the new method for estimating recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries respectively. This method may be used for accurate estimation of fibril recruitment PDF in mechanobiological simulation where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
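A minimal numerical sketch of the central relation, on synthetic data rather than the paper's measurements: if each fibril bears load linearly once recruited, the tissue stress is P(λ) = k ∫ (λ − s) p(s) ds, so the recruitment PDF p is recovered (up to the factor k, assumed known here) from the second stretch derivative of the first Piola-Kirchhoff stress.

```python
# Hedged sketch (synthetic recruitment distribution, assumed linear fibril
# response): recruitment PDF estimated as the second stretch derivative of stress.

import numpy as np
from scipy.stats import norm
from scipy.integrate import cumulative_trapezoid

stretch = np.linspace(1.0, 1.1, 501)

# Synthetic ground truth: recruitment stretches ~ Normal(1.05, 0.01).
mu, sigma, k = 1.05, 0.01, 1.0
pdf_true = norm.pdf(stretch, mu, sigma)

# First Piola-Kirchhoff stress: P'(lam) = k * CDF(lam), so P = k * int CDF dlam.
stress = k * cumulative_trapezoid(norm.cdf(stretch, mu, sigma), stretch, initial=0.0)

# Estimate the recruitment PDF as the second stretch derivative of the stress.
pdf_est = np.gradient(np.gradient(stress, stretch), stretch) / k
err = float(np.max(np.abs(pdf_est[5:-5] - pdf_true[5:-5])))
print(err)   # small compared with the peak density (~40)
```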
Statistical methods for identifying and bounding a UXO target area or minefield
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.
2003-09-18
The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units, have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
Regional rainfall thresholds for landslide occurrence using a centenary database
NASA Astrophysics Data System (ADS)
Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia
2018-04-01
This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database associated with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes the rainfall return period analysis that was used to identify the critical rainfall combination (cumulated rainfall duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated and the rainfall thresholds are assessed and calibrated using the receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to calculate the rainfall thresholds in the study area; however, these thresholds may be used with acceptable confidence up to 50 km from the rain gauge. The rainfall thresholds obtained using linear and potential regression perform well in ROC metrics. However, the intermediate thresholds based on the probability of landslide events established in the zone between the lower-limit threshold and the upper-limit threshold are much more informative as they indicate the probability of landslide event occurrence given rainfall exceeding the threshold. This information can be easily included in landslide early warning systems, especially when combined with the probability of rainfall above each threshold.
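A rough sketch of the ROC-based calibration step follows, using synthetic daily rainfall and landslide flags rather than the Lisbon data set; here the threshold is chosen by maximising the true skill statistic, one common ROC-derived score, which is an assumption rather than the paper's exact criterion.

```python
# Hedged sketch (synthetic data, assumed selection criterion): scoring candidate
# rainfall thresholds with ROC-style metrics.

import numpy as np

rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=15.0, size=2000)        # daily rainfall [mm]
p_event = 1.0 / (1.0 + np.exp(-(rain - 80.0) / 10.0))     # synthetic event model
landslide = rng.random(2000) < p_event                    # landslide-day flags

def roc_point(threshold):
    """True- and false-positive rates of the rule 'alarm when rainfall > threshold'."""
    alarm = rain > threshold
    tpr = (alarm & landslide).sum() / max(landslide.sum(), 1)
    fpr = (alarm & ~landslide).sum() / max((~landslide).sum(), 1)
    return tpr, fpr

candidates = np.linspace(10.0, 150.0, 141)
tss = [roc_point(t)[0] - roc_point(t)[1] for t in candidates]   # true skill statistic
best = candidates[int(np.argmax(tss))]
print(f"best threshold ~ {best:.0f} mm (TSS = {max(tss):.2f})")
```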
2013-01-01
Background Tools to support clinical or patient decision-making in the treatment/management of a health condition are used in a range of clinical settings for numerous preference-sensitive healthcare decisions. Their impact in clinical practice is largely dependent on their quality across a range of domains. We critically analysed currently available tools to support decision making or patient understanding in the treatment of acute ischaemic stroke with intravenous thrombolysis, as an exemplar to provide clinicians/researchers with practical guidance on development, evaluation and implementation of such tools for other preference-sensitive treatment options/decisions in different clinical contexts. Methods Tools were identified from bibliographic databases, Internet searches and a survey of UK and North American stroke networks. Two reviewers critically analysed tools to establish: information on benefits/risks of thrombolysis included in tools, and the methods used to convey probabilistic information (verbal descriptors, numerical and graphical); adherence to guidance on presenting outcome probabilities (IPDASi probabilities items) and information content (Picker Institute Checklist); readability (Fog Index); and the extent that tools had comprehensive development processes. Results Nine tools of 26 identified included information on a full range of benefits/risks of thrombolysis. Verbal descriptors, frequencies and percentages were used to convey probabilistic information in 20, 19 and 18 tools respectively, whilst nine used graphical methods. Shortcomings in presentation of outcome probabilities (e.g. omitting outcomes without treatment) were identified. Patient information tools had an aggregate median Fog index score of 10. None of the tools had comprehensive development processes. Conclusions Tools to support decision making or patient understanding in the treatment of acute stroke with thrombolysis have been sub-optimally developed. Development of tools should utilise mixed methods and strategies to meaningfully involve clinicians, patients and their relatives in an iterative design process; include evidence-based methods to augment interpretability of textual and probabilistic information (e.g. graphical displays showing natural frequencies) on the full range of outcome states associated with available options; and address patients with different levels of health literacy. Implementation of tools will be enhanced when mechanisms are in place to periodically assess the relevance of tools and where necessary, update the mode of delivery, form and information content. PMID:23777368
Criticality and Phase Transition in Stock-Price Fluctuations
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2006-02-01
We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDF) of the log returns. The temporal dependence of fat tails in the PDF of a ten-minute log return shows a gradual, systematic increase in the probability of the appearance of large increments on approaching Black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the Black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards the scale invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time-varying, phase-transition-like and scale-invariant, critical-like behavior.
NASA Technical Reports Server (NTRS)
Hudson, C. M.; Lewis, P. E.
1979-01-01
A round-robin study was conducted which evaluated and compared different methods currently in practice for predicting crack growth in surface-cracked specimens. This report describes the prediction methods used by the Fracture Mechanics Engineering Section, at NASA-Langley Research Center, and presents a comparison between predicted crack growth and crack growth observed in laboratory experiments. For tests at higher stress levels, the correlation between predicted and experimentally determined crack growth was generally quite good. For tests at lower stress levels, the predicted number of cycles to reach a given crack length was consistently higher than the experimentally determined number of cycles. This consistent overestimation of the number of cycles could have resulted from a lack of definition of crack-growth data at low values of the stress intensity range. Generally, the predicted critical flaw sizes were smaller than the experimentally determined critical flaw sizes. This underestimation probably resulted from using plane-strain fracture toughness values to predict failure rather than the more appropriate values based on maximum load.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
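The following toy sketch (a one-dimensional random-walk "shield", an assumed example rather than one of the dissertation's models) shows the core importance-sampling idea described above: sample from a tilted distribution and multiply by the likelihood ratio so the rare-event estimator stays unbiased.

```python
# Hedged sketch (toy model): importance sampling for a rare penetration probability.
# A particle takes N standard-normal steps; it "penetrates" if the total exceeds
# a deep threshold. Exact answer: 1 - Phi(25/sqrt(30)) ~ 2.5e-6.

import numpy as np

rng = np.random.default_rng(0)
n_steps, threshold, n_runs = 30, 25.0, 100_000

def estimate(bias):
    """Estimate P(sum of steps > threshold), sampling steps with mean shift `bias`."""
    steps = rng.normal(loc=bias, scale=1.0, size=(n_runs, n_steps))
    totals = steps.sum(axis=1)
    # Likelihood ratio of N(0,1)^N against N(bias,1)^N evaluated at the sample.
    log_lr = -bias * totals + 0.5 * n_steps * bias**2
    weights = np.exp(log_lr) * (totals > threshold)
    return weights.mean(), weights.std() / np.sqrt(n_runs)

naive = estimate(bias=0.0)                      # plain Monte Carlo: mostly zeros
tilted = estimate(bias=threshold / n_steps)     # exponentially tilted sampling
print(naive)    # unusable: the event is too rare to be observed
print(tilted)   # close to the exact value, with a small standard error
```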
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) ∈ {0,1} based on the second-moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law, and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition with the critical exponents β = 1 and ν_∥ = 2.
Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education.
Masel, J; Humphrey, P T; Blackburn, B; Levine, J A
2015-01-01
Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students' intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes' theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. © 2015 J. Masel et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Shao, Jingyuan; Cao, Wen; Qu, Haibin; Pan, Jianyang; Gong, Xingchu
2018-01-01
The aim of this study was to present a novel analytical quality by design (AQbD) approach for developing an HPLC method to analyze herbal extracts. In this approach, critical method attributes (CMAs) and critical method parameters (CMPs) of the analytical method were determined using the same data collected from screening experiments. The HPLC-ELSD method for separation and quantification of sugars in Codonopsis Radix extract (CRE) samples and Astragali Radix extract (ARE) samples was developed as an example method with a novel AQbD approach. Potential CMAs and potential CMPs were found with Analytical Target Profile. After the screening experiments, the retention time of the D-glucose peak of CRE samples, the signal-to-noise ratio of the D-glucose peak of CRE samples, and retention time of the sucrose peak in ARE samples were considered CMAs. The initial and final composition of the mobile phase, flow rate, and column temperature were found to be CMPs using a standard partial regression coefficient method. The probability-based design space was calculated using a Monte-Carlo simulation method and verified by experiments. The optimized method was validated to be accurate and precise, and then it was applied in the analysis of CRE and ARE samples. The present AQbD approach is efficient and suitable for analysis objects with complex compositions.
Deep brain stimulation abolishes slowing of reactions to unlikely stimuli.
Antoniades, Chrystalina A; Bogacz, Rafal; Kennard, Christopher; FitzGerald, James J; Aziz, Tipu; Green, Alexander L
2014-08-13
The cortico-basal-ganglia circuit plays a critical role in decision making on the basis of probabilistic information. Computational models have suggested how this circuit could compute the probabilities of actions being appropriate according to Bayes' theorem. These models predict that the subthalamic nucleus (STN) provides feedback that normalizes the neural representation of probabilities, such that if the probability of one action increases, the probabilities of all other available actions decrease. Here we report the results of an experiment testing a prediction of this theory that disrupting information processing in the STN with deep brain stimulation should abolish the normalization of the neural representation of probabilities. In our experiment, we asked patients with Parkinson's disease to saccade to a target that could appear in one of two locations, and the probability of the target appearing in each location was periodically changed. When the stimulator was switched off, the target probability affected the reaction times (RT) of patients in a similar way to healthy participants. Specifically, the RTs were shorter for more probable targets and, importantly, they were longer for the unlikely targets. When the stimulator was switched on, the patients were still faster for more probable targets, but critically they did not increase RTs as the target was becoming less likely. This pattern of results is consistent with the prediction of the model that the patients on DBS no longer normalized their neural representation of prior probabilities. We discuss alternative explanations for the data in the context of other published results. Copyright © 2014 the authors 0270-6474/14/3410844-09$15.00/0.
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
Knotting probability of a shaken ball-chain.
Hickford, J; Jones, R; du Pont, S Courrech; Eggers, J
2006-11-01
We study the formation of knots on a macroscopic ball chain, which is shaken on a horizontal plate at 12 times the acceleration of gravity. We find that above a certain critical length, the knotting probability is independent of chain length, while the time to shake out a knot increases rapidly with chain length. The probability of finding a knot after a certain time is the result of the balance of these two processes. In particular, the knotting probability tends to a constant for long chains.
Red List of macrofaunal benthic invertebrates of the Wadden Sea
NASA Astrophysics Data System (ADS)
Petersen, G. H.; Madsen, P. B.; Jensen, K. T.; van Bernem, K. H.; Harms, J.; Heiber, W.; Kröncke, I.; Michaelis, H.; Rachor, E.; Reise, K.; Dekker, R.; Visser, G. J. M.; Wolff, W. J.
1996-10-01
In the Wadden Sea, in total, 93 species of macrofaunal benthic invertebrates are threatened in at least one subregion. Of these, 72 species are threatened in the entire area and are therefore placed on the trilateral Red List. 7 species are (probably) extinct in the entire Wadden Sea area. The status of 9 species of macrofaunal invertebrates is critical, 13 species are (probably) endangered, the status of 25 species is (probably) vulnerable and of 17 species (probably) susceptible.
Research on Application of FMECA in Missile Equipment Maintenance Decision
NASA Astrophysics Data System (ADS)
Kun, Wang
2018-03-01
Failure mode, effects and criticality analysis (FMECA) is a method widely used in engineering. Studying the application of FMECA in military equipment maintenance decision-making can help build a better equipment maintenance support system and increase the operational efficiency of weapons and equipment. Through FMECA of equipment, known and potential failure modes and their causes are identified, and their influence on equipment performance, operational success and personnel safety is determined. Furthermore, according to the combined effect of the severity of a failure and its probability of occurrence, possible measures for prevention and correction are put forward. By replacing or adjusting the corresponding parts, a maintenance strategy is decided for the preventive maintenance of the equipment, which helps improve equipment reliability.
Voga, Gorazd
2008-01-01
The measurement of pulmonary artery occlusion pressure (PAOP) is important for estimation of left ventricular filling pressure and for distinction between cardiac and non-cardiac etiology of pulmonary edema. Clinical assessment of PAOP, which relies on physical signs of pulmonary congestion, is uncertain. Reliable PAOP measurement can be performed by pulmonary artery catheter, but it is possible also by the use of echocardiography. Several Doppler variables show acceptable correlation with PAOP and can be used for its estimation in cardiac and critically ill patients. Noninvasive PAOP estimation should probably become an integral part of transthoracic and transesophageal echocardiographic evaluation in critically ill patients. However, the limitations of both methods should be taken into consideration, and in specific patients invasive PAOP measurement is still unavoidable, if the exact value of PAOP is needed.
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Vicsek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Vicsek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
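A minimal sketch of the idea on a toy problem (a six-spin periodic Ising chain, assumed here purely for illustration): the coupling J is recovered by gradient descent on the Kullback-Leibler divergence between the empirical distribution of configurations and the Boltzmann distribution, which for this exponential family reduces to matching the model's mean energy to the data's.

```python
# Hedged sketch (toy 1D Ising chain, not the paper's lattices or Vicsek data):
# recover J by minimizing KL(p_data || p_J) with gradient descent.

import itertools
import numpy as np

rng = np.random.default_rng(0)
N, J_true = 6, 0.7

# All 2^N configurations of a periodic chain; E(s) = -J * sum_i s_i s_{i+1}.
configs = np.array(list(itertools.product([-1, 1], repeat=N)))
e_unit = -np.sum(configs * np.roll(configs, -1, axis=1), axis=1)   # E / J

def boltzmann(J):
    w = np.exp(-J * e_unit)
    return w / w.sum()

# "Data": configurations sampled from the true model.
p_emp = rng.multinomial(20_000, boltzmann(J_true)) / 20_000

# dKL(p_emp || p_J)/dJ = <E/J>_data - <E/J>_model: the moment-matching gradient.
J = 0.0
for _ in range(2000):
    J -= 0.05 * (e_unit @ p_emp - e_unit @ boltzmann(J))
print(round(J, 3))   # close to J_true = 0.7
```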
NASA Astrophysics Data System (ADS)
Odbert, H. M.; Aspinall, W.; Phillips, J.; Jenkins, S.; Wilson, T. M.; Scourse, E.; Sheldrake, T.; Tucker, P.; Nakeshree, K.; Bernardara, P.; Fish, K.
2015-12-01
Societies rely on critical services such as power, water, transport networks and manufacturing. Infrastructure may be sited to minimise exposure to natural hazards but not all can be avoided. The probability of long-range transport of a volcanic plume to a site is comparable to other external hazards that must be considered to satisfy safety assessments. Recent advances in numerical models of plume dispersion and stochastic modelling provide a formalized and transparent approach to probabilistic assessment of hazard distribution. To understand the risks to critical infrastructure far from volcanic sources, it is necessary to quantify their vulnerability to different hazard stressors. However, infrastructure assets (e.g. power plants and operational facilities) are typically complex systems in themselves, with interdependent components that may differ in susceptibility to hazard impact. Usually, such complexity means that risk either cannot be estimated formally or that unsatisfactory simplifying assumptions are prerequisite to building a tractable risk model. We present a new approach to quantifying risk by bridging expertise of physical hazard modellers and infrastructure engineers. We use a joint expert judgment approach to determine hazard model inputs and constrain associated uncertainties. Model outputs are chosen on the basis of engineering or operational concerns. The procedure facilitates an interface between physical scientists, with expertise in volcanic hazards, and infrastructure engineers, with insight into vulnerability to hazards. The result is a joined-up approach to estimating risk from low-probability hazards to critical infrastructure. We describe our methodology and show preliminary results for vulnerability to volcanic hazards at a typical UK industrial facility. We discuss our findings in the context of developing bespoke assessment of hazards from distant sources in collaboration with key infrastructure stakeholders.
Study design in high-dimensional classification analysis.
Sánchez, Brisa N; Wu, Meihua; Song, Peter X K; Wang, Wen
2016-10-01
Advances in high throughput technology have accelerated the use of hundreds to millions of biomarkers to construct classifiers that partition patients into different clinical conditions. Prior to classifier development in actual studies, a critical need is to determine the sample size required to reach a specified classification precision. We develop a systematic approach for sample size determination in high-dimensional (large p, small n) classification analysis. Our method utilizes the probability of correct classification (PCC) as the optimization objective function and incorporates the higher criticism thresholding procedure for classifier development. Further, we derive the theoretical bound of maximal PCC gain from feature augmentation (e.g. when molecular and clinical predictors are combined in classifier development). Our methods are motivated and illustrated by a study using proteomics markers to classify post-kidney transplantation patients into stable and rejecting classes. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Quantum Dynamical Applications of Salem's Theorem
NASA Astrophysics Data System (ADS)
Damanik, David; Del Rio, Rafael
2009-07-01
We consider the survival probability of a state that evolves according to the Schrödinger dynamics generated by a self-adjoint operator H. We deduce from a classical result of Salem that upper bounds for the Hausdorff dimension of a set supporting the spectral measure associated with the initial state imply lower bounds on a subsequence of time scales for the survival probability. This general phenomenon is illustrated with applications to the Fibonacci operator and the critical almost Mathieu operator. In particular, this gives the first quantitative dynamical bound for the critical almost Mathieu operator.
Effects of Vertex Activity and Self-organized Criticality Behavior on a Weighted Evolving Network
NASA Astrophysics Data System (ADS)
Zhang, Gui-Qing; Yang, Qiu-Ying; Chen, Tian-Lun
2008-08-01
Effects of vertex activity have been analyzed on a weighted evolving network. The network is characterized by the probability distribution of vertex strength, each edge weight and the evolution of the strength of vertices with different vertex activities. The model exhibits self-organized criticality behavior. The probability distribution of avalanche size for different network sizes is also shown. In addition, there is a power-law relation between the size and the duration of an avalanche, and the average avalanche size has been studied for different vertex activities.
Incorporating uncertainty into medical decision making: an approach to unexpected test results.
Bianchi, Matt T; Alexander, Brian M; Cash, Sydney S
2009-01-01
The utility of diagnostic tests derives from the ability to translate the population concepts of sensitivity and specificity into information that will be useful for the individual patient: the predictive value of the result. As the array of available diagnostic testing broadens, there is a temptation to de-emphasize history and physical findings and defer to the objective rigor of technology. However, diagnostic test interpretation is not always straightforward. One significant barrier to routine use of probability-based test interpretation is the uncertainty inherent in pretest probability estimation, the critical first step of Bayesian reasoning. The context in which this uncertainty presents the greatest challenge is when test results oppose clinical judgment. It is this situation when decision support would be most helpful. The authors propose a simple graphical approach that incorporates uncertainty in pretest probability and has specific application to the interpretation of unexpected results. This method quantitatively demonstrates how uncertainty in disease probability may be amplified when test results are unexpected (opposing clinical judgment), even for tests with high sensitivity and specificity. The authors provide a simple nomogram for determining whether an unexpected test result suggests that one should "switch diagnostic sides." This graphical framework overcomes the limitation of pretest probability uncertainty in Bayesian analysis and guides decision making when it is most challenging: interpretation of unexpected test results.
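A small sketch of the underlying Bayesian calculation, with assumed numbers rather than the authors' nomogram: an interval of pretest probabilities (the clinician's uncertain judgment) is pushed through Bayes' theorem for an unexpected negative result from a sensitive and specific test, showing how the uncertainty is amplified.

```python
# Hedged sketch (assumed sensitivity, specificity and pretest range):
# posttest probability after an unexpected negative test result.

def posttest(pretest, sensitivity, specificity, positive=True):
    """Posttest probability of disease from Bayes' theorem."""
    if positive:
        tp, fp = sensitivity * pretest, (1 - specificity) * (1 - pretest)
    else:
        tp, fp = (1 - sensitivity) * pretest, specificity * (1 - pretest)
    return tp / (tp + fp)

# Clinician is fairly sure the disease is present (pretest 0.70-0.90),
# but a sensitive and specific test comes back negative.
sens, spec = 0.95, 0.95
for pretest in (0.70, 0.80, 0.90):
    print(pretest, round(posttest(pretest, sens, spec, positive=False), 3))
# The negative result pulls the probability down, but how far depends strongly
# on the uncertain pretest estimate: roughly 0.11 at 0.70 versus 0.32 at 0.90.
```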
Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-09-20
These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
Cluster geometry and survival probability in systems driven by reaction diffusion dynamics
NASA Astrophysics Data System (ADS)
Windus, Alastair; Jensen, Henrik J.
2008-11-01
We consider a reaction-diffusion model incorporating the reactions A → φ, A → 2A and 2A → 3A. Depending on the relative rates for sexual and asexual reproduction of the quantity A, the model exhibits either a continuous or first-order absorbing phase transition to an extinct state. A tricritical point separates the two phase lines. While we comment on this critical behaviour, the main focus of the paper is on the geometry of the population clusters that form. We observe the different cluster structures that arise at criticality for the three different types of critical behaviour and show that there exists a linear relationship for the survival probability against initial cluster size at the tricritical point only.
Probability of Failure Analysis Standards and Guidelines for Expendable Launch Vehicles
NASA Astrophysics Data System (ADS)
Wilde, Paul D.; Morse, Elisabeth L.; Rosati, Paul; Cather, Corey
2013-09-01
Recognizing the central importance of probability of failure estimates to ensuring public safety for launches, the Federal Aviation Administration (FAA), Office of Commercial Space Transportation (AST), the National Aeronautics and Space Administration (NASA), and U.S. Air Force (USAF), through the Common Standards Working Group (CSWG), developed a guide for conducting valid probability of failure (POF) analyses for expendable launch vehicles (ELV), with an emphasis on POF analysis for new ELVs. A probability of failure analysis for an ELV produces estimates of the likelihood of occurrence of potentially hazardous events, which are critical inputs to launch risk analysis of debris, toxic, or explosive hazards. This guide is intended to document a framework for POF analyses commonly accepted in the US, and should be useful to anyone who performs or evaluates launch risk analyses for new ELVs. The CSWG guidelines provide performance standards and definitions of key terms, and are being revised to address allocation to flight times and vehicle response modes. The POF performance standard allows a launch operator to employ alternative, potentially innovative methodologies so long as the results satisfy the performance standard. Current POF analysis practice at US ranges includes multiple methodologies described in the guidelines as accepted methods, but not necessarily the only methods available to demonstrate compliance with the performance standard. The guidelines include illustrative examples for each POF analysis method, which are intended to illustrate an acceptable level of fidelity for ELV POF analyses used to ensure public safety. The focus is on providing guiding principles rather than "recipe lists." Independent reviews of these guidelines were performed to assess their logic, completeness, accuracy, self-consistency, consistency with risk analysis practices, use of available information, and ease of applicability. The independent reviews confirmed the general validity of the performance standard approach and suggested potential updates to improve the accuracy of each of the example methods, especially to address reliability growth.
Structural Analysis Made 'NESSUSary'
NASA Technical Reports Server (NTRS)
2005-01-01
Everywhere you look, chances are something that was designed and tested by a computer will be in plain view. Computers are now utilized to design and test just about everything imaginable, from automobiles and airplanes to bridges and boats, and elevators and escalators to streets and skyscrapers. Computer-design engineering first emerged in the 1970s, in the automobile and aerospace industries. Since computers were in their infancy, however, architects and engineers at the time were limited to producing only designs similar to hand-drafted drawings. (At the end of the 1970s, a typical computer-aided design system was a 16-bit minicomputer with a price tag of $125,000.) Eventually, computers became more affordable and related software became more sophisticated, offering designers the "bells and whistles" to go beyond the limits of basic drafting and rendering, and venture into more skillful applications. One of the major advancements was the ability to test the objects being designed for the probability of failure. This advancement was especially important for the aerospace industry, where complicated and expensive structures are designed. The ability to perform reliability and risk assessment without using extensive hardware testing is critical to design and certification. In 1984, NASA initiated the Probabilistic Structural Analysis Methods (PSAM) project at Glenn Research Center to develop analysis methods and computer programs for the probabilistic structural analysis of select engine components for current Space Shuttle and future space propulsion systems. NASA envisioned that these methods and computational tools would play a critical role in establishing increased system performance and durability, and assist in structural system qualification and certification. Not only was the PSAM project beneficial to aerospace, it paved the way for a commercial risk-probability tool that is evaluating risks in diverse, down-to-Earth applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Blackmail propagation on small-world networks
NASA Astrophysics Data System (ADS)
Shao, Zhi-Gang; Jian-Ping Sang; Zou, Xian-Wu; Tan, Zhi-Jie; Jin, Zhun-Zhi
2005-06-01
The dynamics of the blackmail propagation model based on small-world networks is investigated. It is found that, for a given transmitting probability λ, the dynamical behavior of blackmail propagation transitions from linear growth to logistic growth as the network randomness p increases. The transition takes place at the critical network randomness p_c = 1/N, where N is the total number of nodes in the network. For a given network randomness p, the dynamical behavior of blackmail propagation transitions from exponential decrease to logistic growth as the transmitting probability λ increases. The transition occurs at the critical transmitting probability λ_c = 1/
ERIC Educational Resources Information Center
Bao, Lei; Redish, Edward F.
2002-01-01
Explains the critical role of probability in making sense of quantum physics and addresses the difficulties science and engineering undergraduates experience in helping students build a model of how to think about probability in physical systems. (Contains 17 references.) (Author/YDS)
Role of epistasis on the fixation probability of a non-mutator in an adapted asexual population.
James, Ananthu
2016-10-21
The mutation rate of a well-adapted population is prone to reduction so as to have a lower mutational load. We aim to understand the role of epistatic interactions between fitness-affecting mutations in this process. Using a multitype branching process, the fixation probability of a single non-mutator emerging in a large asexual mutator population is analytically calculated here. The mutator population undergoes deleterious mutations at a constant rate, but one much higher than that of the non-mutator. We find that antagonistic epistasis lowers the chances of mutation rate reduction, while synergistic epistasis enhances it. Below a critical value of epistasis, the fixation probability behaves non-monotonically with variation in the mutation rate of the background population. Moreover, the variation of this critical value of the epistasis parameter with the strength of the mutator is discussed in the appendix. For synergistic epistasis, when selection is varied, the fixation probability reduces overall, with damped oscillations. Copyright © 2016 Elsevier Ltd. All rights reserved.
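A simplified numerical sketch of the branching-process idea (assuming Poisson-distributed offspring numbers, a standard textbook simplification rather than the paper's multitype process with epistasis): the fixation probability of a lineage with selective advantage s is one minus the extinction probability obtained by iterating the offspring probability generating function.

```python
# Hedged sketch (Poisson offspring assumption): fixation probability of a
# beneficial lineage via the branching-process extinction equation q = G(q).

import math

def fixation_probability(selective_advantage, n_iter=10_000):
    """1 - extinction probability for Poisson(1 + s) offspring numbers."""
    mean_offspring = 1.0 + selective_advantage
    q = 0.0                                          # extinction probability
    for _ in range(n_iter):
        q = math.exp(mean_offspring * (q - 1.0))     # Poisson pgf: G(q) = e^{m(q-1)}
    return 1.0 - q

for s in (0.01, 0.05, 0.1):
    print(s, round(fixation_probability(s), 4))      # roughly 2s for small s
```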
NASA Astrophysics Data System (ADS)
Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah
2018-04-01
Network planning is one of the management techniques used to plan and control the implementation of a project; it shows the relationships between activities. The objective of this research is to apply network planning to a house construction project at CV. XYZ and to assess the role of network planning in improving time efficiency so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected by direct observation of the company, interviews, and a literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that the use of both methods in scheduling the house construction project has a very significant effect on the completion time of the project. With the Critical Path Method (CPM) the company can complete the project in 131 days, while the Program Evaluation and Review Technique (PERT) takes 136 days. The PERT calculation gives Z = -0.66, corresponding to 0.2546 from the normal distribution table, and a probability value of 74.54%. This means that the probability that the house construction project activities can be completed on time is reasonably high. Without these methods, the project completion time is 173 days. Thus, by using the CPM method the company can save up to 42 days, gaining time efficiency through network planning.
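For readers unfamiliar with the PERT probability step, the sketch below reproduces the type of calculation behind a reported Z value and completion probability, using invented activity durations rather than the project's actual data.

```python
# Hedged sketch (invented critical-path activities): PERT expected duration,
# variance, Z value and probability of finishing by a target date.

from math import sqrt
from statistics import NormalDist

# (optimistic, most likely, pessimistic) durations of critical-path activities [days].
activities = [(10, 14, 22), (20, 28, 40), (30, 34, 44), (25, 30, 41), (18, 24, 36)]

expected = sum((a + 4 * m + b) / 6 for a, m, b in activities)   # PERT mean
variance = sum(((b - a) / 6) ** 2 for a, m, b in activities)    # PERT variance

target = 136.0                                   # target completion time [days]
z = (target - expected) / sqrt(variance)
prob_on_time = NormalDist().cdf(z)               # probability of finishing by target
print(round(expected, 1), round(z, 2), f"{prob_on_time:.1%}")
```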
Redundant actuator development study. [flight control systems for supersonic transport aircraft
NASA Technical Reports Server (NTRS)
Ryder, D. R.
1973-01-01
Current and past supersonic transport configurations are reviewed to assess redundancy requirements for future airplane control systems. Secondary actuators used in stability augmentation systems will probably be the most critical actuator application and require the highest level of redundancy. Two methods of actuator redundancy mechanization have been recommended for further study. Math models of the recommended systems have been developed for use in future computer simulations. A long range plan has been formulated for actuator hardware development and testing in conjunction with the NASA Flight Simulator for Advanced Aircraft.
Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.
2017-07-17
The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10^-3 in scientific notation or for brevity 10^-3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10^-3 to 10^-6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
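A minimal sketch of the kind of extrapolation discussed above (synthetic annual peaks, a simple moments fit, and no EMA, historical information, or regional skew adjustment, so only a rough stand-in for the USGS-PeakFQ workflow): a Pearson type III distribution is fitted to the base-10 logarithms of the peaks and quantiles are read off at very low AEPs.

```python
# Hedged sketch (synthetic record, moments fit only): log-Pearson type III
# quantiles at very low annual exceedance probabilities.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic 90-year record of annual peak streamflow [cfs].
log_peaks = np.log10(rng.lognormal(mean=np.log(20_000), sigma=0.5, size=90))

mean, std = log_peaks.mean(), log_peaks.std(ddof=1)
skew = stats.skew(log_peaks, bias=False)

for aep in (1e-2, 1e-3, 1e-4, 1e-5, 1e-6):
    q_log = stats.pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)
    print(f"AEP {aep:.0e}: {10**q_log:,.0f} cfs")
# Comparing such fits against alternative distributions at 10^-6 illustrates the
# distribution-choice (epistemic) uncertainty discussed in the abstract.
```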
Distressing situations in the intensive care unit: a descriptive study of nurses' responses.
McClendon, Heather; Buckner, Ellen B
2007-01-01
Moral distress is a significant stressor for nurses in critical care. Feeling that they are doing the "right thing" is important to nurses, and situations of moral distress can make them question their work. The purpose of this study was to describe critical care nurses' levels of moral distress, the effects of that distress on their personal and professional lives, and nurses' coping strategies. The study consisted of open-ended questions to elicit qualitatively the nurses' feelings about moral distress and a quantitative measure of the degree of distress caused by certain types of situations. The questionnaires were then analyzed to assess the nurses' opinions regarding moral distress, how their self-perceived job performance is affected, and what coping methods they use to deal with moral distress. The most frequently encountered moral distress situations involved critically ill patients whose families wished to continue aggressive treatment when it probably would not benefit the patient in the end.
Review of sampling hard-to-reach and hidden populations for HIV surveillance.
Magnani, Robert; Sabin, Keith; Saidel, Tobi; Heckathorn, Douglas
2005-05-01
Adequate surveillance of hard-to-reach and 'hidden' subpopulations is crucial to containing the HIV epidemic in low prevalence settings and in slowing the rate of transmission in high prevalence settings. For a variety of reasons, however, conventional facility and survey-based surveillance data collection strategies are ineffective for a number of key subpopulations, particularly those whose behaviors are illegal or illicit. This paper critically reviews alternative sampling strategies for undertaking behavioral or biological surveillance surveys of such groups. Non-probability sampling approaches such as facility-based sentinel surveillance and snowball sampling are the simplest to carry out, but are subject to a high risk of sampling/selection bias. Most of the probability sampling methods considered are limited in that they are adequate only under certain circumstances and for some groups. One relatively new method, respondent-driven sampling, an adaptation of chain-referral sampling, appears to be the most promising for general applications. However, as its applicability to HIV surveillance in resource-poor settings has yet to be established, further field trials are needed before a firm conclusion can be reached.
Critical behavior of the XY-rotor model on regular and small-world networks
NASA Astrophysics Data System (ADS)
De Nigris, Sarah; Leoncini, Xavier
2013-07-01
We study the XY-rotors model on small networks whose number of links scales with the system size as N_links ∼ N^γ, where 1 ≤ γ ≤ 2. We first focus on regular one-dimensional rings in the microcanonical ensemble. For γ < 1.5 the model behaves like a short-range one and no phase transition occurs. For γ > 1.5, the system equilibrium properties are found to be identical to the mean field, which displays a second-order phase transition at a critical energy density ε_c = 0.75, where ε = E/N. Moreover, for γ_c ≃ 1.5 we find that a nontrivial state emerges, characterized by an infinite susceptibility. We then consider small-world networks, using the Watts-Strogatz mechanism on the regular networks parametrized by γ. We first analyze the topology and find that the small-world regime appears for rewiring probabilities which scale as p_SW ∝ 1/N^γ. Then, considering the XY-rotors model on these networks, we find that a second-order phase transition occurs at a critical energy ε_c which depends logarithmically on the topological parameters p and γ. We also define a critical probability p_MF, corresponding to the probability beyond which the mean field is quantitatively recovered, and we analyze its dependence on γ.
Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.
NASA Technical Reports Server (NTRS)
Kelman, G. R.
1972-01-01
There is dispute about the validity of the nitrogen rebreathing technique for determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used, the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.
Wavelength band selection method for multispectral target detection.
Karlholm, Jörgen; Renhorn, Ingmar
2002-11-10
A framework is proposed for the selection of wavelength bands for multispectral sensors by use of hyperspectral reference data. Using results from detection theory, we derive a cost function that is minimized by a set of spectral bands optimal in terms of detection performance for discrimination between a class of small rare targets and clutter with a known spectral distribution. The method may be used, e.g., in the design of multispectral infrared search and track and electro-optical missile warning sensors, where a low false-alarm rate and a high detection probability for small targets against a clutter background are of critical importance, but the required high frame rate prevents the use of hyperspectral sensors.
Analyses of exobiological and potential resource materials in the Martian soil.
Mancinelli, R L; Marshall, J R; White, M R
1992-01-01
Potential Martian soil components relevant to exobiology include water, organic matter, evaporites, clays, and oxides. These materials are also resources for human expeditions to Mars. When found in particular combinations, some of these materials constitute diagnostic paleobiomarker suites, allowing insight to be gained into the probability of life originating on Mars. Critically important to exobiology is the method of data analysis and data interpretation. To that end we are investigating methods of analysis of potential biomarker and paleobiomarker compounds and resource materials in soils and rocks pertinent to Martian geology. Differential thermal analysis coupled with gas chromatography is shown to be a highly useful analytical technique for detecting this wide and complex variety of materials.
Analyses of exobiological and potential resource materials in the Martian soil
NASA Technical Reports Server (NTRS)
Mancinelli, Rocco L.; Marshall, John R.; White, Melisa R.
1992-01-01
Potential Martian soil components relevant to exobiology include water, organic matter, evaporites, clays, and oxides. These materials are also resources for human expeditions to Mars. When found in particular combinations, some of these materials constitute diagnostic paleobiomarker suites, allowing insight to be gained into the probability of life originating on Mars. Critically important to exobiology is the method of data analysis and data interpretation. To that end, methods of analysis of potential biomarker and paleobiomarker compounds and resource materials in soils and rocks pertinent to Martian geology are investigated. Differential thermal analysis coupled with gas chromatography is shown to be a highly useful analytical technique for detecting this wide and complex variety of materials.
Gong, Xingchu; Chen, Huali; Chen, Teng; Qu, Haibin
2014-01-01
Quality by design (QbD) is a paradigm for the improvement of botanical injection quality control. In this work, the water precipitation process for the manufacturing of Xueshuantong injection, a botanical injection made from Notoginseng Radix et Rhizoma, was optimized using a design space approach as an example. Saponin recovery and total saponin purity (TSP) in the supernatant were identified as the critical quality attributes (CQAs) of water precipitation using a risk assessment covering all the processes of Xueshuantong injection. An Ishikawa diagram and fractional factorial design experiments were applied to determine the critical process parameters (CPPs). Dry matter content of concentrated extract (DMCC), amount of water added (AWA), and stirring speed (SS) were identified as CPPs. Box-Behnken designed experiments were carried out to develop models between the CPPs and the process CQAs. Determination coefficients were higher than 0.86 for all the models. High TSP in the supernatant can be obtained when DMCC is low and SS is high. Saponin recoveries decreased as DMCC increased. Incomplete collection of the supernatant was the main reason for the loss of saponins. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. The recommended normal operation region is located at a DMCC of 0.38-0.41 g/g, an AWA of 3.7-4.9 g/g, and an SS of 280-350 rpm, with a probability of more than 0.919 of attaining the CQA criteria. Verification experiment results showed that operating DMCC, SS, and AWA within the design space can attain the CQA criteria with high probability.
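A minimal sketch of the probability-based design-space calculation described above, under invented assumptions: hypothetical quadratic response models predict the two CQAs from the CPPs, experimental error is simulated by Monte Carlo, and operating points are kept when the probability of meeting the CQA criteria exceeds 0.90. All coefficients, error standard deviations, and acceptance limits are placeholders, not the fitted Box-Behnken models from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_cqas(dmcc, awa, ss):
    """Hypothetical quadratic models for saponin recovery (%) and TSP (%)."""
    recovery = 95.0 - 40.0 * (dmcc - 0.35) - 0.5 * (awa - 4.0) ** 2
    tsp = 58.0 - 50.0 * (dmcc - 0.35) + 0.01 * (ss - 300.0)
    return recovery, tsp

def prob_meet_criteria(dmcc, awa, ss, n_sim=10_000):
    rec, tsp = predict_cqas(dmcc, awa, ss)
    rec_sim = rec + rng.normal(0.0, 1.5, n_sim)     # assumed experimental error (sd)
    tsp_sim = tsp + rng.normal(0.0, 2.0, n_sim)
    return np.mean((rec_sim >= 90.0) & (tsp_sim >= 55.0))   # assumed CQA limits

# Scan part of the CPP grid; keep points whose success probability exceeds 0.90.
for dmcc in (0.38, 0.40, 0.42, 0.44):
    for awa in (3.7, 4.3, 4.9):
        p = prob_meet_criteria(dmcc, awa, ss=320.0)
        flag = "in design space" if p >= 0.90 else ""
        print(f"DMCC={dmcc:.2f} g/g  AWA={awa:.1f} g/g  P(meet CQAs)={p:.3f}  {flag}")
```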
Danevska, Lenche; Spiroski, Mirko; Donev, Doncho; Pop-Jordanova, Nada; Polenakovic, Momir
2016-11-01
The Internet has enabled an easy method to search through the vast majority of publications and has improved the impact of scholarly journals. However, it can also pose threats to the quality of published articles. New publishers and journals have emerged: so-called open-access potential, possible, or probable predatory publishers and journals, as well as so-called hijacked journals. Our aim was to increase awareness and warn scholars, especially young researchers, how to recognize these journals and how to avoid submitting their papers to them. The method was a review and critical analysis of the relevant published literature, Internet sources, and the personal experience, thoughts, and observations of the authors. The web blog of Jeffrey Beall, University of Colorado, was consulted extensively. Jeffrey Beall is a Denver academic librarian who regularly maintains two lists: the first of potential, possible, or probable predatory publishers, and the second of potential, possible, or probable predatory standalone journals. Aspects related to this topic presented by other authors are discussed as well. Academics should bear in mind how to differentiate between trustworthy, reliable journals and predatory ones, considering publication ethics, the peer-review process, international academic standards, indexing and abstracting, preservation in digital repositories, metrics, sustainability, etc.
The diversity effect of inductive reasoning under segment manipulation of complex cognition.
Chen, Antao; Li, Hong; Feng, Tingyong; Gao, Xuemei; Zhang, Zhongming; Li, Fuhong; Yang, Dong
2005-12-01
The present study proposed the idea of segment manipulation of complex cognition (SMCC), which makes quantitative treatment and systematic manipulation of premise diversity technically possible. The segment manipulation of complex cognition divides the previous inductive strength judgment task into three distinct steps, attempting to distinguish the underlying psychological processes and their rules. The results of Experiment 1 showed that, compared with the traditional method, the quantitative treatment and systematic manipulation of diversity under SMCC did not change the task's nature, which remained rational and a good measure of inductive strength judgment. The results of Experiment 2 showed that the participants' response rules in the triple-step task were as expected from our proposal, and that in Step 2 the "feeling of surprise" (FOS), which seems implausible but is predicted from the diverse premises, was measured; this component might be the critical part that produces the diversity effect. The "feeling of surprise" may reflect the impact of emotion on cognition, representing a strong revision to the premise probability principle of the purely rational hypothesis proposed by Lo et al., and its role in the diversity effect is worthy of further research. In this regard, the mistakes that the premise probability principle makes when it takes posterior probability as prior probability are discussed.
Skerjanc, William F.; Maki, John T.; Collin, Blaise P.; ...
2015-12-02
The success of modular high temperature gas-cooled reactors is highly dependent on the performance of the tristructural isotropic (TRISO) coated fuel particle and the quality to which it can be manufactured. During irradiation, TRISO-coated fuel particles act as a pressure vessel to contain fission gas and mitigate the diffusion of fission products to the coolant boundary. The fuel specifications place limits on key attributes to minimize fuel particle failure under irradiation and postulated accident conditions. PARFUME (an integrated mechanistic coated particle fuel performance code developed at the Idaho National Laboratory) was used to calculate fuel particle failure probabilities. By systematically varying key TRISO-coated particle attributes, failure probability functions were developed to understand how each attribute contributes to fuel particle failure. Critical manufacturing limits were calculated for the key attributes of a low enriched TRISO-coated nuclear fuel particle with a kernel diameter of 425 μm. As a result, these critical manufacturing limits identify ranges beyond which an increase in fuel particle failure probability is expected to occur.
Real-time segmentation of burst suppression patterns in critical care EEG monitoring
Westover, M. Brandon; Shafi, Mouhsin M.; Ching, ShiNung; Chemali, Jessica J.; Purdon, Patrick L.; Cash, Sydney S.; Brown, Emery N.
2014-01-01
Objective Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. Methods A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Results Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Conclusions Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Significance Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. PMID:23891828
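A minimal sketch of the general idea (thresholding local voltage variance, then smoothing the binary suppression signal into a burst suppression probability). The window length, variance threshold, smoothing time constant, and synthetic EEG trace are all invented placeholders, not the authors' validated settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def segment_suppressions(eeg, fs, win_s=0.5, var_thresh=25.0):
    """Binary signal: 1 where the local variance falls below the threshold (suppression)."""
    win = int(win_s * fs)
    var = sliding_window_view(eeg, win).var(axis=-1)
    var = np.pad(var, (win // 2, len(eeg) - len(var) - win // 2), mode="edge")
    return (var < var_thresh).astype(float)

def burst_suppression_probability(supp, fs, tau_s=15.0):
    """Smooth the binary suppression signal with a one-pole filter to get a BSP in [0, 1]."""
    alpha = 1.0 / (tau_s * fs)
    bsp, state = np.empty_like(supp), supp[0]
    for i, s in enumerate(supp):
        state += alpha * (s - state)
        bsp[i] = state
    return bsp

fs = 200
t = np.arange(0, 60, 1 / fs)
# toy EEG: high-amplitude "bursts" alternating with low-amplitude "suppressions"
eeg = np.where((t % 20) < 8, 30.0, 2.0) * np.random.default_rng(0).standard_normal(t.size)
bsp = burst_suppression_probability(segment_suppressions(eeg, fs), fs)
print("mean BSP:", round(float(bsp.mean()), 2))
```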
Cluster analysis as a prediction tool for pregnancy outcomes.
Banjari, Ines; Kenjerić, Daniela; Šolić, Krešimir; Mandić, Milena L
2015-03-01
Considering the specific physiological changes during gestation and thinking of pregnancy as a "critical window", classification of pregnant women in early pregnancy can be considered crucial. The paper demonstrates the use of a method based on an approach from intelligent data mining, cluster analysis. Cluster analysis is a statistical method which makes it possible to group individuals based on sets of identifying variables. The method was chosen in order to determine the possibility of classifying pregnant women in early pregnancy and to analyze unknown correlations between different variables so that certain outcomes could be predicted. 222 pregnant women from two general obstetric offices were recruited. The main focus was set on characteristics of these pregnant women: their age, pre-pregnancy body mass index (BMI) and haemoglobin value. Cluster analysis gained a 94.1% classification accuracy rate with three branches or groups of pregnant women showing statistically significant correlations with pregnancy outcomes. The results show that pregnant women of both older age and higher pre-pregnancy BMI have a significantly higher incidence of delivering a baby of higher birth weight but gain significantly less weight during pregnancy. Their babies are also longer, and these women have a significantly higher probability of complications during pregnancy (gestosis) and a higher probability of induced or caesarean delivery. We can conclude that the cluster analysis method can appropriately classify pregnant women in early pregnancy to predict certain outcomes.
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
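A toy illustration of the quantification idea sketched above: a nominal error probability is multiplied by modifiers attached to violated usability heuristics (treated as performance shaping factors). The nominal value and multipliers are invented for illustration and are not SPAR-H's published values.

```python
NOMINAL_ERROR_PROBABILITY = 0.001   # invented nominal value

# Hypothetical multipliers attached to violated heuristics, treated as PSFs;
# a multiplier of 1.0 means the heuristic was not violated.
psf_multipliers = {
    "visibility_of_system_status": 2.0,
    "match_between_system_and_real_world": 1.0,
    "error_prevention": 5.0,
    "recognition_rather_than_recall": 2.0,
}

uep = NOMINAL_ERROR_PROBABILITY
for heuristic, multiplier in psf_multipliers.items():
    uep *= multiplier
uep = min(uep, 1.0)                 # cap at 1, as probabilities cannot exceed it

print(f"usability error probability (UEP): {uep:.3f}")   # 0.001 * 2 * 1 * 5 * 2 = 0.02
```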
The α‐synuclein gene in multiple system atrophy
Ozawa, T; Healy, D G; Abou‐Sleiman, P M; Ahmadi, K R; Quinn, N; Lees, A J; Shaw, K; Wullner, U; Berciano, J; Moller, J C; Kamm, C; Burk, K; Josephs, K A; Barone, P; Tolosa, E; Goldstein, D B; Wenning, G; Geser, F; Holton, J L; Gasser, T; Revesz, T; Wood, N W
2006-01-01
Background The formation of α‐synuclein aggregates may be a critical event in the pathogenesis of multiple system atrophy (MSA). However, the role of this gene in the aetiology of MSA is unknown and untested. Method The linkage disequilibrium (LD) structure of the α‐synuclein gene was established and LD patterns were used to identify a set of tagging single nucleotide polymorphisms (SNPs) that represent 95% of the haplotype diversity across the entire gene. The effect of polymorphisms on the pathological expression of MSA in pathologically confirmed cases was also evaluated. Results and conclusion In 253 Gilman probable or definite MSA patients, 457 possible, probable, and definite MSA cases and 1472 controls, a frequency difference for the individual tagging SNPs or tag‐defined haplotypes was not detected. No effect was observed of polymorphisms on the pathological expression of MSA in pathologically confirmed cases. PMID:16543523
Stochastic analysis of a pulse-type prey-predator model
NASA Astrophysics Data System (ADS)
Wu, Y.; Zhu, W. Q.
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Stochastic analysis of a pulse-type prey-predator model.
Wu, Y; Zhu, W Q
2008-04-01
A stochastic Lotka-Volterra model, a so-called pulse-type model, for the interaction between two species and their random natural environment is investigated. The effect of a random environment is modeled as random pulse trains in the birth rate of the prey and the death rate of the predator. The generalized cell mapping method is applied to calculate the probability distributions of the species populations at a state of statistical quasistationarity. The time evolution of the population densities is studied, and the probability of the near extinction time, from an initial state to a critical state, is obtained. The effects on the ecosystem behaviors of the prey self-competition term and of the pulse mean arrival rate are also discussed. Our results indicate that the proposed pulse-type model shows obviously distinguishable characteristics from a Gaussian-type model, and may confer a significant advantage for modeling the prey-predator system under discrete environmental fluctuations.
Percolation of spatially constrained Erdős-Rényi networks with degree correlations.
Schmeltzer, C; Soriano, J; Sokolov, I M; Rüdiger, S
2014-01-01
Motivated by experiments on activity in neuronal cultures [J. Soriano, M. Rodríguez Martínez, T. Tlusty, and E. Moses, Proc. Natl. Acad. Sci. 105, 13758 (2008)], we investigate the percolation transition and critical exponents of spatially embedded Erdős-Rényi networks with degree correlations. In our model networks, nodes are randomly distributed in a two-dimensional spatial domain, and the connection probability depends on the Euclidean link length by a power law as well as on the degrees of the linked nodes. Generally, spatial constraints lead to higher percolation thresholds in the sense that more links are needed to achieve global connectivity. However, degree correlations favor or do not favor percolation depending on the connectivity rules. We employ two construction methods to introduce degree correlations. In the first one, nodes stay homogeneously distributed and are connected via a distance- and degree-dependent probability. We observe that assortativity in the resulting network leads to a decrease of the percolation threshold. In the second construction method, nodes are first spatially segregated depending on their degree and afterwards connected with a distance-dependent probability. In this segregated model, we find a threshold increase that accompanies the rising assortativity. Additionally, when the network is constructed in a disassortative way, we observe that this property has little effect on the percolation transition.
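A small sketch of the first (homogeneous) construction under simplifying assumptions: nodes scattered uniformly in a unit square, links drawn with a probability that decays with Euclidean distance as a power law (degree dependence omitted here), and connectivity tracked through the relative size of the largest component as the overall link density grows. The sizes, decay exponent, and density prefactor are illustrative choices only.

```python
import numpy as np
import networkx as nx

def spatial_er_network(n=400, c=1.0, alpha=2.5, rng=None):
    """Link i, j with probability min(1, (c/n) * r_ij**(-alpha)); r_ij is Euclidean distance."""
    rng = np.random.default_rng(0) if rng is None else rng
    pos = rng.random((n, 2))
    g = nx.empty_graph(n)
    for i in range(n - 1):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        p = np.minimum(1.0, (c / n) * r ** (-alpha))
        for j in np.flatnonzero(rng.random(r.size) < p):
            g.add_edge(i, i + 1 + j)
    return g

for c in (0.5, 1.0, 2.0, 4.0, 8.0):
    g = spatial_er_network(c=c)
    giant = max(nx.connected_components(g), key=len)
    print(f"c={c:4.1f}  mean degree={2 * g.number_of_edges() / g.number_of_nodes():5.2f}  "
          f"largest component fraction={len(giant) / g.number_of_nodes():.2f}")
```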
A Defence of the AR4’s Bayesian Approach to Quantifying Uncertainty
NASA Astrophysics Data System (ADS)
Vezer, M. A.
2009-12-01
The field of climate change research is a kimberlite pipe filled with philosophic diamonds waiting to be mined and analyzed by philosophers. Within the scientific literature on climate change, there is much philosophical dialogue regarding the methods and implications of climate studies. To date, however, discourse regarding the philosophy of climate science has been confined predominantly to scientific - rather than philosophical - investigations. In this paper, I hope to bring one such issue to the surface for explicit philosophical analysis: The purpose of this paper is to address a philosophical debate pertaining to the expressions of uncertainty in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4), which, as will be noted, has received significant attention in scientific journals and books, as well as sporadic glances from the popular press. My thesis is that the AR4’s Bayesian method of uncertainty analysis and uncertainty expression is justifiable on pragmatic grounds: it overcomes problems associated with vagueness, thereby facilitating communication between scientists and policy makers such that the latter can formulate decision analyses in response to the views of the former. Further, I argue that the most pronounced criticisms against the AR4’s Bayesian approach, which are outlined below, are misguided. §1 Introduction Central to AR4 is a list of terms related to uncertainty that in colloquial conversations would be considered vague. The IPCC attempts to reduce the vagueness of its expressions of uncertainty by calibrating uncertainty terms with numerical probability values derived from a subjective Bayesian methodology. This style of analysis and expression has stimulated some controversy, as critics reject as inappropriate and even misleading the association of uncertainty terms with Bayesian probabilities. [...] The format of the paper is as follows. The investigation begins (§2) with an explanation of background considerations relevant to the IPCC and its use of uncertainty expressions. It then (§3) outlines some general philosophical worries regarding vague expressions and (§4) relates those worries to the AR4 and its method of dealing with them, which is a subjective Bayesian probability analysis. The next phase of the paper (§5) examines the notions of ‘objective’ and ‘subjective’ probability interpretations and compares the IPCC’s subjective Bayesian strategy with a frequentist approach. It then (§6) addresses objections to that methodology, and concludes (§7) that those objections are wrongheaded.
Red List of vascular plants of the Wadden Sea Area
NASA Astrophysics Data System (ADS)
Wind, P.; van der Ende, M.; Garve, E.; Schacherer, A.; Thissen, J. B. M.
1996-10-01
In the Wadden Sea area, a total of 248 (sub)species of vascular plants are threatened in at least one subregion. Of these, 216 (sub)species are threatened in the entire area and are therefore placed on the trilateral Red List. 17 (sub)species of the listed vascular plants are (probably) extinct in the entire Wadden Sea area. The status of 47 (sub)species of vascular plants is (probably) critical; 61 (sub)species are (probably) endangered; the status of 65 (sub)species is (probably) vulnerable and that of 26 (sub)species susceptible.
Servanty, Sabrina; Converse, Sarah J.; Bailey, Larissa L.
2014-01-01
The reintroduction of threatened and endangered species is now a common method for reestablishing populations. Typically, a fundamental objective of reintroduction is to establish a self-sustaining population. Estimation of demographic parameters in reintroduced populations is critical, as these estimates serve multiple purposes. First, they support evaluation of progress toward the fundamental objective via construction of population viability analyses (PVAs) to predict metrics such as probability of persistence. Second, PVAs can be expanded to support evaluation of management actions, via management modeling. Third, the estimates themselves can support evaluation of the demographic performance of the reintroduced population, e.g., via comparison with wild populations. For each of these purposes, thorough treatment of uncertainties in the estimates is critical. Recently developed statistical methods - namely, hierarchical Bayesian implementations of state-space models - allow for effective integration of different types of uncertainty in estimation. We undertook a demographic estimation effort for a reintroduced population of endangered whooping cranes with the purpose of ultimately developing a Bayesian PVA for determining progress toward establishing a self-sustaining population, and for evaluating potential management actions via a Bayesian PVA-based management model. We evaluated individual and temporal variation in demographic parameters based upon a multi-state mark-recapture model. We found that survival was relatively high across time and varied little by sex. There was some indication that survival varied by release method. Survival was similar to that observed in the wild population. Although overall reproduction in this reintroduced population is poor, birds formed social pairs when relatively young, and once a bird was in a social pair, it had a nearly 50% chance of nesting the following breeding season. Also, once a bird had nested, it had a high probability of nesting again. These results are encouraging considering that survival and reproduction have been major challenges in past reintroductions of this species. The demographic estimates developed will support construction of a management model designed to facilitate exploration of management actions of interest, and will provide critical guidance in future planning for this reintroduction. An approach similar to what we describe could be usefully applied to many reintroduced populations.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use is provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
Finite-size scaling of survival probability in branching processes
NASA Astrophysics Data System (ADS)
Garcia-Millan, Rosalba; Font-Clos, Francesc; Corral, Álvaro
2015-04-01
Branching processes pervade many models in statistical physics. We investigate the survival probability of a Galton-Watson branching process after a finite number of generations. We derive analytically the existence of finite-size scaling for the survival probability as a function of the control parameter and the maximum number of generations, obtaining the critical exponents as well as the exact scaling function, which is G(y) = 2y e^y/(e^y - 1), with y the rescaled distance to the critical point. Our findings are valid for any branching process of the Galton-Watson type, independently of the distribution of the number of offspring, provided its variance is finite. This proves the universal behavior of the finite-size effects in branching processes, including the universality of the metric factors. The direct relation to mean-field percolation is also discussed.
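A quick numerical companion to the abstract, under the assumption of Poisson-distributed offspring: the survival probability after T generations follows from iterating the offspring probability generating function, so the finite-size behaviour near the critical mean offspring number m = 1 can be explored directly.

```python
import numpy as np

def survival_probability(m, T):
    """P(Galton-Watson process with Poisson(m) offspring survives T generations)."""
    q = 0.0                            # extinction probability by generation t
    for _ in range(T):
        q = np.exp(m * (q - 1.0))      # Poisson pgf evaluated at q
    return 1.0 - q

T = 200
for m in (0.95, 1.00, 1.05):
    print(f"m={m:.2f}  P(survive {T} generations) = {survival_probability(m, T):.4f}")
```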
NASA Astrophysics Data System (ADS)
Mendonça, J. R. G.
2018-04-01
We propose and investigate a one-parameter probabilistic mixture of one-dimensional elementary cellular automata under the guise of a model for the dynamics of a single-species unstructured population with nonoverlapping generations in which individuals have smaller probability of reproducing and surviving in a crowded neighbourhood but also suffer from isolation and dispersal. Remarkably, the first-order mean field approximation to the dynamics of the model yields a cubic map containing terms representing both logistic and weak Allee effects. The model has a single absorbing state devoid of individuals, but depending on the reproduction and survival probabilities can achieve a stable population. We determine the critical probability separating these two phases and find that the phase transition between them is in the directed percolation universality class of critical behaviour.
Red List of beetles of the Wadden Sea Area
NASA Astrophysics Data System (ADS)
Mahler, V.; Suikat, R.; Aßmann, Th.
1996-10-01
As no data on beetles in the Wadden Sea area are available from The Netherlands, the trilateral status of threat only refers to the Danish and German part of the Wadden Sea. In this area, in total, 238 species of beetles are threatened in at least one subregion. Of these, 189 species are threatened in the entire area and are therefore placed on the trilateral Red List. 4 species are (probably) extinct in the entire Wadden Sea area. The status of 24 species of beetles is (probably) critical, 46 species are (probably) endangered, the status of 86 species is (probably) vulnerable and of 29 species (probably) susceptible.
Probability Distributions of Minkowski Distances between Discrete Random Variables.
ERIC Educational Resources Information Center
Schroger, Erich; And Others
1993-01-01
Minkowski distances are used to indicate the similarity of two vectors in an N-dimensional space. Shows how to compute the probability function, the expectation, and the variance for Minkowski distances and for the special cases of city-block distance and Euclidean distance. Critical values for tests of significance are presented in tables. (SLD)
Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu
2018-05-01
In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 lead to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding programs, but it does not indicate the reliability of the process evaluation indexes when operating in the design space. The probability-based method is computationally more complex, but provides the reliability needed to ensure that the process indexes reach the standard with a probability no lower than the acceptable probability threshold. In addition, there is no abrupt change in probability at the edge of the design space with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
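A one-dimensional toy comparison of the two algorithms, under invented assumptions: a hypothetical response model on a single process parameter, an "overlapping" criterion that accepts a point when the mean prediction meets the specification, and a probability-based criterion that simulates experimental error and requires the success probability to exceed 0.90. It only illustrates why the two design-space boundaries can differ; it is not the Codonopsis Radix model.

```python
import numpy as np

rng = np.random.default_rng(3)

def predicted_index(x):                      # hypothetical response model on one CPP x in [0, 1]
    return 80.0 + 15.0 * x - 20.0 * x ** 2

SPEC, SD, N_SIM, P_MIN = 82.0, 0.5, 10_000, 0.90

for x in np.arange(0.0, 1.001, 0.02):        # step length 0.02, as in the abstract
    mean = predicted_index(x)
    overlap_ok = mean >= SPEC
    prob = np.mean(mean + rng.normal(0.0, SD, N_SIM) >= SPEC)
    prob_ok = prob >= P_MIN
    if overlap_ok != prob_ok:                # boundary region where the two methods disagree
        print(f"x={x:.2f}  mean={mean:.2f}  P(>= spec)={prob:.2f}  "
              f"overlapping: {overlap_ok}, probability-based: {prob_ok}")
```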
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, with the advantage of greatly reduced computational complexity.
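A minimal sketch of the alarm idea for the simplest case, a scalar AR(1) Gaussian process observed without noise: raise an alarm whenever the one-step predictive probability of crossing a level exceeds a design value. The full approach in the paper uses Kalman filtering for noisy multivariate systems and optimizes the alarm region; the parameters below are illustrative only.

```python
import numpy as np
from scipy.stats import norm

a, q = 0.95, 1.0             # AR(1) coefficient and process-noise variance
L, p_alarm = 4.0, 0.2        # crossing level and alarm threshold on the predicted probability

rng = np.random.default_rng(7)
x, alarms, hits = 0.0, 0, 0
for _ in range(20_000):
    # one-step predictive distribution given the current state: N(a*x, q)
    p_cross = 1.0 - norm.cdf(L, loc=a * x, scale=np.sqrt(q))
    alarm = p_cross > p_alarm
    x_next = a * x + rng.normal(0.0, np.sqrt(q))
    alarms += alarm
    hits += alarm and (x_next > L)
    x = x_next

print(f"alarms: {alarms}, alarms followed by a crossing: {hits}, "
      f"precision: {hits / max(alarms, 1):.2f}")
```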
Voga, Gorazd
2008-01-01
The measurement of pulmonary artery occlusion pressure (PAOP) is important for estimation of left ventricular filling pressure and for distinction between cardiac and non-cardiac etiology of pulmonary edema. Clinical assessment of PAOP, which relies on physical signs of pulmonary congestion, is uncertain. Reliable PAOP measurement can be performed by pulmonary artery catheter, but it is possible also by the use of echocardiography. Several Doppler variables show acceptable correlation with PAOP and can be used for its estimation in cardiac and critically ill patients. Noninvasive PAOP estimation should probably become an integral part of transthoracic and transesophageal echocardiographic evaluation in critically ill patients. However, the limitations of both methods should be taken into consideration, and in specific patients invasive PAOP measurement is still unavoidable, if the exact value of PAOP is needed. PMID:18394183
Probabilistic Risk Assessment for Astronaut Post Flight Bone Fracture
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Myers, Jerry; Licata, Angelo
2015-01-01
Introduction: Space flight potentially reduces the loading that bone can resist before fracture. This reduction in bone integrity may result from a combination of factors, most commonly reported as a reduction in astronaut BMD. Although evaluating the condition of bones continues to be a critical aspect of understanding space flight fracture risk, defining the loading regime, whether on earth, in microgravity, or in reduced gravity on a planetary surface, remains a significant component of estimating the fracture risks to astronauts. This presentation summarizes the concepts, development, and application of NASA's Bone Fracture Risk Module (BFxRM) to understanding pre-, post-, and in-mission astronaut bone fracture risk. The overview includes an assessment of contributing factors utilized in the BFxRM and illustrates how new information, such as the biomechanics of space suit design or a better understanding of post-flight activities, may influence astronaut fracture risk. Opportunities for the bone mineral research community to contribute to future model development are also discussed. Methods: To investigate the conditions in which spaceflight-induced changes to bone play a critical role in post-flight fracture probability, we implement a modified version of the NASA Bone Fracture Risk Model (BFxRM). Modifications included incorporation of variations in physiological characteristics, post-flight recovery rate, and variations in lateral fall conditions within the probabilistic simulation parameter space. The modeled fracture probability estimates for different loading scenarios at preflight and at 0 and 365 days post-flight are compared. Results: For simple lateral side falls, mean post-flight fracture probability is elevated over mean preflight fracture probability due to spaceflight-induced BMD loss and is not fully recovered at 365 days post-flight. In the case of more energetic falls, such as from elevated heights or with the addition of lateral movement, the contribution of space flight quality changes is much less clear, indicating that more granular assessments, such as finite element modeling, may be needed to further assess the risks in these scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morise, A.P.; Duval, R.D.
To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD, at least 1 vessel with greater than or equal to 50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p less than 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities less than or equal to 20%, there was underestimation of CAD. However, at posttest probabilities greater than or equal to 60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially greater than or equal to 60%).
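A worked illustration of the common calculation underlying methods of this kind: Bayes' theorem with the independence assumption converts a pretest probability to odds, multiplies by one likelihood ratio per test finding, and converts back. The pretest probability and likelihood ratios below are placeholders, not the variables or weights used by methods A, B, or CADENZA.

```python
def posttest_probability(pretest, likelihood_ratios):
    odds = pretest / (1.0 - pretest)
    for lr in likelihood_ratios:          # independence assumption: multiply the LRs
        odds *= lr
    return odds / (1.0 + odds)

pretest = 0.30                            # e.g. from age, sex, and symptom type
lrs = [3.5,   # hypothetical LR for >= 1 mm ST depression
       1.8,   # hypothetical LR for exercise-induced angina
       0.9]   # hypothetical LR for a normal heart-rate response
print(f"posttest probability of CAD: {posttest_probability(pretest, lrs):.2f}")
```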
Percolation critical polynomial as a graph invariant
Scullard, Christian R.
2012-10-18
Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.
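Once a critical polynomial is in hand, the threshold itself is just its root in [0, 1]. As a small example (not the kagome computation of the paper), the sketch below solves the exactly known bond critical polynomial of the triangular lattice, p^3 - 3p + 1, whose root is 2 sin(π/18).

```python
from scipy.optimize import brentq

def critical_polynomial(p):
    return p**3 - 3.0*p + 1.0          # triangular-lattice bond percolation case

p_c = brentq(critical_polynomial, 0.0, 1.0)
print(f"p_c = {p_c:.8f}")              # exact value is 2*sin(pi/18) = 0.34729636...
```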
Finding a fox: an evaluation of survey methods to estimate abundance of a small desert carnivore.
Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M
2014-01-01
The status of many carnivore species is a growing concern for wildlife agencies, conservation organizations, and the general public. Historically, kit foxes (Vulpes macrotis) were classified as abundant and distributed in the desert and semi-arid regions of southwestern North America, but the species is now considered rare throughout its range. Survey methods have been evaluated for kit foxes, but often in populations where abundance is high, and there is little consensus on which technique is best to monitor abundance. We conducted a 2-year study to evaluate four survey methods (scat deposition surveys, scent station surveys, spotlight surveys, and trapping) for detecting kit foxes and measuring fox abundance. We determined the probability of detection for each method, and examined the correlation between the relative abundance as estimated by each survey method and the known minimum kit fox abundance as determined by radio-collared animals. All surveys were conducted on 15 5-km transects during the 3 biological seasons of the kit fox. Scat deposition surveys had both the highest detection probabilities (p = 0.88) and were most closely related to minimum known fox abundance (r2 = 0.50, P = 0.001). The next best method for kit fox detection was the scent station survey (p = 0.73), which had the second highest correlation to fox abundance (r2 = 0.46, P<0.001). For detecting kit foxes in a low density population we suggest using scat deposition transects during the breeding season. Scat deposition surveys have low costs, resilience to weather, low labor requirements, and pose no risk to the study animals. The breeding season was ideal for monitoring kit fox population size, as detections consisted of the resident population and had the highest detection probabilities. Using appropriate monitoring techniques will be critical for future conservation actions for this rare desert carnivore.
Finding a Fox: An Evaluation of Survey Methods to Estimate Abundance of a Small Desert Carnivore
Dempsey, Steven J.; Gese, Eric M.; Kluever, Bryan M.
2014-01-01
The status of many carnivore species is a growing concern for wildlife agencies, conservation organizations, and the general public. Historically, kit foxes (Vulpes macrotis) were classified as abundant and distributed in the desert and semi-arid regions of southwestern North America, but the species is now considered rare throughout its range. Survey methods have been evaluated for kit foxes, but often in populations where abundance is high, and there is little consensus on which technique is best to monitor abundance. We conducted a 2-year study to evaluate four survey methods (scat deposition surveys, scent station surveys, spotlight surveys, and trapping) for detecting kit foxes and measuring fox abundance. We determined the probability of detection for each method, and examined the correlation between the relative abundance as estimated by each survey method and the known minimum kit fox abundance as determined by radio-collared animals. All surveys were conducted on 15 5-km transects during the 3 biological seasons of the kit fox. Scat deposition surveys had both the highest detection probabilities (p = 0.88) and were most closely related to minimum known fox abundance (r2 = 0.50, P = 0.001). The next best method for kit fox detection was the scent station survey (p = 0.73), which had the second highest correlation to fox abundance (r2 = 0.46, P<0.001). For detecting kit foxes in a low density population we suggest using scat deposition transects during the breeding season. Scat deposition surveys have low costs, resilience to weather, low labor requirements, and pose no risk to the study animals. The breeding season was ideal for monitoring kit fox population size, as detections consisted of the resident population and had the highest detection probabilities. Using appropriate monitoring techniques will be critical for future conservation actions for this rare desert carnivore. PMID:25148102
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-01-01
Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144
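A short illustration of the core quantity behind decision curve analysis (not the paper's extensions for overfit, censoring, or competing risks): the net benefit of a prediction model at a threshold probability pt, compared with the treat-all and treat-none strategies. The outcomes and predicted probabilities are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
event = rng.random(n) < 0.20                                   # simulated outcomes
pred = np.clip(0.20 + 0.35 * (event - 0.20) + rng.normal(0.0, 0.15, n), 0.01, 0.99)

def net_benefit(pred, event, pt):
    treat = pred >= pt
    tp = np.sum(treat & event) / len(event)
    fp = np.sum(treat & ~event) / len(event)
    return tp - fp * pt / (1.0 - pt)

prevalence = event.mean()
for pt in (0.05, 0.10, 0.20, 0.30):
    nb_model = net_benefit(pred, event, pt)
    nb_all = prevalence - (1.0 - prevalence) * pt / (1.0 - pt)  # treat-everyone strategy
    print(f"pt={pt:.2f}  model NB={nb_model:.3f}  treat-all NB={nb_all:.3f}  treat-none NB=0.000")
```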
NASA Astrophysics Data System (ADS)
Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.
2012-12-01
Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Cannon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7 were used to estimate the probability of damage to structures located within the inundation zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced-concrete) and age, which was used as a performance measure when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method were discussed. On-going work includes coupling the results of building damage and vulnerability to an economic input-output model. This model assesses trade between business sectors located inside and outside the inundation zone, and is used to measure the impact on the regional economy. Results highlight business sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard for civil buildings.
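A sketch of the parcel-level damage estimation step under simplifying assumptions: the probability of damage for each tax lot is read from a lognormal fragility curve of the local momentum flux, with a median capacity that depends on construction type. The medians, dispersions, and parcel values below are invented placeholders, not the published fragility curves or ASCE 7 parameters.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical median capacity (momentum flux, m^3/s^2) and dispersion per construction type.
FRAGILITY = {"wood": (20.0, 0.60), "steel": (60.0, 0.50), "reinforced-concrete": (120.0, 0.45)}

def damage_probability(momentum_flux, construction):
    median, beta = FRAGILITY[construction]
    return lognorm.cdf(momentum_flux, s=beta, scale=median)   # lognormal fragility curve

# Invented parcels: (construction type, local momentum flux from the hazard model).
parcels = [("wood", 35.0), ("wood", 8.0), ("steel", 70.0), ("reinforced-concrete", 90.0)]
for construction, flux in parcels:
    print(f"{construction:20s} flux={flux:6.1f}  P(damage)={damage_probability(flux, construction):.2f}")
print(f"expected number of damaged buildings: "
      f"{sum(damage_probability(f, c) for c, f in parcels):.2f}")
```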
Axionic landscape for Higgs coupling near-criticality
NASA Astrophysics Data System (ADS)
Cline, James M.; Espinosa, José R.
2018-02-01
The measured value of the Higgs quartic coupling λ is peculiarly close to the critical value above which the Higgs potential becomes unstable, when extrapolated to high scales by renormalization group running. It is tempting to speculate that there is an anthropic reason behind this near-criticality. We show how an axionic field can provide a landscape of vacuum states in which λ scans. These states are populated during inflation to create a multiverse with different quartic couplings, with a probability distribution P that can be computed. If P is peaked in the anthropically forbidden region of Higgs instability, then the most probable universe compatible with observers would be close to the boundary, as observed. We discuss three scenarios depending on the Higgs vacuum selection mechanism: decay by quantum tunneling, by thermal fluctuations, or by inflationary fluctuations.
Diversity of Rainfall Thresholds for early warning of hydro-geological disasters
NASA Astrophysics Data System (ADS)
De Luca, Davide L.; Versace, Pasquale
2017-06-01
For early warning of disasters induced by precipitation (such as floods and landslides), different kinds of rainfall thresholds are adopted, which differ from each other on the basis of the adopted hypotheses. In some cases, they represent the occurrence probability of an event (landslide or flood), in other cases the exceedance probability of a critical value for an assigned indicator I (a function of rainfall heights), and in further cases they only indicate the exceedance of a prefixed percentage of a critical value for I, indicated as Icr. For each scheme, it is usual to define three different criticality levels (ordinary, moderate and severe), which are associated with warning levels, according to emergency plans. This work briefly discusses different schemes of rainfall thresholds, focusing attention on landslide prediction, with some applications to a real case study in Calabria region (southern Italy).
Is This Radical Enough? Curriculum Reform, Change, and the Language of Probability.
ERIC Educational Resources Information Center
Deever, Bryan
1996-01-01
Discusses the need for and construction of a language of probability to complement the extant languages of critique and possibility in critical curriculum theory. Actions framed in the language, and the ensuing dialogue, could then be forces in radical reform through a process of infiltration, appropriation, and reconfiguration. (SM)
We show that a conditional probability analysis that utilizes a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data. The critical step in this approach is transforming the response ...
Phase transitions in community detection: A solvable toy model
NASA Astrophysics Data System (ADS)
Ver Steeg, Greg; Moore, Cristopher; Galstyan, Aram; Allahverdyan, Armen
2014-05-01
Recently, it was shown that there is a phase transition in the community detection problem. This transition was first computed using the cavity method, and has been proved rigorously in the case of q = 2 groups. However, analytic calculations using the cavity method are challenging since they require us to understand probability distributions of messages. We study analogous transitions in the so-called “zero-temperature inference” model, where this distribution is supported only on the most likely messages. Furthermore, whenever several messages are equally likely, we break the tie by choosing among them with equal probability, corresponding to an infinitesimal random external field. While the resulting analysis overestimates the thresholds, it reproduces some of the qualitative features of the system. It predicts a first-order detectability transition whenever q > 2 (as opposed to q > 4 according to the finite-temperature cavity method). It also has a regime analogous to the “hard but detectable” phase, where the community structure can be recovered, but only when the initial messages are sufficiently accurate. Finally, we study a semisupervised setting where we are given the correct labels for a fraction ρ of the nodes. For q > 2, we find a regime where the accuracy jumps discontinuously at a critical value of ρ.
NASA Astrophysics Data System (ADS)
Zeimetz, Fraenz; Schaefli, Bettina; Artigue, Guillaume; García Hernández, Javier; Schleiss, Anton J.
2017-08-01
Extreme floods are commonly estimated with the help of design storms and hydrological models. In this paper, we propose a new method to take into account the relationship between precipitation intensity (P) and air temperature (T) to account for potential snow accumulation and melt processes during the elaboration of design storms. The proposed method is based on a detailed analysis of this P-T relationship in the Swiss Alps. In this region, no upper precipitation intensity limit is detectable for increasing temperature. However, a relationship between the highest measured temperature before a precipitation event and the duration of the subsequent event could be identified. An explanation for this relationship is proposed here based on the temperature gradient measured before the precipitation events. The relevance of these results is discussed for an example of Probable Maximum Precipitation-Probable Maximum Flood (PMP-PMF) estimation for the high mountainous Mattmark dam catchment in the Swiss Alps. The proposed method to associate a critical air temperature to a PMP is easily transposable to similar alpine settings where meteorological soundings as well as ground temperature and precipitation measurements are available. In the future, the analyses presented here might be further refined by distinguishing between precipitation event types (frontal versus orographic).
Dangerous "spin": the probability myth of evidence-based prescribing - a Merleau-Pontyian approach.
Morstyn, Ron
2011-08-01
The aim of this study was to examine the logical positivist statistical probability statements used to support and justify "evidence-based" prescribing rules in psychiatry when viewed from the major philosophical theories of probability, and to propose "phenomenological probability", based on Maurice Merleau-Ponty's philosophy of "phenomenological positivism", as a better clinical and ethical basis for psychiatric prescribing. The logical positivist statistical probability statements which are currently used to support "evidence-based" prescribing rules in psychiatry have little clinical or ethical justification when subjected to critical analysis from any of the major theories of probability and represent dangerous "spin" because they necessarily exclude the individual, intersubjective and ambiguous meaning of mental illness. A concept of "phenomenological probability" founded on Merleau-Ponty's philosophy of "phenomenological positivism" overcomes the clinically destructive "objectivist" and "subjectivist" consequences of logical positivist statistical probability and allows psychopharmacological treatments to be appropriately integrated into psychiatric treatment.
System Analysis by Mapping a Fault-tree into a Bayesian-network
NASA Astrophysics Data System (ADS)
Sheng, B.; Deng, C.; Wang, Y. H.; Tang, L. H.
2018-05-01
In view of the limitations of fault tree analysis in reliability assessment, the Bayesian Network (BN) has been studied as an alternative technology. After a brief introduction to the method for mapping a Fault Tree (FT) into an equivalent BN, equations used to calculate the structure importance degree, the probability importance degree and the critical importance degree are presented. Furthermore, the correctness of these equations is proved mathematically. In combination with an aircraft landing gear's FT, an equivalent BN is developed and analysed. The results show that richer and more accurate information has been achieved through the BN method than through the FT, which demonstrates that the BN is a superior technique in both reliability assessment and fault diagnosis.
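A small sketch of the importance measures mentioned above, using commonly cited definitions that may differ in detail from the paper's equations: for a toy fault tree TOP = A OR (B AND C) with independent basic events, the probability importance (Birnbaum) of a component is the partial derivative of the top-event probability with respect to that component's failure probability, the critical importance rescales it by p_i/P_top, and the structure importance is the Birnbaum measure evaluated with all probabilities set to 0.5. The tree and probabilities are invented.

```python
# Hypothetical fault tree: TOP = A or (B and C), independent basic events.
def top_probability(p):
    pbc = p["B"] * p["C"]
    return p["A"] + pbc - p["A"] * pbc     # P(A or (B and C))

def birnbaum(p, comp, eps=1e-6):
    hi, lo = dict(p), dict(p)
    hi[comp] += eps
    lo[comp] -= eps
    return (top_probability(hi) - top_probability(lo)) / (2 * eps)   # dP_top / dp_i

p = {"A": 0.01, "B": 0.05, "C": 0.02}      # invented failure probabilities
p_top = top_probability(p)
for comp in p:
    b = birnbaum(p, comp)
    critical = b * p[comp] / p_top
    structural = birnbaum({k: 0.5 for k in p}, comp)
    print(f"{comp}: Birnbaum={b:.4f}  critical={critical:.4f}  structural={structural:.4f}")
```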
Magnetic field dependence of spin torque switching in nanoscale magnetic tunnel junctions
NASA Astrophysics Data System (ADS)
Yang, Liu; Rowlands, Graham; Katine, Jordan; Langer, Juergen; Krivorotov, Ilya
2012-02-01
Magnetic random access memory based on the spin transfer torque effect in nanoscale magnetic tunnel junctions (STT-RAM) is emerging as a promising candidate for embedded and stand-alone computer memory. An important performance parameter of STT-RAM is the stability of its free magnetic layer against thermal fluctuations. Measurements of the free layer switching probability as a function of sub-critical voltage at zero effective magnetic field (read disturb rate or RDR measurements) have been proposed as a method for quantitative evaluation of the free layer thermal stability at zero voltage. In this presentation, we report RDR measurements as a function of external magnetic field, which provide a test of the RDR method's self-consistency and reliability.
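A sketch of how read-disturb data of this kind are often reduced to a thermal stability factor, assuming a simple thermal-activation model for sub-critical switching: P_sw = 1 - exp(-t f0 exp(-Δ(1 - V/Vc))), so that ln(-ln(1 - P_sw)) is linear in V and its slope gives Δ/Vc. Whether this particular model applies to the measurements in the abstract is an assumption, and all numbers below are invented.

```python
import numpy as np

# "True" parameters used to generate synthetic read-disturb data.
f0, t_pulse = 1.0e9, 1.0e-3            # attempt frequency (Hz), pulse length (s)
delta_true, v_c = 60.0, 0.60           # thermal stability factor and critical voltage

v = np.linspace(0.30, 0.45, 7)         # sub-critical read voltages
p_sw = 1.0 - np.exp(-t_pulse * f0 * np.exp(-delta_true * (1.0 - v / v_c)))

y = np.log(-np.log(1.0 - p_sw))        # linearization: y = ln(t*f0) - Delta*(1 - V/Vc)
slope, intercept = np.polyfit(v, y, 1)
delta_fit = slope * v_c                # needs an independent estimate of Vc
print(f"fitted thermal stability factor: {delta_fit:.1f} (true value {delta_true})")
```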
Kawamoto, Hirokazu; Takayasu, Hideki; Jensen, Henrik Jeldtoft; Takayasu, Misako
2015-01-01
Through precise numerical analysis, we reveal a new type of universal loopless percolation transition in randomly removed complex networks. As an example of a real-world network, we apply our analysis to a business relation network consisting of approximately 3,000,000 links among 300,000 firms and observe the transition with critical exponents close to the mean-field values taking into account the finite size effect. We focus on the largest cluster at the critical point, and introduce survival probability as a new measure characterizing the robustness of each node. We also discuss the relation between survival probability and k-shell decomposition.
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation involve uncertainties, cannot reflect the reliability status of the RPS dynamically, and give little support to maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the digital (safety-critical) RPS, by which the relationship between the reliability and the response time of the RPS is constructed. The reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method as an example. The results show that the proposed method is capable of estimating the RPS reliability effectively and provides support for the maintenance and troubleshooting of digital RPS systems.
Reliably detectable flaw size for NDE methods that use calibration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine reliably detectable flaw size.
Role of conviction in nonequilibrium models of opinion formation
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Anteneodo, Celia
2012-12-01
We analyze the critical behavior of a class of discrete opinion models in the presence of disorder. Within this class, each agent opinion takes a discrete value (±1 or 0) and its time evolution is ruled by two terms, one representing agent-agent interactions and the other the degree of conviction or persuasion (a self-interaction). The mean-field limit, where each agent can interact evenly with any other, is considered. Disorder is introduced in the strength of both interactions, with either quenched or annealed random variables. With probability p (1-p), a pairwise interaction reflects a negative (positive) coupling, while the degree of conviction also follows a binary probability distribution (two different discrete probability distributions are considered). Numerical simulations show that a nonequilibrium continuous phase transition, from a disordered state to a state with a prevailing opinion, occurs at a critical point pc that depends on the distribution of the convictions, with the transition being spoiled in some cases. We also show how the critical line, for each model, is affected by the update scheme (either parallel or sequential) as well as by the kind of disorder (either quenched or annealed).
Taillefumier, Thibaud; Magnasco, Marcelo O
2013-04-16
Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss-Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.
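As a baseline for the smooth-boundary case discussed above, the following sketch estimates the first-passage-time density of a driftless Wiener process to a constant level by Monte Carlo and compares it with the closed-form density f(t) = a / sqrt(2*pi*t^3) * exp(-a^2/(2t)); the rough-boundary phase transition itself is not reproduced here, and all numbers are illustrative assumptions.

```python
import numpy as np

a, dt, t_max, n_paths = 1.0, 2e-3, 5.0, 5000
rng = np.random.default_rng(1)
n_steps = int(t_max / dt)

# Monte Carlo first-passage times of a standard Wiener process to the level a
fpt = np.full(n_paths, np.nan)
x = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for k in range(1, n_steps + 1):
    x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    crossed = alive & (x >= a)
    fpt[crossed] = k * dt
    alive &= ~crossed

# Unconditional density estimate (slightly biased low by the time discretization)
counts, edges = np.histogram(fpt[~np.isnan(fpt)], bins=50, range=(0.0, t_max))
mc_density = counts / (n_paths * (edges[1] - edges[0]))

t_ref = 1.0
analytic = a / np.sqrt(2.0 * np.pi * t_ref**3) * np.exp(-a**2 / (2.0 * t_ref))
print("MC density near t=1:", round(float(mc_density[np.searchsorted(edges, t_ref) - 1]), 3))
print("analytic density at t=1:", round(float(analytic), 3))
```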
A simple model for the critical mass of a nuclear weapon
NASA Astrophysics Data System (ADS)
Reed, B. Cameron
2018-07-01
A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.
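For orientation only, and not the paper's probability-based model: the standard one-group diffusion-theory criticality condition that simple estimates of this kind are usually benchmarked against can be written as

```latex
% Standard one-group diffusion-theory benchmark (not the probability-based model of the paper):
% a bare sphere of fissile material is critical when the geometric buckling equals the
% material buckling, which fixes the critical radius R_c and hence the critical mass M_c.
\[
  \left(\frac{\pi}{R_c + d}\right)^{2} = B_m^{2} = \frac{\nu\Sigma_f - \Sigma_a}{D},
  \qquad
  M_c = \rho\,\tfrac{4}{3}\pi R_c^{3},
\]
% with d the extrapolation distance, \nu\Sigma_f and \Sigma_a the macroscopic fission-neutron
% production and absorption cross sections, D the diffusion coefficient, and \rho the density.
```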
National Institute of Standards and Technology Data Gateway
SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access) This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.
Critical 2-D Percolation: Crossing Probabilities, Modular Forms and Factorization
NASA Astrophysics Data System (ADS)
Kleban, Peter
2007-03-01
We first consider crossing probabilities in critical 2-D percolation in rectangular geometries, derived via conformal field theory. These quantities are shown to exhibit interesting modular behavior [1], although the physical meaning of modular transformations in this context is not clear. We show that in many cases these functions are completely characterized by very simple transformation properties. In particular, Cardy's function for the percolation crossing probability (including the conformal dimension 1/3) follows from a simple modular argument. We next consider the probability of crossing between various points for percolation in the upper half-plane. For two points, with the point x on the edge of the system, the probability is P(x,z) = k_1 y^{5/48} φ(x,z)^{1/3}, where φ is the potential at z of a 2-D dipole located at x, and k_1 is a non-universal constant. For three points, one finds the exact and universal factorization [2,3] P(x_1,x_2,z) = C √(P(x_1,z) P(x_2,z) P(x_1,x_2)), with C = 2^{7/2} π^{5/2} / (3^{3/4} Γ(1/3)^{9/2}). These results are calculated by use of conformal field theory. Computer simulations verify them very precisely. Furthermore, simulations show that the same factorization holds asymptotically, with the same value of C, when one or both of the points x_i are moved from the edge into the bulk. [1] Peter Kleban and Don Zagier, Crossing probabilities and modular forms, J. Stat. Phys. 113, 431-454 (2003) [arXiv: math-ph/0209023]. [2] Peter Kleban, Jacob J. H. Simmons, and Robert M. Ziff, Anchored critical percolation clusters and 2-d electrostatics, Phys. Rev. Lett. 97, 115702 (2006) [arXiv: cond-mat/0605120]. [3] Jacob J. H. Simmons and Peter Kleban, in preparation.
Risk Assessment Methodology Based on the NISTIR 7628 Guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Sheldon, Frederick T; Hauser, Katie R
2013-01-01
Earlier work describes computational models of critical infrastructure that allow an analyst to estimate the security of a system in terms of the impact of loss per stakeholder resulting from security breakdowns. Here, we consider how to identify, monitor and estimate risk impact and probability for different smart grid stakeholders. Our constructive method leverages currently available standards and defined failure scenarios. We utilize the National Institute of Standards and Technology (NIST) Interagency or Internal Reports (NISTIR) 7628 as a basis to apply Cyberspace Security Econometrics system (CSES) for comparing design principles and courses of action in making security-related decisions.
Kempen, John H.; Gangaputra, Sapna; Daniel, Ebenezer; Levy-Clarke, Grace A.; Nussenblatt, Robert B.; Rosenbaum, James T.; Suhler, Eric B.; Thorne, Jennifer E.; Foster, C. Stephen; Jabs, Douglas A.; Helzlsouer, Kathy J.
2008-01-01
Purpose: To critically assess potentially carcinogenic effects of immunosuppressive therapy in the ocular inflammation setting. Design: Focused evidence assessment. Methods: Relevant publications were identified by MEDLINE and EMBASE queries and reference list searches. Results: Extrapolation from transplant, rheumatology, skin disease and inflammatory bowel disease cohorts to the ocular inflammation setting suggests that: 1) alkylating agents increase hematologic malignancy risk and cyclophosphamide increases bladder cancer risk, but less so with ≤18 months’ duration of therapy and hydration, respectively; 2) calcineurin inhibitors and azathioprine probably do not increase total cancer risk to a detectable degree, except perhaps that some other risk factors (uncommon in ocular inflammation patients) might interact with the former to raise risk; 3) Tumor Necrosis Factor (TNF) inhibitors may accelerate diagnosis of cancer in the first 6–12 months, but probably do not increase long-term cancer risk; and 4) changes in risk with methotrexate, mycophenolate mofetil, and daclizumab appear negligible, although non-transplant data are limited for the latter agents. Immunosuppression in general may increase skin cancer risk in a sun-exposure dependent manner. Conclusion: Use of alkylating agents for a limited duration seems justifiable for severe, vision-threatening disease, but otherwise cancer risk may be a relevant constraint on use of this approach. Antimetabolites, daclizumab, TNF inhibitors, and calcineurin inhibitors probably do not increase cancer risk to a degree that outweighs the expected benefits of therapy. Monitoring for skin cancer may be useful for highly sun-exposed patients. Data from ocular inflammation patients are needed to confirm the conclusions made in this analysis by extrapolation. PMID:18579112
Moro, Marilyn; Westover, M Brandon; Kelly, Jessica; Bianchi, Matt T
2016-03-01
Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. © 2016 American Academy of Sleep Medicine.
Polynomial sequences for bond percolation critical thresholds
Scullard, Christian R.
2011-09-22
In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. P03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
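The polynomial construction itself is not reproduced here. As a conceptual cross-check of what a bond threshold means, the sketch below estimates the spanning probability of bond percolation on a plain square lattice (exact threshold 1/2) by Monte Carlo with a union-find structure; the lattice, system size and trial counts are illustrative assumptions.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def spans(L, p, rng):
    """One realization of bond percolation on an L x L square lattice; returns True
    if open bonds connect the left and right boundaries (via two virtual nodes)."""
    parent = list(range(L * L + 2))
    left, right = L * L, L * L + 1
    for y in range(L):
        union(parent, left, y * L)             # first column tied to 'left'
        union(parent, right, y * L + L - 1)    # last column tied to 'right'
    for y in range(L):
        for x in range(L):
            i = y * L + x
            if x + 1 < L and rng.random() < p:     # open horizontal bond
                union(parent, i, i + 1)
            if y + 1 < L and rng.random() < p:     # open vertical bond
                union(parent, i, i + L)
    return find(parent, left) == find(parent, right)

rng = np.random.default_rng(2)
L, trials = 32, 200
for p in (0.45, 0.50, 0.55):           # the exact square-lattice bond threshold is 1/2
    frac = sum(spans(L, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.2f}: spanning fraction ~ {frac:.2f}")
```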
Huang, Weiqing; Fan, Hongbo; Qiu, Yongfu; Cheng, Zhiyu; Xu, Pingru; Qian, Yu
2016-05-01
Recently, China has frequently experienced large-scale, severe and persistent haze pollution due to surging urbanization and industrialization and a rapid growth in the number of motor vehicles and energy consumption. Vehicle emissions due to the consumption of large amounts of fossil fuel are no doubt a critical factor in the haze pollution. This work is focused on the causation mechanism of haze pollution related to vehicle emissions for Guangzhou city, employing the Fault Tree Analysis (FTA) method for the first time. With the establishment of the fault tree system of "Haze weather-Vehicle exhausts explosive emission", all of the important risk factors are discussed and identified by using this deductive FTA method. The qualitative and quantitative assessments of the fault tree system are carried out based on the structure, probability and critical importance degree analysis of the risk factors. The study may provide a new, simple and effective tool/strategy for the causation mechanism analysis and risk management of haze pollution in China. Copyright © 2016 Elsevier Ltd. All rights reserved.
Crossover from isotropic to directed percolation
NASA Astrophysics Data System (ADS)
Zhou, Zongzheng; Yang, Ji; Ziff, Robert M.; Deng, Youjin
2012-08-01
We generalize the directed percolation (DP) model by relaxing the strict directionality of DP such that propagation can occur in either direction but with anisotropic probabilities. We denote the probabilities as p↓ = p·pd and p↑ = p·(1-pd), with p representing the average occupation probability and pd controlling the anisotropy. The Leath-Alexandrowicz method is used to grow a cluster from an active seed site. We call this model with two main growth directions biased directed percolation (BDP). Standard isotropic percolation (IP) and DP are the two limiting cases of the BDP model, corresponding to pd = 1/2 and pd = 0 or 1, respectively. In this work, besides IP and DP, we also consider the intermediate regime 1/2 < pd < 1.
Estimating extreme losses for the Florida Public Hurricane Model—part II
NASA Astrophysics Data System (ADS)
Gulati, Sneh; George, Florence; Hamid, Shahid
2018-02-01
Rising global temperatures are leading to an increase in the number of extreme events and losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to insurance companies, which seek to protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from FPHLM do not tend to be very heavy tailed, and therefore, neither the popular Hill's method nor the moment estimator works well. However, Pickands' estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
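A minimal sketch of the classical Pickands (1975) tail-index estimator mentioned above, applied to synthetic Pareto-distributed losses; the FPHLM loss data and the exact thresholding used by the authors are not reproduced, and variable names are hypothetical.

```python
import numpy as np

def pickands_xi(sample, k):
    """Classical Pickands (1975) estimator of the extreme-value index, built from
    the k-th, 2k-th and 4k-th largest observations (requires 4*k <= sample size)."""
    x = np.sort(np.asarray(sample))[::-1]        # descending order statistics
    if 4 * k > x.size:
        raise ValueError("need 4*k <= sample size")
    return np.log((x[k - 1] - x[2 * k - 1]) / (x[2 * k - 1] - x[4 * k - 1])) / np.log(2.0)

# Synthetic losses with a moderately heavy tail (Pareto shape 4, true index 0.25)
rng = np.random.default_rng(3)
losses = (rng.pareto(4.0, 10000) + 1.0) * 1e6
for k in (50, 100, 200):
    print(f"k = {k}: xi_hat = {pickands_xi(losses, k):.3f}")
```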
Target intersection probabilities for parallel-line and continuous-grid types of search
McCammon, R.B.
1977-01-01
The expressions for calculating the probability of intersection of hidden targets of different sizes and shapes for parallel-line and continuous-grid types of search can be formulated by using the concept of conditional probability. When the prior probability of the orientation of a hidden target is represented by a uniform distribution, the calculated posterior probabilities are identical with the results obtained by the classic methods of probability. For hidden targets of different sizes and shapes, the following generalizations about the probability of intersection can be made: (1) to a first approximation, the probability of intersection of a hidden target is proportional to the ratio of the greatest dimension of the target (viewed in plane projection) to the minimum line spacing of the search pattern; (2) the shape of the hidden target does not greatly affect the probability of intersection when the largest dimension of the target is small relative to the minimum spacing of the search pattern; (3) the probability of intersecting a target twice for a particular type of search can be used as a lower bound if there is an element of uncertainty of detection for a particular type of tool; (4) the geometry of the search pattern becomes more critical when the largest dimension of the target equals or exceeds the minimum spacing of the search pattern; (5) for elongate targets, the probability of intersection is greater for parallel-line search than for an equivalent continuous square-grid search when the largest dimension of the target is less than the minimum spacing of the search pattern, whereas the opposite is true when the largest dimension exceeds the minimum spacing; and (6) the probability of intersection for nonorthogonal continuous-grid search patterns is not greatly different from the probability of intersection for the equivalent orthogonal continuous-grid pattern when the orientation of the target is unknown. The probability of intersection for an elliptically shaped target can be approximated by treating the ellipse as intermediate between a circle and a line. A search conducted along a continuous rectangular grid can be represented as intermediate between a search along parallel lines and along a continuous square grid. On this basis, an upper and lower bound for the probability of intersection of an elliptically shaped target for a continuous rectangular grid can be calculated. Charts have been constructed that permit the values for these probabilities to be obtained graphically. The use of conditional probability allows the explorationist greater flexibility in considering alternate search strategies for locating hidden targets. © 1977 Plenum Publishing Corp.
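Generalization (1) above can be checked numerically. The sketch below estimates, by Monte Carlo, the probability that an elliptical target with uniformly random orientation and position is intersected by one of a set of parallel search lines; it assumes the target's largest dimension does not exceed the line spacing, and all numbers are illustrative.

```python
import numpy as np

def intersection_probability(a, b, spacing, n=200_000, seed=4):
    """Monte Carlo probability that an ellipse with semi-axes a >= b, uniformly random
    orientation and position, is cut by one of a set of parallel lines with the given
    spacing (assumes the largest dimension 2a does not exceed the spacing)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi, n)                 # orientation of the major axis
    # half-width of the ellipse's projection perpendicular to the lines
    half_w = np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)
    centre = rng.uniform(0.0, spacing, n)              # centre offset from the nearest line below
    return float(np.mean((centre < half_w) | (centre > spacing - half_w)))

spacing = 1.0
for a, b in [(0.25, 0.25), (0.25, 0.05), (0.25, 0.0)]:   # circle, ellipse, line segment
    print(f"2a/d = {2 * a / spacing:.2f}, b/a = {b / a:.2f}: "
          f"P(intersect) ~ {intersection_probability(a, b, spacing):.3f}")
# For b = 0 this approaches the Buffon-needle value 2*(2a)/(pi*spacing) ~ 0.318 here.
```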
Estimating adult sex ratios in nature.
Ancona, Sergio; Dénes, Francisco V; Krüger, Oliver; Székely, Tamás; Beissinger, Steven R
2017-09-19
Adult sex ratio (ASR, the proportion of males in the adult population) is a central concept in population and evolutionary biology, and is also emerging as a major factor influencing mate choice, pair bonding and parental cooperation in both human and non-human societies. However, estimating ASR is fraught with difficulties stemming from the effects of spatial and temporal variation in the numbers of males and females, and detection/capture probabilities that differ between the sexes. Here, we critically evaluate methods for estimating ASR in wild animal populations, reviewing how recent statistical advances can be applied to handle some of these challenges. We review methods that directly account for detection differences between the sexes using counts of unmarked individuals (observed, trapped or killed) and counts of marked individuals using mark-recapture models. We review a third class of methods that do not directly sample the number of males and females, but instead estimate the sex ratio indirectly using relationships that emerge from demographic measures, such as survival, age structure, reproduction and assumed dynamics. We recommend that detection-based methods be used for estimating ASR in most situations, and point out that studies are needed that compare different ASR estimation methods and control for sex differences in dispersal. This article is part of the themed issue 'Adult sex ratios and reproductive decisions: a critical re-examination of sex differences in human and animal societies'. © 2017 The Author(s).
Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat
2008-11-26
Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.
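The central quantity of decision curve analysis is the net benefit at a threshold probability p_t. Below is a minimal sketch computing a decision curve directly from predicted probabilities and outcomes on synthetic data; it is not the authors' software and does not implement the extensions (overfit correction, confidence intervals, censored data) described above.

```python
import numpy as np

def decision_curve(y_true, y_prob, thresholds):
    """Net benefit of 'treat if predicted probability >= p_t' at each threshold:
    NB(p_t) = TP/n - (FP/n) * p_t / (1 - p_t)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = y_true.size
    nb = []
    for pt in thresholds:
        treat = y_prob >= pt
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(nb)

rng = np.random.default_rng(5)
y_prob = rng.uniform(0.0, 1.0, 1000)
y_true = (rng.uniform(0.0, 1.0, 1000) < y_prob).astype(int)   # well-calibrated toy model
thresholds = np.linspace(0.05, 0.50, 10)
print(np.round(decision_curve(y_true, y_prob, thresholds), 3))
# Reference strategies: 'treat none' has NB = 0; 'treat all' has
# NB = prevalence - (1 - prevalence) * p_t / (1 - p_t).
```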
Assessing groundwater quality for irrigation using indicator kriging method
NASA Astrophysics Data System (ADS)
Delbari, Masoomeh; Amiri, Meysam; Motlagh, Masoud Bahraini
2016-11-01
One of the key parameters influencing sprinkler irrigation performance is water quality. In this study, the spatial variability of groundwater quality parameters (EC, SAR, Na+, Cl-, HCO3- and pH) was investigated by geostatistical methods and the most suitable areas for implementation of sprinkler irrigation systems in terms of water quality were determined. The study was performed in Fasa county of Fars province using 91 water samples. Results indicated that all parameters are moderately to strongly spatially correlated over the study area. The spatial distribution of pH and HCO3- was mapped using ordinary kriging. The probability of concentrations of EC, SAR, Na+ and Cl- exceeding a threshold limit in groundwater was obtained using indicator kriging (IK). The experimental indicator semivariograms were often fitted well by a spherical model for SAR, EC, Na+ and Cl-. For HCO3- and pH, an exponential model was fitted to the experimental semivariograms. Probability maps showed that the risk of EC, SAR, Na+ and Cl- exceeding the given critical threshold is higher in the lower half of the study area. The most suitable agricultural lands for sprinkler irrigation implementation were identified by evaluating all probability maps. The suitable area for sprinkler irrigation design was determined to be 25,240 hectares, about 34 percent of the total agricultural land, located in the northern and eastern parts. Overall, the results of this study showed that IK is an appropriate approach for risk assessment of groundwater pollution, which is useful for proper groundwater resources management.
Graph edit distance from spectral seriation.
Robles-Kelly, Antonio; Hancock, Edwin R
2005-03-01
This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
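A minimal sketch of the two ingredients described above, under simplifying assumptions: serial node orderings taken from the leading eigenvector of each adjacency matrix, followed by a plain unit-cost Levenshtein distance between crude node-label strings. The paper's MAP alignment and eigenvector-weighted edit costs are not reproduced here.

```python
import numpy as np

def seriation(adj):
    """Serial node order from the leading eigenvector of a symmetric adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    lead = vecs[:, np.argmax(vals)]
    lead = lead if lead.sum() >= 0 else -lead        # fix the arbitrary eigenvector sign
    return list(np.argsort(-lead))                   # descending component order

def levenshtein(s, t):
    """Unit-cost string edit distance (a stand-in for the eigenvector-weighted costs)."""
    d = np.zeros((len(s) + 1, len(t) + 1), dtype=int)
    d[:, 0] = np.arange(len(s) + 1)
    d[0, :] = np.arange(len(t) + 1)
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + int(s[i - 1] != t[j - 1]))
    return int(d[-1, -1])

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
B = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
order_a, order_b = seriation(A), seriation(B)
# Degree sequences along the serial orders serve as crude node 'labels' for the strings
string_a = [int(A[i].sum()) for i in order_a]
string_b = [int(B[i].sum()) for i in order_b]
print("orders:", order_a, order_b, "| edit distance:", levenshtein(string_a, string_b))
```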
An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines
Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John
2015-01-01
The excavation and production in underground mines are complicated processes which consist of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, thus delay the starting time for the next process and the completion time of the entire process. This paper presents a new approach to the process control for underground mining operations, e.g. drilling, bolting, mucking. This approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If the delay of the critical operation (which is on a critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign mucking machines new jobs to increase this amount at a maximum level by using a new mucking algorithm under external constraints. PMID:26062092
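For reference, a minimal CPM forward/backward pass on a small hypothetical network of mining operations (durations and precedence are invented); the paper's improvements to PERT/CPM and its mucking-assignment algorithm are not reproduced.

```python
# Minimal CPM sketch: earliest/latest start times and the critical path for a
# hypothetical set of mining operations (durations in hours, precedence invented).
activities = {                        # name: (duration, predecessors)
    "drilling":  (3.0, []),
    "surveying": (1.0, ["drilling"]),     # parallel branch with slack
    "charging":  (1.5, ["drilling"]),
    "blasting":  (0.5, ["charging"]),
    "mucking":   (4.0, ["blasting"]),
    "bolting":   (2.0, ["mucking"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF)
es, ef = {}, {}
for name, (dur, preds) in activities.items():     # insertion order is topological here
    es[name] = max((ef[p] for p in preds), default=0.0)
    ef[name] = es[name] + dur
duration = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS)
lf, ls = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in succs), default=duration)
    ls[name] = lf[name] - activities[name][0]

critical = [n for n in activities if abs(ls[n] - es[n]) < 1e-9]   # zero total float
print(f"project duration = {duration} h, critical path: {critical}")
```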
Cellular automata models for diffusion of information and highway traffic flow
NASA Astrophysics Data System (ADS)
Fuks, Henryk
In the first part of this work we study a family of deterministic models for highway traffic flow which generalize cellular automaton rule 184. This family is parameterized by the speed limit m and another parameter k that represents degree of 'anticipatory driving'. We compare two driving strategies with identical maximum throughput: 'conservative' driving with high speed limit and 'anticipatory' driving with low speed limit. Those two strategies are evaluated in terms of accident probability. We also discuss fundamental diagrams of generalized traffic rules and examine limitations of maximum achievable throughput. Possible modifications of the model are considered. For rule 184, we present exact calculations of the order parameter in a transition from the moving phase to the jammed phase using the method of preimage counting, and use this result to construct a solution to the density classification problem. In the second part we propose a probabilistic cellular automaton model for the spread of innovations, rumors, news, etc., in a social system. We start from simple deterministic models, for which exact expressions for the density of adopters are derived. For a more realistic model, based on probabilistic cellular automata, we study the influence of a range of interaction R on the shape of the adoption curve. When the probability of adoption is proportional to the local density of adopters, and individuals can drop the innovation with some probability p, the system exhibits a second order phase transition. Critical line separating regions of parameter space in which asymptotic density of adopters is positive from the region where it is equal to zero converges toward the mean-field line when the range of the interaction increases. In a region between R=1 critical line and the mean-field line asymptotic density of adopters depends on R, becoming zero if R is too small (smaller than some critical value). This result demonstrates the importance of connectivity in diffusion of information. We also define a new class of automata networks which incorporates non-local interactions, and discuss its applicability in modeling of diffusion of innovations.
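The base model of the first part, CA rule 184, is easy to simulate. The sketch below implements one parallel update on a ring; the generalized speed-limit and anticipation parameters, and the probabilistic innovation model of the second part, are not included.

```python
import numpy as np

def rule184_step(road):
    """One parallel update of CA rule 184 on a ring: a car (1) advances one cell
    per step iff the cell ahead is empty (0)."""
    ahead, behind = np.roll(road, -1), np.roll(road, 1)
    moves_out = (road == 1) & (ahead == 0)       # cars leaving their cell
    moves_in = (behind == 1) & (road == 0)       # cars arriving from the left
    return road - moves_out.astype(int) + moves_in.astype(int)

rng = np.random.default_rng(6)
road = (rng.uniform(size=60) < 0.4).astype(int)  # density 0.4 < 1/2, so free flow is expected
for _ in range(30):
    road = rule184_step(road)
flux = float(np.mean((road == 1) & (np.roll(road, -1) == 0)))   # fraction of cars moving next step
print("density:", round(float(road.mean()), 3), "flux:", round(flux, 3))
```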
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhshandeh, Mohsen; Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir; Mahdavi, Seied Rabi Mehdi
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication.
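For reference, the LEUD model named above is commonly written in the standard Lyman-Kutcher-Burman form shown below; this is a generic statement of the model, and the fitted parameter values other than the reported D50 of roughly 44 Gy are not reproduced here.

```latex
% Standard LEUD (Lyman + EUD-reduced DVH) form, shown here for reference only:
\[
  \mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\,dx,
  \qquad
  t = \frac{\mathrm{EUD} - D_{50}}{m\, D_{50}},
  \qquad
  \mathrm{EUD} = \Bigl(\sum_i v_i D_i^{\,1/n}\Bigr)^{n},
\]
% where (v_i, D_i) are the fractional-volume/dose bins of the DVH, D_50 is the uniform dose
% giving a 50% complication probability, m sets the slope, and n the volume effect.
```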
Design flood estimation in ungauged basins: probabilistic extension of the design-storm concept
NASA Astrophysics Data System (ADS)
Berk, Mario; Špačková, Olga; Straub, Daniel
2016-04-01
Design flood estimation in ungauged basins is an important hydrological task, which in engineering practice is typically solved with the design storm concept. However, neglecting the uncertainty in the hydrological response of the catchment through the assumption of average-recurrence-interval (ARI) neutrality between rainfall and runoff can lead to flawed design flood estimates. Additionally, selecting a single critical rainfall duration neglects the contribution of other rainfall durations to the probability of extreme flood events. In this study, the design flood problem is approached with concepts from structural reliability that enable a consistent treatment of multiple uncertainties in estimating the design flood. The uncertainty of key model parameters is represented probabilistically, and the First-Order Reliability Method (FORM) is used to compute the flood exceedance probability. As an important by-product, the FORM analysis provides the most likely parameter combination to lead to a flood with a certain exceedance probability; i.e., it enables one to find representative scenarios for, e.g., a 100 year or a 1000 year flood. Possible different rainfall durations are incorporated by formulating the event of a given design flood as a series system. The method is directly applicable in practice, since the description of the rainfall depth-duration characteristics requires the same inputs as the classical design storm methods, which are commonly provided by meteorological services. The proposed methodology is applied to a case study of the Trauchgauer Ach catchment in Bavaria; SCS Curve Number (CN) and unit hydrograph models are used for modeling the hydrological process. The results indicate, in accordance with past experience, that the traditional design storm concept underestimates design floods.
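A minimal FORM sketch in the spirit of the approach described, with the hydrological model replaced by a hypothetical limit-state function of two standard-normal variables (all distributions and numbers are assumptions): the design point returned by the optimization is the "most likely parameter combination" mentioned above, and the reliability index gives the exceedance probability.

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical limit state in standard normal space: failure when g(u) <= 0.
# A toy "flood peak" of two lognormal inputs stands in for the rainfall-runoff model.
def g(u):
    rain = np.exp(4.0 + 0.3 * u[0])     # assumed lognormal rainfall depth
    runoff = np.exp(0.5 + 0.1 * u[1])   # assumed lognormal runoff coefficient
    peak = 0.8 * runoff * rain          # toy flood peak
    return 120.0 - peak                 # capacity minus demand

# FORM design point: the point on g(u) = 0 closest to the origin in u-space
res = optimize.minimize(lambda u: 0.5 * u @ u, x0=np.array([1.0, 1.0]),
                        constraints=[{"type": "eq", "fun": g}], method="SLSQP")
u_star = res.x
beta = float(np.linalg.norm(u_star))        # reliability index
pf = float(stats.norm.cdf(-beta))           # FORM exceedance (failure) probability
print("design point (most likely failure combination):", np.round(u_star, 3))
print("beta =", round(beta, 3), " P_f ~", round(pf, 5))
```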
Wei, Wei; Larrey-Lassalle, Pyrène; Faure, Thierry; Dumoulin, Nicolas; Roux, Philippe; Mathias, Jean-Denis
2016-03-01
Comparative decision making is widely used to identify which option (system, product, service, etc.) has the smaller environmental footprint and to provide recommendations that help stakeholders take future decisions. However, uncertainty complicates the comparison and the decision making. Probability-based decision support in LCA is a way to help stakeholders in their decision-making process. It calculates the decision confidence probability, which expresses the probability of one option having a smaller environmental impact than another option. Here we apply reliability theory to approximate the decision confidence probability. We compare the traditional Monte Carlo method with a reliability method called the FORM method. The Monte Carlo method needs high computational time to calculate the decision confidence probability. The FORM method enables us to approximate the decision confidence probability with fewer simulations than the Monte Carlo method by approximating the response surface. Moreover, the FORM method calculates the associated importance factors that correspond to a sensitivity analysis in relation to the probability. The importance factors allow stakeholders to determine which factors influence their decision. Our results clearly show that the reliability method provides additional useful information to stakeholders as well as reducing the computational time.
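The quantity being approximated can be made concrete with a brute-force Monte Carlo version (hypothetical lognormal impact distributions); the FORM approximation described above targets the same probability with far fewer model evaluations.

```python
import numpy as np

# Monte Carlo estimate of the decision confidence probability P(impact_A < impact_B)
# for two options with hypothetical lognormal LCA impact distributions.
rng = np.random.default_rng(7)
n = 100_000
impact_a = rng.lognormal(mean=2.00, sigma=0.30, size=n)
impact_b = rng.lognormal(mean=2.15, sigma=0.35, size=n)
print(f"P(option A has the smaller impact) ~ {np.mean(impact_a < impact_b):.3f}")
```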
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
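The effect described above can be reproduced in a few lines: applying a matched filter to pure noise and taking the maximum over all positions yields statistics that exceed the single-position threshold far more often than the nominal false-alarm probability suggests. The template and numbers below are illustrative assumptions, and the peak-PDF correction proposed by the authors is not implemented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, sigma = 4096, 1.0
template = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)     # Gaussian line profile
template /= np.linalg.norm(template)                          # unit-norm template

noise = rng.normal(0.0, sigma, n)                             # pure noise, no signal present
mf = np.correlate(noise, template, mode="same") / sigma       # matched-filter output, unit variance

alpha = 0.001
naive_threshold = stats.norm.isf(alpha)      # valid only if the signal position were known
print("max MF statistic over all positions:", round(float(mf.max()), 2),
      "| naive 0.1% threshold:", round(float(naive_threshold), 2))
# The maximum is taken over ~n positions, so exceedances of the naive threshold occur
# far more often than alpha suggests (roughly n*alpha expected for weakly correlated samples).
```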
How to perform a critically appraised topic: part 2, appraise, evaluate, generate, and recommend.
Kelly, Aine Marie; Cronin, Paul
2011-11-01
This article continues the discussion of a critically appraised topic started in Part 1. A critically appraised topic is a practical tool for learning and applying critical appraisal skills. This article outlines steps 4-7 involved in performing a critically appraised topic for studies of diagnostic tests: Appraise, Appraise the literature; Evaluate, evaluate the strength of the evidence from the literature; Generate, generate graphs of conditional probability; and Recommend, draw conclusions and make recommendations. For steps 4-7 of performing a critically appraised topic, the main study results are summarized and translated into clinically useful measures of accuracy, efficacy, or risk.
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(alpha) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(alpha) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
Active controls technology to maximize structural efficiency
NASA Technical Reports Server (NTRS)
Hoy, J. M.; Arnold, J. M.
1978-01-01
The implication of the dependence on active controls technology during the design phase of transport structures is considered. Critical loading conditions are discussed along with probable ways of alleviating these loads. Why fatigue requirements may be critical and can only be partially alleviated is explained. The significance of certain flutter suppression system criteria is examined.
Information Entropy Analysis of the H1N1 Genetic Code
NASA Astrophysics Data System (ADS)
Martwick, Andy
2010-03-01
During the current H1N1 pandemic, viral samples are being obtained from large numbers of infected people world-wide and are being sequenced on the NCBI Influenza Virus Resource Database. The information entropy of the sequences was computed from the probability of occurrence of each nucleotide base at every position of each set of sequences using Shannon's definition of information entropy, H = Σ_b p_b log_2(1/p_b), where H is the observed information entropy at each nucleotide position and p_b is the probability of the base b among the nucleotides A, C, G, U. Information entropy of the current H1N1 pandemic is compared to reference human and swine H1N1 entropy. As expected, the current H1N1 entropy is in a low entropy state and has a very large mutation potential. Using the entropy method in mature genes we can identify low entropy regions of nucleotides that generally correlate to critical protein function.
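A minimal sketch of the per-position entropy computation described above, applied to a toy alignment (the NCBI H1N1 sequences are not included):

```python
import numpy as np

def positional_entropy(sequences):
    """Shannon entropy H = sum_b p_b * log2(1/p_b) at each alignment position,
    with p_b the observed frequency of base b among {A, C, G, U}."""
    columns = np.array([list(s) for s in sequences]).T
    entropies = []
    for col in columns:
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.array(entropies)

# Toy aligned sequences of equal length (hypothetical, not H1N1 data)
toy = ["AUGCA", "CUGCA", "GUGCU", "UUGCA"]
print(np.round(positional_entropy(toy), 3))   # high entropy at variable positions, zero where conserved
```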
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program, intended to provide the technology for advanced control systems giving aircraft enhanced performance and operational capability, is addressed. A detailed analysis of the experimental system was performed to estimate the probabilities of two significant safety-critical events: (1) loss of the primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in the design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program written in a modular fashion that duplicates the structure of these equations.
2014-01-01
Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented by a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids using the traditional possible-world model. Finally, based on the algorithm for probability subgraph isomorphism, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: The experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions: The algorithm for probability graph isomorphism evaluation based on circuit simulation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns, and it can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
Real-time segmentation of burst suppression patterns in critical care EEG monitoring.
Brandon Westover, M; Shafi, Mouhsin M; Ching, Shinung; Chemali, Jessica J; Purdon, Patrick L; Cash, Sydney S; Brown, Emery N
2013-09-30
Develop a real-time algorithm to automatically discriminate suppressions from non-suppressions (bursts) in electroencephalograms of critically ill adult patients. A real-time method for segmenting adult ICU EEG data into bursts and suppressions is presented based on thresholding local voltage variance. Results are validated against manual segmentations by two experienced human electroencephalographers. We compare inter-rater agreement between manual EEG segmentations by experts with inter-rater agreement between human vs automatic segmentations, and investigate the robustness of segmentation quality to variations in algorithm parameter settings. We further compare the results of using these segmentations as input for calculating the burst suppression probability (BSP), a continuous measure of depth-of-suppression. Automated segmentation was comparable to manual segmentation, i.e. algorithm-vs-human agreement was comparable to human-vs-human agreement, as judged by comparing raw EEG segmentations or the derived BSP signals. Results were robust to modest variations in algorithm parameter settings. Our automated method satisfactorily segments burst suppression data across a wide range of adult ICU EEG patterns. Performance is comparable to or exceeds that of manual segmentation by human electroencephalographers. Automated segmentation of burst suppression EEG patterns is an essential component of quantitative brain activity monitoring in critically ill and anesthetized adults. The segmentations produced by our algorithm provide a basis for accurate tracking of suppression depth. Copyright © 2013 Elsevier B.V. All rights reserved.
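A minimal sketch of variance-threshold segmentation and a smoothed burst suppression probability in the spirit of the method described; the window lengths, threshold and synthetic data below are illustrative assumptions, not the validated parameters.

```python
import numpy as np

def segment_suppressions(eeg, fs, win_s=0.5, var_threshold=25.0):
    """Binary suppression mask: 1 where the moving-window variance of the EEG (uV)
    falls below var_threshold (uV^2). Window and threshold are illustrative only."""
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    mean = np.convolve(eeg, kernel, mode="same")
    mean_sq = np.convolve(eeg ** 2, kernel, mode="same")
    local_var = mean_sq - mean ** 2
    return (local_var < var_threshold).astype(int)

def burst_suppression_probability(mask, fs, smooth_s=30.0):
    """BSP as a causal moving average of the suppression mask."""
    win = max(1, int(smooth_s * fs))
    kernel = np.ones(win) / win
    return np.convolve(mask, kernel, mode="full")[: mask.size]

# Synthetic example: alternating high-variance 'bursts' and low-variance 'suppressions'
fs = 100.0
rng = np.random.default_rng(9)
segments = [rng.normal(0, 20, int(3 * fs)) if i % 2 == 0 else rng.normal(0, 1, int(5 * fs))
            for i in range(10)]
eeg = np.concatenate(segments)
mask = segment_suppressions(eeg, fs)
bsp = burst_suppression_probability(mask, fs)
print("fraction of time suppressed:", round(float(mask.mean()), 2),   # ~ 25/40 here
      "| final BSP:", round(float(bsp[-1]), 2))
```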
Commercialization of NESSUS: Status
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Millwater, Harry R.
1991-01-01
A plan was initiated in 1988 to commercialize the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) probabilistic structural analysis software. The goal of the on-going commercialization effort is to begin the transfer of Probabilistic Structural Analysis Method (PSAM) developed technology into industry and to develop additional funding resources in the general area of structural reliability. The commercialization effort is summarized. The SwRI NESSUS Software System is a general purpose probabilistic finite element computer program using state of the art methods for predicting stochastic structural response due to random loads, material properties, part geometry, and boundary conditions. NESSUS can be used to assess structural reliability, to compute probability of failure, to rank the input random variables by importance, and to provide a more cost effective design than traditional methods. The goal is to develop a general probabilistic structural analysis methodology to assist in the certification of critical components in the next generation Space Shuttle Main Engine.
Discussion for possibility of some aerodynamic ground effect craft
NASA Astrophysics Data System (ADS)
Tanabe, Yoshikazu
1990-05-01
Some type of pleasant, convenient, safe, and economical transportation method to supplement airplane transportation is currently required. This paper proposes an Aerodynamic Ground Effect Craft (AGEC) as this new transportation method, and studies its qualitative feasibility in comparison with present typical transportation methods such as transporter airplanes, flying boats, and linear motor cars, which share the characteristic of ultra-low-altitude cruising. Noteworthy points of AGEC are its effective energy consumption relative to transportation capacity (exergy) and its ultra-low-altitude cruising, which is relatively safer in an emergency landing than a subsonic airplane's belly landing. Though AGEC has a shorter cruising range and smaller transportation capacity, its transportation efficiency is superior to that of airplanes and linear motor cars. There is no critical difficulty in building AGEC at large size, and AGEC is thought to be a very probable candidate to supplement airplane transportation in the near future.
Optimum runway orientation relative to crosswinds
NASA Technical Reports Server (NTRS)
Falls, L. W.; Brown, S. C.
1972-01-01
Specific magnitudes of crosswinds may exist that could be constraints on the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the runway orientation that minimizes the probability of exceeding a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
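The theoretical procedure can be sketched as follows: for a bivariate normal wind climatology, the crosswind component for any runway azimuth is a linear combination of the wind components and hence itself normal, so its exceedance probability follows directly from its mean and standard deviation. The wind statistics below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical bivariate-normal wind climatology: (u, v) = (east, north) components, m/s
mu = np.array([2.0, -1.0])
cov = np.array([[9.0, 2.0], [2.0, 16.0]])
c_crit = 10.0                                   # critical crosswind speed, m/s

def crosswind_exceedance(azimuth_deg):
    """P(|crosswind| > c_crit) for a runway with the given azimuth (degrees clockwise
    from north), assuming bivariate-normal wind components."""
    th = np.radians(azimuth_deg)
    a = np.array([np.cos(th), -np.sin(th)])     # unit vector perpendicular to the runway
    m = float(a @ mu)                           # mean crosswind component
    s = float(np.sqrt(a @ cov @ a))             # its standard deviation
    return 1.0 - (stats.norm.cdf((c_crit - m) / s) - stats.norm.cdf((-c_crit - m) / s))

azimuths = np.arange(0.0, 180.0, 10.0)          # a runway and its reciprocal are equivalent
probs = [crosswind_exceedance(az) for az in azimuths]
best = azimuths[int(np.argmin(probs))]
print(f"optimum azimuth ~ {best:.0f} deg, P(|crosswind| > {c_crit} m/s) = {min(probs):.4f}")
```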
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
Qian, Yu; Cui, Xiaohua; Zheng, Zhigang
2017-07-18
The investigation of self-sustained oscillations in excitable complex networks is very important in understanding various activities in brain systems, among which the exploration of the key determinants of oscillations is a challenging task. In this paper, by investigating the influence of system parameters on self-sustained oscillations in excitable Erdös-Rényi random networks (EERRNs), the minimum Winfree loop (MWL) is revealed to be the key factor in determining the emergence of collective oscillations. Specifically, the one-to-one correspondence between the optimal connection probability (OCP) and the MWL length is exposed. Moreover, many important quantities such as the lower critical connection probability (LCCP), the OCP, and the upper critical connection probability (UCCP) are determined by the MWL. Most importantly, they can be approximately predicted by network structure analysis, which has been verified in numerical simulations. Our results will be of great importance in understanding the key factors that determine persistent activities in biological systems.
Delaying childbearing: effect of age on fecundity and outcome of pregnancy.
van Noord-Zaadstra, B M; Looman, C W; Alsbach, H; Habbema, J D; te Velde, E R; Karbaat, J
1991-01-01
OBJECTIVES--To study the age of the start of the fall (critical age) in fecundity; the probability of a pregnancy leading to a healthy baby taking into account the age of the woman; and, combining these results, to determine the age dependent probability of getting a healthy baby. DESIGN--Cohort study of all women who had entered a donor insemination programme. SETTING--Two fertility clinics serving a large part of The Netherlands. SUBJECTS--Of 1637 women attending for artificial insemination 751 fulfilled the selection criteria, being married to an azoospermic husband and nulliparous and never having received donor insemination before. MAIN OUTCOME MEASURES--The number of cycles before pregnancy (a positive pregnancy test result) or stopping treatment; and result of the pregnancy (successful outcome). RESULTS--Of the 751 women, 555 became pregnant and 461 had healthy babies. The fall in fecundity was estimated to start at around 31 years (critical age); after 12 cycles the probability of pregnancy in a woman aged greater than 31 was 0.54 compared with 0.74 in a woman aged 20-31. After 24 cycles this difference had decreased (probability of conception 0.75 in women greater than 31 and 0.85 in women 20-31). The probability of having a healthy baby also decreased--by 3.5% a year after the age of 30. Combining both these age effects, the chance of a woman aged 35 having a healthy baby was about half that of a woman aged 25. CONCLUSION--After the age of 31 the probability of conception falls rapidly, but this can be partly compensated for by continuing insemination for more cycles. In addition, the probability of an adverse pregnancy outcome starts to increase at about the same age. PMID:2059713
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
The influence of stem design on critical squeaking friction with ceramic bearings.
Fan, Na; Morlock, Michael M; Bishop, Nicholas E; Huber, Gerd; Hoffmann, Norbert; Ciavarella, Michele; Chen, Guang X; Hothan, Arne; Witt, Florian
2013-10-01
Ceramic-on-ceramic hip joints have been reported to squeak, a phenomenon that may occur in compromised lubrication conditions. One factor related to the incidence of in vivo squeaking is the stem design. However, it has not yet been possible to relate stem design to squeaking in deteriorating lubrication conditions. The purpose of this study was to determine critical friction factors for different stem designs. A hip simulator was used to measure the friction factor of a ceramic bearing with different stem designs and gradually deteriorating lubrication represented by evaporation of a volatile fluid lubricant. The critical squeaking friction factor was measured at the onset of squeaking for each stem. Critical friction was higher for the long cobalt chrome (0.32 ± 0.02) and short titanium stems (0.39 ± 0.02) in comparison with a long titanium stem (0.29 ± 0.02). The onset of squeaking occurred at a friction factor lower than that measured for dry conditions, in which squeaking is usually investigated experimentally. The results suggest that shorter or heavier stems might limit the possibility of squeaking as lubrication deteriorates. The method developed can be used to investigate the influence of design parameters on squeaking probability. Copyright © 2013 Orthopaedic Research Society.
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.
2017-01-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.
Bayesian approach to analyzing holograms of colloidal particles.
Dimiduk, Thomas G; Manoharan, Vinothan N
2016-10-17
We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
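The following toy sketch illustrates the sampling idea only; it is not the authors' code, and it replaces the Lorenz-Mie forward model with a simple damped oscillation so that it stays self-contained. A tempered or parallel-tempering scheme would wrap several such chains at different temperatures.

```python
# Toy sketch of posterior sampling for one parameter of a simplified forward
# model with a random-walk Metropolis chain (illustrative stand-in only).
import random, math

def forward_model(radius, x):
    # Stand-in for a hologram model: a damped oscillation whose scale depends on radius.
    return math.exp(-x / (5.0 * radius)) * math.cos(x / radius)

random.seed(0)
true_radius, noise = 1.2, 0.05
xs = [0.2 * i for i in range(1, 101)]
data = [forward_model(true_radius, x) + random.gauss(0.0, noise) for x in xs]

def log_posterior(radius):
    if not 0.5 < radius < 3.0:          # flat prior on a plausible range
        return float("-inf")
    sse = sum((d - forward_model(radius, x)) ** 2 for d, x in zip(data, xs))
    return -0.5 * sse / noise ** 2      # Gaussian likelihood, flat prior

r, logp, samples = 1.0, log_posterior(1.0), []
for _ in range(10000):
    prop = r + random.gauss(0.0, 0.02)  # random-walk proposal
    logp_prop = log_posterior(prop)
    if math.log(random.random()) < logp_prop - logp:
        r, logp = prop, logp_prop
    samples.append(r)

post = samples[2000:]                   # discard burn-in
mean = sum(post) / len(post)
sd = (sum((s - mean) ** 2 for s in post) / len(post)) ** 0.5
print(f"posterior radius = {mean:.3f} +/- {sd:.3f} (true {true_radius})")
```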
Transition Probabilities for Hydrogen-Like Atoms
NASA Astrophysics Data System (ADS)
Jitrik, Oliverio; Bunge, Carlos F.
2004-12-01
E1, M1, E2, M2, E3, and M3 transition probabilities for hydrogen-like atoms are calculated with point-nucleus Dirac eigenfunctions for Z=1-118 and up to large quantum numbers l=25 and n=26, increasing existing data more than a thousandfold. A critical evaluation of the accuracy shows a higher reliability with respect to previous works. Tables for hydrogen containing a subset of the results are given explicitly, listing the states involved in each transition, wavelength, term energies, statistical weights, transition probabilities, oscillator strengths, and line strengths. The complete results, including 1 863 574 distinct transition probabilities, lifetimes, and branching fractions are available at http://www.fisica.unam.mx/research/tables/spectra/1el
Applications of finite-size scaling for atomic and non-equilibrium systems
NASA Astrophysics Data System (ADS)
Antillon, Edwin A.
We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the dependence of the energy on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, density functional theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. The results prove to be in good agreement with previous Slater-basis-set calculations and demonstrate that this combined approach provides a promising first-principles route to describing quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model known as the raise-and-peel model, describing a surface that grows locally and has non-local desorption. For specific values of adsorption (ua) and desorption (ud) the model shows interesting features. At ua = ud, the model is described by a conformal field theory (with central charge c = 0), and its stationary probability can be mapped to the ground state of a quantum chain and can also be related to a two-dimensional statistical model. For ua ≥ ud, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality with and without a wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.
ERIC Educational Resources Information Center
Suh, Jennifer
2010-01-01
The following study describes design research in an elementary school near the metropolitan D.C. area with a diverse student population. The goal of the project was to design tasks that leverage technology and enhance access to critical thinking in specific mathematical concepts: data analysis and probability. It highlights the opportunities…
NASA Astrophysics Data System (ADS)
Marrero, J. M.; García, A.; Llinares, A.; Rodriguez-Losada, J. A.; Ortiz, R.
2012-03-01
One of the critical issues in managing volcanic crises is making the decision to evacuate a densely-populated region. In order to take a decision of such importance it is essential to estimate the cost in lives for each of the expected eruptive scenarios. One of the tools that assist in estimating the number of potential fatalities for such decision-making is the calculation of the FN-curves. In this case the FN-curve is a graphical representation that relates the frequency of the different hazards to be expected for a particular volcano or volcanic area, and the number of potential fatalities expected for each event if the zone of impact is not evacuated. In this study we propose a method for assessing the impact that a possible eruption from the Tenerife Central Volcanic Complex (CVC) would have on the population at risk. Factors taken into account include the spatial probability of the eruptive scenarios (susceptibility) and the temporal probability of the magnitudes of the eruptive scenarios. For each point or cell of the susceptibility map with greater probability, a series of probability-scaled hazard maps is constructed for the whole range of magnitudes expected. The number of potential fatalities is obtained from the intersection of the hazard maps with the spatial map of population distribution. The results show that the Emergency Plan for Tenerife must provide for the evacuation of more than 100,000 persons.
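A minimal sketch of the FN-curve construction is given below, assuming a small set of invented eruptive scenarios with annual frequencies and fatality estimates (not the study's data): the curve gives the annual frequency of events causing at least N fatalities.

```python
# Sketch of an FN-curve from a scenario set (illustrative values only).
scenarios = [  # (annual frequency, expected fatalities if the zone is not evacuated)
    (1e-2, 50), (5e-3, 500), (1e-3, 5000), (2e-4, 100000),
]

def fn_curve(scenarios):
    # For each fatality level N, sum the frequencies of all scenarios with >= N fatalities.
    levels = sorted(n for _, n in scenarios)
    return [(n, sum(f for f, m in scenarios if m >= n)) for n in levels]

for n, freq in fn_curve(scenarios):
    print(f"F(N >= {n:>6}) = {freq:.2e} per year")
```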
Force Transmission Modes of Non-Cohesive and Cohesive Materials at the Critical State.
Wang, Ji-Peng
2017-08-31
This paper investigates the force transmission modes, mainly described by probability density distributions, in non-cohesive dry and cohesive wet granular materials by discrete element modeling. The critical state force transmission patterns are focused on, with the contact model effect being analyzed. By shearing relatively dense and loose dry specimens to the critical state in the conventional triaxial loading path, it is observed that there is a unique critical state force transmission mode. There is a universal critical state force distribution pattern for both the normal contact forces and the tangential contact forces. Furthermore, it is found that using either the linear Hooke or the non-linear Hertz model does not affect this universal force transmission mode, and it is only related to the grain size distribution. Wet granular materials are also simulated by incorporating a water bridge model. Dense and loose wet granular materials are tested, and the critical state behavior for the wet material is also observed. The critical state strength and void ratio of wet granular materials are higher than those of a non-cohesive material. The critical state inter-particle force distribution is altered from that of a non-cohesive material, with higher probability in relatively weak forces. Grains in non-cohesive materials are under compressive stresses, and their principal directions are mainly in the axial loading direction. However, for cohesive wet granular materials, some particles are in tension, and the tensile stresses are in the horizontal direction on which the confinement is applied. The additional confinement by the tensile stress explains the macro strength and dilatancy increase in wet samples.
NASA Astrophysics Data System (ADS)
Lv, Zhong; Chen, Huisu
2014-10-01
Autonomous healing of cracks using pre-embedded capsules containing a healing agent is becoming a promising approach to restore the strength of damaged structures. In addition to the material properties, the size and volume fraction of the capsules influence crack healing in the matrix. Understanding the crack-capsule interaction is critical in the development and design of structures made of self-healing materials. Assuming that the pre-embedded capsules are randomly dispersed, we theoretically model the interaction of a flat ellipsoidal crack with capsules and determine the probability of a crack intersecting the pre-embedded capsules, i.e., the self-healing probability. We also develop a probabilistic model of a crack simultaneously meeting capsules and catalyst carriers in a two-component self-healing system matrix. Using a risk-based healing approach, we determine the volume fraction and size of the pre-embedded capsules that are required to achieve a certain self-healing probability. To understand the effect of the shape of the capsules on self-healing, we theoretically modeled crack interaction with spherical and cylindrical capsules. We compared the results of our theoretical model with Monte Carlo simulations of crack interaction with capsules. The formulae presented in this paper will provide guidelines for engineers working with self-healing structures in material selection and sustenance.
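A brute-force Monte Carlo sketch of the spherical-capsule case is shown below for illustration; it is not the authors' analytical model, and the crack radius, capsule radius and volume fraction are assumed values.

```python
# Monte Carlo sketch: probability that a flat circular crack of radius a
# intersects at least one randomly dispersed spherical capsule of radius r
# at capsule volume fraction phi.  All numbers are illustrative.
import math, random

def crack_hits_capsule(a, r, phi, box=20.0, rng=random):
    """One realization: Poisson-dispersed capsules in a box, crack disc at the centre."""
    n_density = phi / (4.0 / 3.0 * math.pi * r ** 3)        # capsules per unit volume
    n = int(n_density * box ** 3)
    for _ in range(n):
        x, y, z = (rng.uniform(-box / 2, box / 2) for _ in range(3))
        rho = math.hypot(x, y)
        # Distance from the capsule centre to the crack disc lying in the z = 0 plane.
        d = abs(z) if rho <= a else math.hypot(rho - a, z)
        if d <= r:
            return True
    return False

random.seed(1)
a, r, phi, trials = 2.0, 0.25, 0.02, 1000
p_heal = sum(crack_hits_capsule(a, r, phi) for _ in range(trials)) / trials
print(f"estimated self-healing probability: {p_heal:.2f}")
```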
NASA Technical Reports Server (NTRS)
Wiese, Wolfgang L.; Fuhr, J. R.
2006-01-01
We have undertaken new critical assessments and tabulations of the transition probabilities of important lines of these spectra. For Fe I and Fe II, we have carried out a complete re-assessment and update, and we have relied almost exclusively on the literature of the last 15 years. Our updates for C I, C II and N I, N II primarily address the persistent lower transitions as well as a greatly expanded number of forbidden lines (M1, M2, and E2). For these transitions, sophisticated multiconfiguration Hartree-Fock (MCHF) calculations have been recently carried out, which have yielded data considerably improved and often appreciably different from our 1996 NIST compilation.
Frozen into stripes: fate of the critical Ising model after a quench.
Blanchard, T; Picco, M
2013-09-01
In this article we study numerically the final state of the two-dimensional ferromagnetic critical Ising model after a quench to zero temperature. Beginning from equilibrium at T_{c}, the system can be blocked in a variety of infinitely long lived stripe states in addition to the ground state. Similar results have already been obtained for an infinite temperature initial condition and an interesting connection to exact percolation crossing probabilities has emerged. Here we complete this picture by providing an example of stripe states precisely related to initial crossing probabilities for various boundary conditions. We thus show that this is not specific to percolation but rather that it depends on the properties of spanning clusters in the initial state.
Curtivo, Cátia Panizzon Dal; Funghi, Nathália Bitencourt; Tavares, Guilherme Diniz; Barbosa, Sávio Fujita; Löbenberg, Raimar; Bou-Chacra, Nádia Araci
2015-05-01
In this work, a near-infrared spectroscopy (NIRS) method was used to evaluate the uniformity of dosage units of three commercial batches of captopril 25 mg tablets. The performance of the calibration method was assessed by determination of the Q value (0.9986), the standard error of estimation (C-set SEE = 1.956), the standard error of prediction (V-set SEP = 2.076), and the consistency (106.1%). These results indicated the adequacy of the selected model. The method validation revealed agreement between the reference high-pressure liquid chromatography (HPLC) method and the NIRS method. The process evaluation using the NIRS method showed that the variability was due to common causes and delivered predictable results consistently. Cp and Cpk values were, respectively, 2.05 and 1.80. These results revealed a process that was not centered on the target average (100% w/w) within the specified range (85-115%). The probability of failure was about 21 in 100 million captopril tablets. NIRS, in combination with the multivariate calibration method of partial least squares (PLS) regression, allowed the development of a methodology for evaluating the uniformity of dosage units of captopril 25 mg tablets. The statistical process control strategy associated with the NIRS method as a process analytical technology (PAT) played a critical role in understanding the sources and degree of variation and their impact on the process. This approach led towards a better process understanding and provided a sound scientific basis for its continuous improvement.
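The capability figures quoted above follow from standard process-capability arithmetic; the sketch below is not the authors' calculation, and the batch mean and standard deviation are assumed values chosen to roughly match the reported Cp and Cpk.

```python
# Back-of-the-envelope process-capability sketch (assumed mean and sigma).
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

lsl, usl = 85.0, 115.0                  # specification limits, % of label claim
mean, sigma = 101.9, 2.44               # assumed process mean and sigma

cp = (usl - lsl) / (6.0 * sigma)
cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
# Out-of-specification probability under a normal model (both tails).
p_fail = norm_cdf((lsl - mean) / sigma) + 1.0 - norm_cdf((usl - mean) / sigma)

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, P(out of spec) ~ {p_fail:.1e}")
```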
Claycamp, H Gregg; Kona, Ravikanth; Fahmy, Raafat; Hoag, Stephen W
2016-04-01
Qualitative risk assessment methods are often used as the first step in determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid in evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
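As an illustration of the quantitative step, the sketch below estimates a probability of failure over a grid of two hypothetical critical process parameters by Monte Carlo; the response-surface coefficients, noise level and specification limits are invented and are not the paper's model.

```python
# Monte Carlo estimate of P(failure) over a grid of two hypothetical parameters.
import random

def quality_attribute(x1, x2, rng):
    # Hypothetical response surface plus batch-to-batch noise.
    return 90.0 + 4.0 * x1 - 3.0 * x2 + 1.5 * x1 * x2 + rng.gauss(0.0, 2.0)

def p_failure(x1, x2, spec_low=85.0, spec_high=105.0, n=5000):
    rng = random.Random(42)             # fixed seed for a repeatable estimate
    fails = sum(not (spec_low <= quality_attribute(x1, x2, rng) <= spec_high)
                for _ in range(n))
    return fails / n

for x1 in (-1.0, 0.0, 1.0):             # coded levels of parameter 1
    row = [f"{p_failure(x1, x2):.3f}" for x2 in (-1.0, 0.0, 1.0)]
    print(f"x1={x1:+.0f}:  P(fail) at x2=-1,0,+1 ->", "  ".join(row))
```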
He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei
2014-01-01
Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism test combines the analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. The circuit-simulation-based evaluation of probability graph isomorphism excludes most subgraphs that are not probability isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns, which can be efficiently applied to probability motif discovery problems in further studies.
Thermodynamics and signatures of criticality in a network of neurons.
Tkačik, Gašper; Mora, Thierry; Marre, Olivier; Amodei, Dario; Palmer, Stephanie E; Berry, Michael J; Bialek, William
2015-09-15
The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. We suggest further tests of criticality, and give a brief discussion of its functional significance.
The expectancy-value muddle in the theory of planned behaviour - and some proposed solutions.
French, David P; Hankins, Matthew
2003-02-01
The authors of the Theories of Reasoned Action and Planned Behaviour recommended a method for statistically analysing the relationships between beliefs and the Attitude, Subjective Norm, and Perceived Behavioural Control constructs. This method has been used in the overwhelming majority of studies using these theories. However, there is a growing awareness that this method yields statistically uninterpretable results (Evans, 1991). Despite this, the use of this method is continuing, as is uninformed interpretation of this problematic research literature. This is probably due to the lack of a simple account of where the problem lies, and the large number of alternatives available. This paper therefore summarizes the problem as simply as possible, gives consideration to the conclusions that can be validly drawn from studies that contain this problem, and critically reviews the many alternatives that have been proposed to address this problem. Different techniques are identified as being suitable, according to the purpose of the specific research project.
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
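The Poisson aggregation mentioned above reduces to simple arithmetic; the sketch below uses invented source rates and conditional exceedance probabilities, not the study's estimates.

```python
# Aggregating independent Poissonian tsunami sources (illustrative numbers).
# With annual rate r_i and probability q_i that an event exceeds a tsunami
# threshold at the coast, P(at least one exceedance in T years) = 1 - exp(-T * sum(r_i * q_i)).
import math

sources = [               # (label, annual rate of candidate events, P(exceeds threshold))
    ("plate-boundary earthquakes", 0.004, 0.30),
    ("submarine landslides",       0.001, 0.50),
]

T = 50.0                  # exposure time, years
combined_rate = sum(r * q for _, r, q in sources)
p_exceed = 1.0 - math.exp(-combined_rate * T)
print(f"P(at least one exceedance in {T:.0f} yr) = {p_exceed:.3f}")
```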
Peng, Xiang; King, Irwin
2008-01-01
The Biased Minimax Probability Machine (BMPM) constructs a classifier for imbalanced learning tasks. It provides a worst-case bound on the probability of misclassification of future data points based on reliable estimates of the means and covariance matrices of the classes from the training data samples, and achieves promising performance. In this paper, we develop a novel and critical extension of the BMPM training algorithm that is based on Second-Order Cone Programming (SOCP). Moreover, we apply the biased classification model to medical diagnosis problems to demonstrate its usefulness. By removing some crucial assumptions in the original solution to this model, we make the new method more accurate and robust. We outline the theoretical derivation of the biased classification model and reformulate it as an SOCP problem that can be solved efficiently with a guarantee of global optimality. We evaluate our proposed SOCP-based BMPM (BMPMSOCP) scheme in comparison with traditional solutions on medical diagnosis tasks where the objective is to improve the sensitivity (the accuracy of the more important class, say, "ill" samples) rather than the overall classification accuracy. Empirical results show that our method is more effective and robust in handling imbalanced classification problems than both traditional classification approaches and the original Fractional-Programming-based BMPM (BMPMFP).
Regional Earthquake Likelihood Models: A realm on shaky grounds?
NASA Astrophysics Data System (ADS)
Kossobokov, V.
2005-12-01
Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who hurry to apply the fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and, therefore, cannot quantify objectively the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by accurate verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for 'tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in the automatic calculations. As a result, since the date of publication in Nature, the United States Geological Survey website has delivered to the public, emergency planners, and the media a forecast product that is based on wrong assumptions violating the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.
NASA Astrophysics Data System (ADS)
Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank
2015-04-01
Earthquakes occurring close to hydrocarbon fields under production often come under critical scrutiny as possibly induced or triggered. However, clear and testable rules to discriminate the different events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate- and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei of strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced or only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case in between both extremes.
Dynamics of influence and social balance in spatially-embedded regular and random networks
NASA Astrophysics Data System (ADS)
Singh, P.; Sreenivasan, S.; Szymanski, B.; Korniss, G.
2015-03-01
Structural balance - the tendency of social relationship triads to prefer specific states of polarity - can be a fundamental driver of beliefs, behavior, and attitudes on social networks. Here we study how structural balance affects deradicalization in an otherwise polarized population of leftists and rightists constituting the nodes of a low-dimensional social network. Specifically, assuming an externally moderating influence that converts leftists or rightists to centrists with probability p, we study the critical value p =pc , below which the presence of metastable mixed population states exponentially delay the achievement of centrist consensus. Above the critical value, centrist consensus is the only fixed point. Complementing our previously shown results for complete graphs, we present results for the process on low-dimensional networks, and show that the low-dimensional embedding of the underlying network significantly affects the critical value of probability p. Intriguingly, on low-dimensional networks, the critical value pc can show non-monotonicity as the dimensionality of the network is varied. We conclude by analyzing the scaling behavior of temporal variation of unbalanced triad density in the network for different low-dimensional network topologies. Supported in part by ARL NS-CTA, ONR, and ARO.
Measurement of the main and critical parameters for optimal laser treatment of heart disease
NASA Astrophysics Data System (ADS)
Kabeya, FB; Abrahamse, H.; Karsten, AE
2017-10-01
Laser light is frequently used in the diagnosis and treatment of patients. As with traditional treatments such as medication, bypass surgery, and minimally invasive procedures, laser treatment can also fail and present serious side effects. The true reason for laser treatment failure, or for the side effects thereof, remains unknown. From the literature review conducted and the experimental results generated, we conclude that an optimal laser treatment for coronary artery disease (heart disease) can be obtained if certain critical parameters are correctly measured and understood. These parameters include the laser power, the laser beam profile, the fluence rate, the treatment time, and the absorption and scattering coefficients of the target tissue. This paper therefore proposes different, accurate methods for the measurement of these critical parameters to determine the optimal laser treatment of heart disease with a minimal risk of side effects. The results from the measurement of the absorption and scattering properties can be used in a computer simulation package to predict the fluence rate. The computing technique is a program based on random numbers (Monte Carlo) and probability statistics that tracks the propagation of photons through biological tissue.
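A toy version of such a photon-tracking program is sketched below; it is not the authors' simulation package, uses a one-dimensional slab with isotropic scattering for brevity, and the optical coefficients are assumed values.

```python
# Toy Monte Carlo photon transport in a 1-D tissue slab (illustrative properties).
import math, random

mu_a, mu_s, thickness = 0.5, 10.0, 2.0     # absorption, scattering (1/cm), slab depth (cm)
mu_t = mu_a + mu_s                          # total attenuation coefficient
random.seed(3)

def propagate_photon():
    z, uz = 0.0, 1.0                        # start at the surface, heading into tissue
    while True:
        z += uz * (-math.log(random.random()) / mu_t)   # sample a free path length
        if z < 0.0 or z > thickness:
            return "escaped"
        if random.random() < mu_a / mu_t:   # interaction: absorbed with prob mu_a/mu_t
            return "absorbed"
        uz = random.uniform(-1.0, 1.0)      # isotropic scattering (new direction cosine)

n = 50000
counts = {"escaped": 0, "absorbed": 0}
for _ in range(n):
    counts[propagate_photon()] += 1
print({k: round(v / n, 3) for k, v in counts.items()})
```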
A Review of Methods for Detection of Hepatozoon Infection in Carnivores and Arthropod Vectors.
Modrý, David; Beck, Relja; Hrazdilová, Kristýna; Baneth, Gad
2017-01-01
Vector-borne protists of the genus Hepatozoon belong to the apicomplexan suborder Adeleorina. The taxonomy of Hepatozoon is unsettled and different phylogenetic clades probably represent evolutionary units deserving the status of separate genera. Throughout our review, we focus on the monophyletic assemblage of Hepatozoon spp. from carnivores, classified as Hepatozoon sensu stricto that includes important pathogens of domestic and free-ranging canine and feline hosts. We provide an overview of diagnostic methods and approaches from classical detection in biological materials, through serological tests to nucleic acid amplification tests (NAATs). Critical review of used primers for the 18S rDNA is provided, together with information on individual primer pairs. Extension of used NAATs target to cover also mitochondrial genes is suggested as a key step in understanding the diversity and molecular epidemiology of Hepatozoon infections in mammals.
2D dark-count-rate modeling of PureB single-photon avalanche diodes in a TCAD environment
NASA Astrophysics Data System (ADS)
Knežević, Tihomir; Nanver, Lis K.; Suligoj, Tomislav
2018-02-01
PureB silicon photodiodes have nm-shallow p+n junctions with which photons/electrons with penetration depths of a few nanometers can be detected. PureB Single-Photon Avalanche Diodes (SPADs) were fabricated and analysed by 2D numerical modeling as an extension to TCAD software. The very shallow p+ anode has a high perimeter curvature that enhances the electric field. In SPADs, noise is quantified by the dark count rate (DCR), which is a measure of the number of false counts triggered by unwanted processes in the non-illuminated device. Just as for desired events, the probability of a dark count increases with increasing electric field, and the perimeter conditions are critical. In this work, the DCR was studied by two 2D methods of analysis: the "quasi-2D" (Q-2D) method, where vertical 1D cross-sections were assumed for calculating the electron/hole avalanche probabilities, and the "ionization-integral 2D" (II-2D) method, where cross-sections were placed where the maximum ionization integrals were calculated. The Q-2D method gave satisfactory results in structures where the peripheral regions had a small contribution to the DCR, such as in devices with conventional deep-junction guard rings (GRs). Otherwise, the II-2D method proved to be much more precise. The results show that the DCR simulation methods are useful for optimizing the compromise between fill factor and p-/n-doping profile design in SPAD devices. For the experimentally investigated PureB SPADs, excellent agreement between the measured and simulated DCR was achieved. This shows that although an implicit GR is attractively compact, the very shallow pn-junction gives a risk of having such a low breakdown voltage at the perimeter that the DCR of the device may be negatively impacted.
NASA Astrophysics Data System (ADS)
Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.
2003-04-01
Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available for some events, while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damage are different from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated by moving footprints derived from historical damage over a portfolio. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perk, T; Bradshaw, T; Harmon, S
2015-06-15
Purpose: Identification of metastatic bone lesions is critical in prostate cancer, where treatments may be more effective in patients with fewer lesions. This study aims to characterize the distribution and spread of bone lesions and create a probability map of metastatic spread in bone. Methods: Fifty-five metastatic castrate-resistant prostate cancer patients received up to 3 whole-body [F-18]NaF PET/CT scans. Lesions were identified by a physician on PET/CT and contoured using a threshold of SUV>15. An atlas-based segmentation method was used to create CT regions, which determined the skeletal location of lesions. Patients were divided into 3 groups with low (N<40), medium (40-100), and high numbers of lesions. A combination of articulated and deformable registrations was used to register the skeletal segments and lesions of each patient to a single skeleton. All the lesion data was then combined to make a probability map. Results: A total of 4038 metastatic lesions (mean 74, range 2–304) were identified. Skeletal regions with the highest occurrence of lesions included the ribs, thoracic spine, and pelvis with 21%, 19%, and 15% of the total number of lesions and 8%, 18%, and 31% of the total lesion volume, respectively. Interestingly, patients with fewer lesions were found to have a lower proportion of lesions in the ribs (9% in low vs. 27% in high number of lesions). Additionally, the probability map showed specific areas in the spine and pelvis where over 75% of patients had metastases, and other areas in the skeleton with less than 2% of metastases. Conclusion: We identified skeletal regions with higher incidence of metastases and specific sub-regions in the skeleton that had high or low probability of occurrence of metastases. Additionally, we found that metastatic lesions in the ribs and skull occur more commonly in advanced disease. These results may have future applications in computer-aided diagnosis. Funding from the Prostate Cancer Foundation.
Sampling bees in tropical forests and agroecosystems: A review
Prado, Sara G.; Ngo, Hien T.; Florez, Jaime A.; Collazo, Jaime A.
2017-01-01
Bees are the predominant pollinating taxa, providing a critical ecosystem service upon which many angiosperms rely for successful reproduction. Available data suggest that bee populations worldwide are declining, but scarce data in tropical regions preclude assessing their status and distribution, their impact on ecological services, and their response to management actions. Herein, we reviewed >150 papers that used six common sampling methods (pan traps, baits, Malaise traps, sweep nets, timed observations and aspirators) to better understand their strengths and weaknesses, and to help guide method selection to meet research objectives and the development of multi-species monitoring approaches. Several studies evaluated the effectiveness of sweep nets, pan traps, and Malaise traps, but only one evaluated timed observations, and none evaluated aspirators. Only five studies compared two or more of the remaining four sampling methods to each other. There was little consensus regarding which method would be most reliable for sampling multiple species. However, we recommend that if the objective of the study is to estimate abundance or species richness, Malaise traps, pan traps and sweep nets are the most effective sampling protocols in open tropical systems; conversely, Malaise traps, nets and baits may be the most effective in forests. Declining bee populations emphasize the critical need for method standardization and reporting precision. Moreover, we recommend reporting a catchability coefficient, a measure of the interaction between the resource (bee) abundance and catching effort. Melittologists could also consider existing methods, such as occupancy models, to quantify changes in distribution and abundance after modeling heterogeneity in trapping probability, and consider the possibility of developing monitoring frameworks that draw from multiple sources of data.
Ejiri, Shinji; Yamada, Norikazu
2013-04-26
Towards a feasibility study of electroweak baryogenesis in a realistic technicolor scenario, we investigate the phase structure of (2+N(f))-flavor QCD, where the mass of two flavors is fixed to a small value and the others are heavy. For baryogenesis, the appearance of a first-order phase transition at finite temperature is a necessary condition. Using a set of configurations of two-flavor lattice QCD and applying the reweighting method, the effective potential defined by the probability distribution function of the plaquette is calculated in the presence of many additional heavy flavors. Through the shape of the effective potential, we determine the critical mass of the heavy flavors separating the first-order and crossover regions and find it to become larger with N(f). We moreover study the critical line at finite density and find that the first-order region becomes wider as the chemical potential increases. Possible applications to real (2+1)-flavor QCD are discussed.
Critical review of the United Kingdom's "gold standard" survey of public attitudes to science.
Smith, Benjamin K; Jensen, Eric A
2016-02-01
Since 2000, the UK government has funded surveys aimed at understanding the UK public's attitudes toward science, scientists, and science policy. Known as the Public Attitudes to Science series, these surveys and their predecessors have long been used in UK science communication policy, practice, and scholarship as a source of authoritative knowledge about science-related attitudes and behaviors. Given their importance and the significant public funding investment they represent, detailed academic scrutiny of the studies is needed. In this essay, we critically review the most recently published Public Attitudes to Science survey (2014), assessing the robustness of its methods and claims. The review casts doubt on the quality of key elements of the Public Attitudes to Science 2014 survey data and analysis while highlighting the importance of robust quantitative social research methodology. Our analysis comparing the main sample and booster sample for young people demonstrates that quota sampling cannot be assumed equivalent to probability-based sampling techniques. © The Author(s) 2016.
Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance
NASA Astrophysics Data System (ADS)
Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra
2017-06-01
In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper the authors explain a novel methodology for quantifying risk and ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient is called the hazardous risk coefficient, which is due to anticipated hazards that may occur in the future; the risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks with corresponding costs for consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence random number simulation is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment shall be useful in optimizing financial losses and the timing of maintenance actions.
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, is designed for a 1-in-200-year flood, the Shinano River for a 1-in-150-year flood, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges and the probability is 1 in 1250 years (in the freshwater section). On the other hand, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "Why does the method vary among countries?" and "Why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2; however, the method was changed to the probability method after the war because of limitations of the historical maximum method under the specific socio-economic situation: (1) the budget limitation due to the war and the GHQ occupation, and (2) historical floods, such as the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, and the Ione typhoon in 1948, which attacked Japan, broke the records of historical maximum discharge in main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of the socio-economic situation in setting design floods, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to adapt to the specific socio-economic and natural situation during the confusion after the war.
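The practical meaning of these return periods follows from elementary probability; the short calculation below (standard hydrological arithmetic, not from the paper) gives the chance that the design flood is equalled or exceeded at least once over a planning horizon.

```python
# Probability that a 1-in-T-year flood occurs at least once in n years.
def prob_exceedance(T, n):
    return 1.0 - (1.0 - 1.0 / T) ** n

for river, T in [("Tone", 200), ("Shinano", 150)]:
    print(f"{river} River (1 in {T} yr): "
          f"P(exceeded in 100 yr) = {prob_exceedance(T, 100):.2f}")
```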
Anders, N; Fernö, A; Humborstad, O-B; Løkkeborg, S; Rieucau, G; Utne-Palm, A C
2017-12-01
The present study tested whether the presence of already retained fishes inside baited fish pots acted as a social attraction and affected the entrance probability of Atlantic cod Gadus morhua in a fjord in northern Norway. Video analysis revealed that the probability of an entrance initially increased with the presence of low numbers of fishes inside the pot, but subsequently decreased at a critical number of caught fishes. The critical number was dependent on the size of the G. morhua attempting to enter. This demonstrates that social attraction and repulsion play a role in G. morhua pot fishing and has important implications for the capture efficiency of fisheries executed with pots. © 2017 The Fisheries Society of the British Isles.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication. Copyright © 2013 Elsevier Inc. All rights reserved.
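A generic Lyman-EUD (LEUD-type) calculation of the kind evaluated above can be sketched as follows; the DVH and the parameters n, m and D50 are illustrative stand-ins rather than the fitted values reported by the authors (only the D50 of about 44 Gy echoes the text).

```python
# Sketch of a Lyman-EUD NTCP calculation (illustrative DVH and parameters).
import math

def eud(dvh, n):
    """Generalized EUD from (fractional volume, dose in Gy) pairs: (sum v_i * D_i^(1/n))^n."""
    return sum(v * d ** (1.0 / n) for v, d in dvh) ** n

def ntcp_lyman(eud_gy, d50, m):
    # Probit (cumulative normal) mapping of EUD to complication probability.
    t = (eud_gy - d50) / (m * d50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

dvh = [(0.2, 20.0), (0.3, 35.0), (0.5, 50.0)]   # hypothetical thyroid DVH (2 Gy/fx equivalent)
n, m, d50 = 0.5, 0.3, 44.0                       # illustrative parameters; D50 ~ 44 Gy as quoted
e = eud(dvh, n)
print(f"EUD = {e:.1f} Gy, NTCP = {ntcp_lyman(e, d50, m):.2f}")
```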
Gonzales, J. L.; Elbers, A. R. W.; Bouma, A.; Koch, G.; De Wit, J. J.; Stegeman, J. A.
2010-01-01
Please cite this paper as: Gonzales et al. (2010) Low‐pathogenic notifiable avian influenza serosurveillance and the risk of infection in poultry – a critical review of the European Union active surveillance programme (2005–2007). Influenza and Other Respiratory Viruses 4(2), 91–99. Background Since 2003, Member States (MS) of the European Union (EU) have implemented serosurveillance programmes for low pathogenic notifiable avian influenza (LPNAI) in poultry. To date, there is a need to evaluate the surveillance activity in order to optimize the programme’s surveillance design. Objectives To evaluate MS sampling operations [sample size and targeted poultry types (PTs)] and their relation with the probability of detection, and to estimate each PT’s relative risk (RR) of being infected. Methods Reported data of the surveillance carried out from 2005 to 2007 were analyzed using: (i) descriptive indicators to characterize both MS sampling operations and their relation with the probability of detection and the LPNAI epidemiological situation, and (ii) multivariable methods to estimate each PT’s RR of being infected. Results Member States sampling a higher sample size than that recommended by the EU had a significantly higher probability of detection. The duck & goose, game‐bird, ratite and “other” poultry types had a significantly higher RR of being seropositive than the chicken categories. The seroprevalence in duck & goose and game‐bird holdings appears to be higher than 5%, which is the EU‐recommended design prevalence (DP), whereas in the chicken and turkey categories the seroprevalence was considerably lower than 5%, so there is a risk of missing LPNAI‐seropositive holdings. Conclusion It is recommended that the European Commission discusses with its MS whether the results of our evaluation call for refinement of surveillance characteristics such as sampling frequency, the between‐holding DP and MS sampling operation strategies. PMID:20167049
SU-D-BRB-01: A Predictive Planning Tool for Stereotactic Radiosurgery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palefsky, S; Roper, J; Elder, E
Purpose: To demonstrate the feasibility of a predictive planning tool which provides SRS planning guidance based on simple patient anatomical properties: PTV size, PTV shape and distance from critical structures. Methods: Ten framed SRS cases treated at Winship Cancer Institute of Emory University were analyzed to extract data on PTV size, sphericity (shape), and distance from critical structures such as the brainstem and optic chiasm. The cases consisted of five pairs. Each pair consisted of two cases with a similar diagnosis (such as pituitary adenoma or arteriovenous malformation) that were treated with different techniques: DCA or IMRS. A Naive Bayes classifier was trained on this data to establish the conditions under which each treatment modality was used. This model was validated by classifying ten other randomly-selected cases into DCA or IMRS classes, calculating the probability of each technique, and comparing results to the treated technique. Results: Of the ten cases used to validate the model, nine had their technique predicted correctly. The three cases treated with IMRS were all identified as such. Their probabilities of being treated with IMRS ranged between 59% and 100%. Six of the seven cases treated with DCA were correctly classified. These probabilities ranged between 51% and 95%. One case treated with DCA was incorrectly predicted to be an IMRS plan. The model’s confidence in this case was 91%. Conclusion: These findings indicate that a predictive planning tool based on simple patient anatomical properties can predict the SRS technique used for treatment. The algorithm operated with 90% accuracy. With further validation on larger patient populations, this tool may be used clinically to guide planners in choosing an appropriate treatment technique. The prediction algorithm could also be adapted to guide selection of treatment parameters such as treatment modality and number of fields for radiotherapy across anatomical sites.
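As a rough illustration of the classification step (not the authors' implementation; the feature values below are invented), a Gaussian naive Bayes model trained on the three anatomical features could look like the following Python sketch:

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Features per case: [PTV volume (cc), sphericity (0-1), distance to nearest critical structure (mm)]
X_train = np.array([[0.8, 0.92, 15.0], [4.1, 0.55, 3.2], [1.5, 0.88, 10.5],
                    [6.3, 0.48, 2.0], [0.5, 0.95, 20.0], [3.7, 0.60, 4.5]])
y_train = np.array(["DCA", "IMRS", "DCA", "IMRS", "DCA", "IMRS"])

model = GaussianNB().fit(X_train, y_train)

# Predict the technique and its probability for a new (hypothetical) case
new_case = np.array([[2.9, 0.65, 5.0]])
print(model.predict(new_case), model.predict_proba(new_case))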
Self-organized energetic model for collective activity on animal tissue
NASA Astrophysics Data System (ADS)
Dos Santos, Michelle C. Varela; Macedo-Filho, Antonio; Dos Santos Lima, Gustavo Zampier; Corso, Gilberto
We construct a self-organized critical (SOC) model to explain spontaneous collective activity in animal tissue without the need for a muscular system or a central nervous system for control. Our prototype is an epithelial cuboid tissue formed by a single layer of cells, like the internal digestive cavity of primitive animals. The tissue is composed of cells that absorb nutrients and store energy, with probability p, to participate in a collective tissue activity. Each cell can be in one of two states: at high energy and able to become active, or at low metabolic energy, remaining at rest. Any cell can spontaneously, with a very low probability, spark a collective activity across its neighbors that share a minimal energy. Cells participating in tissue activity consume all their energy. A power-law relation P(s) ∝ s^γ is observed for the probability of a collective activity involving s cells. By construction this model is analogous to the forest-fire SOC model. Our approach naturally produces a critical state for activity in animal tissue, and it also explains self-sustained activity in a living animal tissue without feedback control.
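A minimal Python sketch in the spirit of the model, a forest-fire-type cellular automaton in which cells store energy with probability p and a rare spark consumes a connected cluster of energized cells; the lattice size, rates and neighborhood are arbitrary illustrative choices, not the authors' values:

import numpy as np

L, p, f, steps = 64, 0.01, 0.1, 5000        # lattice size, energy-gain prob. per cell per step, spark prob., steps
rng = np.random.default_rng(0)
energized = np.zeros((L, L), dtype=bool)    # True: cell has stored enough energy to become active
sizes = []

def burn(i, j):
    """Activate the connected cluster of energized cells around (i, j); return its size."""
    stack, s = [(i, j)], 0
    while stack:
        x, y = stack.pop()
        if energized[x, y]:
            energized[x, y] = False          # active cells consume all their energy
            s += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                stack.append(((x + dx) % L, (y + dy) % L))
    return s

for _ in range(steps):
    energized |= rng.random((L, L)) < p      # cells absorb nutrients and store energy
    i, j = rng.integers(L, size=2)
    if energized[i, j] and rng.random() < f: # rare spontaneous spark of collective activity
        sizes.append(burn(i, j))

# the recorded activity sizes should be approximately power-law distributed
print(len(sizes), max(sizes) if sizes else 0)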
NASA Astrophysics Data System (ADS)
Le, Jia-Liang; Bažant, Zdeněk P.
2011-07-01
This paper extends the theoretical framework presented in the preceding Part I to the lifetime distribution of quasibrittle structures failing at the fracture of one representative volume element under constant-amplitude fatigue. The probability distribution of the critical stress amplitude is derived for a given number of cycles and a given minimum-to-maximum stress ratio. The physical mechanism underlying the Paris law for fatigue crack growth is explained under certain plausible assumptions about damage accumulation in the cyclic fracture process zone at the tip of a subcritical crack. This law is then used to relate the probability distribution of the critical stress amplitude to the probability distribution of fatigue lifetime. The theory naturally yields a power-law relation for the stress-life curve (S-N curve), which agrees with Basquin's law. Furthermore, the theory indicates that, for quasibrittle structures, the S-N curve must be size dependent. Finally, a physical explanation is provided for the experimentally observed systematic deviations of lifetime histograms of various ceramics and bones from the Weibull distribution, and their close fits by the present theory are demonstrated.
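For reference, the two classical relations that the theory connects can be written, in common notation that may differ from the paper's, as

\[
\frac{da}{dN} = C\,(\Delta K)^{m} \quad\text{(Paris law)}, \qquad \sigma_a^{\,k}\, N_f \approx \text{const} \quad\text{(Basquin's law)},
\]

where \(a\) is the crack length, \(N\) the number of load cycles, \(\Delta K\) the amplitude of the stress intensity factor, \(\sigma_a\) the stress amplitude, and \(N_f\) the number of cycles to failure.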
Impact of contrarians and intransigents in a kinetic model of opinion dynamics
NASA Astrophysics Data System (ADS)
Crokidakis, Nuno; Blanco, Victor H.; Anteneodo, Celia
2014-01-01
In this work we study opinion formation in a fully connected population participating in a public debate with two distinct choices, where the agents may adopt three different attitudes (favorable to one choice, favorable to the other, or undecided). The interactions between agents occur in pairs and are competitive, with couplings that are negative with probability p or positive with probability 1-p. This bimodal probability distribution of couplings produces a behavior similar to the one resulting from the introduction of Galam's contrarians into the population. In addition, we consider that a fraction d of the individuals are intransigent, that is, reluctant to change their opinions. The consequences of the presence of contrarians and intransigents are studied by means of computer simulations. Our results suggest that the presence of inflexible agents affects the critical behavior of the system, causing either a shift of the critical point or the suppression of the ordering phase transition, depending on the groups of opinions to which the intransigents belong. We also discuss the relevance of the model for real social systems.
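A minimal Monte Carlo sketch in Python of a pairwise kinetic update of the kind described above; the specific update rule (sign of a coupled sum) and the parameter values are illustrative assumptions, not necessarily the paper's exact dynamics:

import numpy as np

N, p, d, sweeps = 1000, 0.2, 0.1, 500             # agents, negative-coupling prob., intransigent fraction, MC sweeps
rng = np.random.default_rng(1)
opinion = rng.choice([-1, 0, 1], size=N)          # -1 / +1: the two choices, 0: undecided
intransigent = rng.random(N) < d                  # these agents never change opinion

for _ in range(sweeps * N):
    i, j = rng.integers(N, size=2)
    if i == j or intransigent[i]:
        continue
    mu = -1 if rng.random() < p else 1            # competitive (negative) or cooperative coupling
    opinion[i] = int(np.sign(opinion[i] + mu * opinion[j]))  # illustrative kinetic update rule

order_parameter = abs(opinion.sum()) / N          # magnetization-like measure of consensus
print(order_parameter)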
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non-distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivation of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
NASA Astrophysics Data System (ADS)
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-01
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the infrared universality of higher-order cumulants and the method of superposition and show how to model BEC statistics in the actual traps. In particular, we find that the three-level trap model with matching the first four or five cumulants is enough to yield remarkably accurate results for all interesting quantities in the whole critical region. We derive an exact multinomial expansion for the noncondensate occupation probability distribution and find its high-temperature asymptotics (Poisson distribution) and corrections to it. Finally, we demonstrate that the critical exponents and a few known terms of the Taylor expansion of the universal functions, which were calculated previously from fitting the finite-size simulations within the phenomenological renormalization-group theory, can be easily obtained from the presented full analytical solutions for the mesoscopic BEC as certain approximations in the close vicinity of the critical point.
[Concept analysis of reflective thinking].
Van Vuuren, M; Botes, A
1999-09-01
The nursing practice is described as a scientific practice, but also as a practice where caring is important. The purpose of nursing education is to provide competent nursing practitioners. This implies that future practitioners must have critical analytical thinking abilities as well as empathy and moral values. Reflective thinking could probably accommodate these thinking skills. It seems that the facilitation of reflective thinking skills is essential in nursing education. The research question that is relevant in this context is: "What is reflective thinking?" The purpose of this article is to report on the concept analysis of reflective thinking and in particular on its connotative meaning (critical attributes). The method used to perform the concept analysis is based on the original method of Wilson (1987) as described by Walker & Avant (1995). As part of the concept analysis the connotations (critical attributes) are identified, reduced and organized into three categories, namely prerequisites, processes and outcomes. A model case is described which confirms the essential critical attributes of reflective thinking. Finally a theoretical definition of reflective thinking is derived, which reads as follows: Reflective thinking is a cyclic, hierarchical and interactive construction process. It is initiated, extended and continued through personal cognitive-affective interaction (individual dimension) as well as interaction with the social environment (social dimension). To realize reflective thinking, a level of internalization in the cognitive and affective domains is required. The result of reflective thinking is an integrated framework of knowledge (meaningful learning) and an internalized value system providing a new perspective on, and a better understanding of, a problem. Reflective thinking further leads to more effective decision-making and problem-solving skills.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining the extreme precipitation threshold (EPT), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and subject to the choice of percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of the DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further supports the conclusion that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
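To make the contrast concrete, a small Python sketch comparing the percentile and parametric routes to a threshold; the daily rainfall series is synthetic, and a GEV fit to annual maxima stands in for whatever distribution family a study would actually select:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
daily_rain = rng.gamma(shape=0.4, scale=12.0, size=30 * 365)   # synthetic daily precipitation (mm), 30 years

# Non-parametric percentile threshold (wet days only), e.g. the 95th percentile
wet = daily_rain[daily_rain > 0.1]
ept_percentile = np.percentile(wet, 95)

# Parametric threshold: fit a GEV to annual maxima and take the 10-year return level
annual_max = daily_rain.reshape(30, 365).max(axis=1)
c, loc, scale = stats.genextreme.fit(annual_max)
ept_parametric = stats.genextreme.ppf(1 - 1.0 / 10, c, loc=loc, scale=scale)

print(ept_percentile, ept_parametric)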
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics.
Williams, Perry J; Hooten, Mevin B; Womble, Jamie N; Esslinger, George G; Bower, Michael R; Hefley, Trevor J
2017-02-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska. © 2016 by the Ecological Society of America.
Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence
MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.
2006-01-01
This is the first book to examine the latest methods in analyzing presence/absence data surveys. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. It provides authoritative insights into the latest in estimation modeling; discusses multiple models which lay the groundwork for future study designs; addresses critical issues of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.
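As a flavor of the single-species, single-season model, here is a Python sketch of its likelihood for detection/non-detection histories, using the standard psi (occupancy) and p (detection) parameterization; the detection histories are invented:

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# Detection histories: rows = sites, columns = repeat surveys (1 = detected, 0 = not detected)
Y = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 0, 0], [0, 1, 1]])

def neg_log_lik(theta):
    psi, p = expit(theta)                      # occupancy and detection probabilities on the logit scale
    det = Y.sum(axis=1)
    K = Y.shape[1]
    lik_occupied = psi * p**det * (1 - p)**(K - det)
    # sites never detected may be occupied-but-missed or genuinely unoccupied
    lik = np.where(det > 0, lik_occupied, lik_occupied + (1 - psi))
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
print(expit(fit.x))                            # maximum-likelihood estimates of (psi, p)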
Ultrasonic Phased Array Simulations of Welded Components at NASA
NASA Technical Reports Server (NTRS)
Roth, D. J.; Tokars, R. P.; Martin, R. E.; Rauser, R. W.; Aldrin, J. C.
2009-01-01
Comprehensive and accurate inspections of welded components have become of increasing importance as NASA develops new hardware such as Ares rocket segments for future exploration missions. Simulation and modeling will play an increasing role in the future for nondestructive evaluation in order to better understand the physics of the inspection process, to prove or disprove the feasibility for an inspection method or inspection scenario, for inspection optimization, for better understanding of experimental results, and for assessment of probability of detection. This study presents simulation and experimental results for an ultrasonic phased array inspection of a critical welded structure important for NASA future exploration vehicles. Keywords: nondestructive evaluation, computational simulation, ultrasonics, weld, modeling, phased array
Nanoplasmonic imaging of latent fingerprints with explosive RDX residues.
Peng, Tianhuan; Qin, Weiwei; Wang, Kun; Shi, Jiye; Fan, Chunhai; Li, Di
2015-09-15
Explosive detection is a critical element in preventing terrorist attacks, especially in crowded and high-profile areas. It is probably even more important to establish a connection between the explosive load and the carrier's personal identity. In the present work, we introduce fingerprinting as physical personal identification and develop a nondestructive nanoplasmonic method for the imaging of latent fingerprints. We further integrate the nanoplasmonic response of the catalytic growth of Au NPs with NADH-mediated reduction of 1,3,5-trinitro-1,3,5-triazinane (RDX) for the quantitative analysis of RDX explosive residues in latent fingerprints. This generic nanoplasmonic strategy is expected to be used in forensic investigations to distinguish terrorists who carry explosives.
A Statistical Test of Correlations and Periodicities in the Geological Records
NASA Astrophysics Data System (ADS)
Yabushita, S.
1997-09-01
Matsumoto & Kubotani argued that there is a positive and statistically significant correlation between cratering and mass extinction. This argument is critically examined by adopting the method of Ertel used by Matsumoto & Kubotani, but applying it more directly to the extinction and cratering records. It is shown that, under the null hypothesis of randomly distributed crater ages, the observed correlation has a probability of occurrence of 13%. However, when large craters whose ages agree with the times of peaks in the extinction rate of marine fauna are excluded, one obtains a negative correlation. This result strongly indicates that mass extinctions are not due to an accumulation of impacts but to isolated gigantic impacts.
Medical Optimization Network for Space Telemedicine Resources
NASA Technical Reports Server (NTRS)
Shah, R. V.; Mulcahy, R.; Rubin, D.; Antonsen, E. L.; Kerstman, E. L.; Reyes, D.
2017-01-01
INTRODUCTION: Long-duration missions beyond low Earth orbit introduce new constraints to the space medical system such as the inability to evacuate to Earth, communication delays, and limitations in clinical skillsets. NASA recognizes the need to improve capabilities for autonomous care on such missions. As the medical system is developed, it is important to be able to evaluate the trade space of which resources will be most important. The Medical Optimization Network for Space Telemedicine Resources (MONSTR) was developed for this reason, and is now a system to gauge the relative importance of medical resources in addressing medical conditions. METHODS: A list of medical conditions of potential concern for an exploration mission was referenced from the Integrated Medical Model (IMM), a probabilistic model designed to quantify in-flight medical risk. The diagnostic and treatment modalities required to address best- and worst-case scenarios of each medical condition, at the terrestrial standard of care, were entered into a database. This list included tangible assets (e.g., medications) and intangible assets (e.g., clinical skills to perform a procedure). A team of physicians working within the Exploration Medical Capability Element of NASA's Human Research Program ranked each of the items listed according to its criticality. Data were then obtained from the IMM for the probability of occurrence of the medical conditions, including a breakdown of best and worst cases, during a Mars reference mission. The probability of occurrence information and the criticality of each resource were taken into account during analytics performed using Tableau software. RESULTS: A database and weighting system to evaluate all the diagnostic and treatment modalities was created by combining the probability of condition occurrence data with the criticalities assigned by the physician team. DISCUSSION: Exploration Medical Capabilities research at NASA is focused on providing a medical system to support crew medical needs in the context of a Mars mission. MONSTR is a novel approach to performing a quantitative risk analysis that assesses the relative value of individual resources needed for the diagnosis and treatment of various medical conditions. It will provide the operational and research communities at NASA with information to support informed decisions regarding areas of research investment, future crew training, and medical supplies manifested as part of the exploration medical system.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
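Although the paper derives explicit expressions, a crude Monte Carlo sketch in Python conveys the quantity being computed; the risk process here is an arbitrary drift-plus-shocks toy, not one of the paper's two models:

import numpy as np

rng = np.random.default_rng(7)
T, dt, n_paths = 10.0, 0.01, 20_000
times = np.arange(0.0, T, dt)
critical_level = 5.0 + 0.2 * times          # possibly time-dependent critical risk level

failures = 0
for _ in range(n_paths):
    # toy risk process: linear drift plus occasional random shocks
    shocks = rng.exponential(0.5, size=times.size) * (rng.random(times.size) < 0.02)
    risk = np.cumsum(0.03 * dt + shocks)
    if np.any(risk >= critical_level):      # failure = first crossing within the finite horizon
        failures += 1

print("estimated finite-time failure probability:", failures / n_paths)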
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, L.L.; Wilson, J.R.; Sanchez, L.C.
1998-10-01
The US Department of Energy Office of Environmental Management's (DOE/EM's) National Spent Nuclear Fuel Program (NSNFP), through a collaboration between Sandia National Laboratories (SNL) and Idaho National Engineering and Environmental Laboratory (INEEL), is conducting a systematic Nuclear Dynamics Consequence Analysis (NDCA) of the disposal of SNFs in an underground geologic repository sited in unsaturated tuff. This analysis is intended to provide interim guidance to the DOE for the management of the SNF while they prepare for final compliance evaluation. This report presents results from a Nuclear Dynamics Consequence Analysis (NDCA) that examined the potential consequences and risks of criticality during the long-term disposal of spent nuclear fuel owned by DOE-EM. This analysis investigated the potential of post-closure criticality, the consequences of a criticality excursion, and the probability frequency for post-closure criticality. The results of the NDCA are intended to provide the DOE-EM with a technical basis for measuring risk which can be used for screening arguments to eliminate post-closure criticality FEPs (features, events and processes) from consideration in the compliance assessment because of either low probability or low consequences. This report is composed of an executive summary (Volume 1), the methodology and results of the NDCA (Volume 2), and the applicable appendices (Volume 3).
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
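A hedged scikit-learn sketch of the general re-calibration idea, here Platt-style logistic re-calibration of the forest's predicted probabilities on data from a new center, which simplifies (and is not identical to) the terminal-node translation the authors propose; all data are simulated:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_old, y_old = rng.normal(size=(500, 5)), rng.integers(0, 2, 500)        # original center
X_new, y_new = rng.normal(size=(200, 5)) + 0.3, rng.integers(0, 2, 200)  # new center

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_old, y_old)

# Re-calibrate: regress new-center outcomes on the logit of the forest's predicted probabilities
p_new = np.clip(rf.predict_proba(X_new)[:, 1], 1e-6, 1 - 1e-6)
recal = LogisticRegression().fit(np.log(p_new / (1 - p_new)).reshape(-1, 1), y_new)

def updated_probability(X):
    """Forest probability passed through the center-specific re-calibration model."""
    p = np.clip(rf.predict_proba(X)[:, 1], 1e-6, 1 - 1e-6)
    return recal.predict_proba(np.log(p / (1 - p)).reshape(-1, 1))[:, 1]

print(updated_probability(X_new[:5]))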
Zhang, Jian; Zhang, Xin; Bi, Yu-An; Xu, Gui-Hong; Huang, Wen-Zhe; Wang, Zhen-Zhong; Xiao, Wei
2017-09-01
The "design space" method was used to optimize the purification process of Resina Draconis phenol extracts by using the concept of "quality derived from design" (QbD). The content and transfer rate of laurin B and 7,4'-dihydroxyflavone and yield of extract were selected as the critical quality attributes (CQA). Plackett-Burman design showed that the critical process parameters (CPP) were concentration of alkali, the amount of alkali and the temperature of alkali dissolution. Then the Box-Behnken design was used to establish the mathematical model between CQA and CPP. The variance analysis results showed that the P values of the five models were less than 0.05 and the mismatch values were all greater than 0.05, indicating that the model could well describe the relationship between CQA and CPP. Finally, the control limits of the above 5 indicators (content and transfer rate of laurine B and 7,4'-dihydroxyflavone, as well as the extract yield) were set, and then the probability-based design space was calculated by Monte Carlo simulation and verified. The results of the design space validation showed that the optimized purification method can ensure the stability of the Resina Draconis phenol extracts refining process, which would help to improve the quality uniformity between batches of phenol extracts and provide data support for production automation control. Copyright© by the Chinese Pharmaceutical Association.
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005 provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessment, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of the normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to areas where the application of quality improvement and quality risk assessment principles can lead to six sigma-capable processes. Statistical process control is the most advantageous tool for determining the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment allows selection of critical quality attributes from among the quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
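A small Python sketch of the kind of calculation such a study chains together for a single critical quality attribute, an individuals control chart and Cp/Cpk capability indices; the tablet-weight data and specification limits below are assumed:

import numpy as np

rng = np.random.default_rng(5)
weights = rng.normal(250.0, 2.0, size=100)     # tablet weights (mg) from an assumed stable process
LSL, USL = 242.5, 257.5                        # assumed specification limits (mg)

# Individuals (I-MR) control chart limits from the average moving range
mr_bar = np.mean(np.abs(np.diff(weights)))
center = weights.mean()
UCL, LCL = center + 2.66 * mr_bar, center - 2.66 * mr_bar

# Process capability from the within-subgroup sigma estimate (d2 = 1.128 for moving ranges of size 2)
sigma_within = mr_bar / 1.128
Cp  = (USL - LSL) / (6 * sigma_within)
Cpk = min(USL - center, center - LSL) / (3 * sigma_within)
print(UCL, LCL, Cp, Cpk)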
Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy
2006-05-01
To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, which was performed by both the Cox method and re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R
2016-01-01
Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Ekonomou, L.; Karampelas, P.; Vita, V.; Chatzarakis, G. E.
2011-04-01
One of the most popular methods of protecting high voltage transmission lines against lightning strikes and internal overvoltages is the use of arresters. The installation of arresters on high voltage transmission lines can prevent or at least reduce the lines' failure rate. Several studies based on simulation tools have been presented in order to estimate the critical currents that exceed the arresters' rated energy stress and to specify the arresters' installation interval. In this work artificial intelligence, and more specifically a Q-learning artificial neural network (ANN) model, is employed to evaluate the arresters' failure probability. The aims of the paper are to describe in detail the developed Q-learning ANN model and to compare the results obtained by its application to operating 150 kV Greek transmission lines with those produced using a simulation tool. The satisfactory and accurate results of the proposed ANN model can make it a valuable tool for designers of electrical power systems seeking more effective lightning protection, reduced operational costs and better continuity of service.
Reliability of Radioisotope Stirling Convertor Linear Alternator
NASA Technical Reports Server (NTRS)
Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.
2006-01-01
Onboard radioisotope power systems being developed and planned for NASA's deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of a reliable performance for the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. Fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.
Reliability and Probabilistic Risk Assessment - How They Play Together
NASA Technical Reports Server (NTRS)
Safie, Fayssal; Stutts, Richard; Huang, Zhaofeng
2015-01-01
Since the Space Shuttle Challenger accident in 1986, NASA has extensively used probabilistic analysis methods to assess, understand, and communicate the risk of space launch vehicles. Probabilistic Risk Assessment (PRA), used in the nuclear industry, is one of the probabilistic analysis methods NASA utilizes to assess Loss of Mission (LOM) and Loss of Crew (LOC) risk for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability distributions to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: 1) what can go wrong that would lead to loss or degraded performance (i.e., scenarios involving undesired consequences of interest), 2) how likely is it (probabilities), and 3) what is the severity of the degradation (consequences). Since the Challenger accident, PRA has been used in supporting decisions regarding safety upgrades for launch vehicles. Another area that was given a lot of emphasis at NASA after the Challenger accident is reliability engineering. Reliability engineering has been a critical design function at NASA since the early Apollo days. However, after the Challenger accident, quantitative reliability analysis and reliability predictions were given more scrutiny because of their importance in understanding failure mechanism and quantifying the probability of failure, which are key elements in resolving technical issues, performing design trades, and implementing design improvements. Although PRA and reliability are both probabilistic in nature and, in some cases, use the same tools, they are two different activities. Specifically, reliability engineering is a broad design discipline that deals with loss of function and helps understand failure mechanism and improve component and system design. PRA is a system scenario based risk assessment process intended to assess the risk scenarios that could lead to a major/top undesirable system event, and to identify those scenarios that are high-risk drivers. PRA output is critical to support risk informed decisions concerning system design. This paper describes the PRA process and the reliability engineering discipline in detail. It discusses their differences and similarities and how they work together as complementary analyses to support the design and risk assessment processes. Lessons learned, applications, and case studies in both areas are also discussed in the paper to demonstrate and explain these differences and similarities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Ingelise, E-mail: inje@rn.d; Carl, Jesper; Lund, Bente
2011-07-01
Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications.
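A minimal Python sketch of a Poisson TCP calculation with linear-quadratic cell survival, in the spirit of the calculation described above; the radiosensitivity, α/β and clonogen number are invented placeholders, not the study's values:

import numpy as np

def tcp_poisson(d_per_fx, n_fx, alpha=0.3, alpha_beta=10.0, n_clonogens=1e7):
    """Poisson TCP for a uniform dose given in n_fx fractions of d_per_fx Gy (LQ survival)."""
    D = d_per_fx * n_fx
    log_sf = -alpha * D * (1.0 + d_per_fx / alpha_beta)   # linear-quadratic log cell survival
    return np.exp(-n_clonogens * np.exp(log_sf))          # probability that no clonogen survives

print(tcp_poisson(2.0, 41))   # 82 Gy in 41 fractions to the CTV, as in the plans above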
Feedback produces divergence from prospect theory in descriptive choice.
Jessup, Ryan K; Bishara, Anthony J; Busemeyer, Jerome R
2008-10-01
A recent study demonstrated that individuals making experience-based choices underweight small probabilities, in contrast to the overweighting observed in a typical descriptive paradigm. We tested whether trial-by-trial feedback in a repeated descriptive paradigm would engender choices more correspondent with experiential or descriptive paradigms. The results of a repeated gambling task indicated that individuals receiving feedback underweighted small probabilities, relative to their no-feedback counterparts. These results implicate feedback as a critical component during the decision-making process, even in the presence of fully specified descriptive information. A model comparison at the individual-subject level suggested that feedback drove individuals' decision weights toward objective probability weighting.
Influence of multiple categories on the prediction of unknown properties
Verde, Michael F.; Murphy, Gregory L.; Ross, Brian H.
2006-01-01
Knowing an item's category helps us predict its unknown properties. Previous studies suggest that when asked to evaluate the probability of an unknown property, people tend to consider only an item's most likely category, ignoring alternative categories. In the present study, property prediction took the form of either a probability rating or a speeded, binary-choice judgment. Consistent with past findings, subjects ignored alternative categories in their probability ratings. However, their binary-choice judgments were influenced by alternative categories. This novel finding suggests that the way category knowledge is used in prediction depends critically on the form of the prediction. PMID:16156183
Size Fluctuations of Near Critical Nuclei and Gibbs Free Energy for Nucleation of BDA on Cu(001)
NASA Astrophysics Data System (ADS)
Schwarz, Daniel; van Gastel, Raoul; Zandvliet, Harold J. W.; Poelsema, Bene
2012-07-01
We present a low-energy electron microscopy study of nucleation and growth of BDA on Cu(001) at low supersaturation. At sufficiently high coverage, a dilute BDA phase coexists with c(8×8) crystallites. The real-time microscopic information allows a direct visualization of near-critical nuclei, determination of the supersaturation and the line tension of the crystallites, and, thus, derivation of the Gibbs free energy for nucleation. The resulting critical nucleus size nicely agrees with the measured value. Nuclei up to 4-6 times larger still decay with finite probability, urging reconsideration of the classic perception of a critical nucleus.
Self-organized criticality in a cold plasma
NASA Astrophysics Data System (ADS)
Alex, Prince; Carreras, Benjamin Andres; Arumugam, Saravanan; Sinha, Suraj Kumar
2017-12-01
We present direct evidence for the existence of self-organized critical behavior in cold plasma. A multiple anodic double layer structure generated in a double discharge plasma setup shows critical behavior for the anode bias above a threshold value. Analysis of the floating potential fluctuations reveals the existence of long-range time correlations and power law behavior in the tail of the probability distribution function of the fluctuations. The measured Hurst exponent and the power law tail in the rank function are strong indication of the self-organized critical behavior of the system and hence provide a condition under which complexities arise in cold plasma.
The diagnostic value of troponin in critically ill.
Voga, Gorazd
2010-01-01
Troponin T and I are sensitive and specific markers of myocardial necrosis. They are used for the routine diagnosis of acute coronary syndrome. In critically ill patients they are a basic diagnostic tool for the diagnosis of myocardial necrosis due to myocardial ischemia. Moreover, increases in troponin I and T are associated with adverse outcomes in many subgroups of critically ill patients. The new, high-sensitivity tests that have been developed recently allow earlier and more accurate diagnosis of acute coronary syndrome. The use of the new tests has not been studied in critically ill patients, but they will probably replace the old tests and will be used on a routine basis.
Cosmological implications of Higgs near-criticality
NASA Astrophysics Data System (ADS)
Espinosa, J. R.
2018-01-01
The Standard Model electroweak (EW) vacuum, in the absence of new physics below the Planck scale, lies very close to the boundary between stability and metastability, with the last option being the most probable. Several cosmological implications of this so-called `near-criticality' are discussed. In the metastable vacuum case, the main challenges that the survival of the EW vacuum faces during the evolution of the Universe are analysed. In the stable vacuum case, the possibility of implementing Higgs inflation is critically examined. This article is part of the Theo Murphy meeting issue `Higgs cosmology'.
Transition probability, dynamic regimes, and the critical point of financial crisis
NASA Astrophysics Data System (ADS)
Tang, Yinan; Chen, Ping
2015-07-01
An empirical and theoretical analysis of financial crises is conducted based on statistical mechanics in non-equilibrium physics. The transition probability provides a new tool for diagnosing a changing market. Both calm and turbulent markets can be described by the birth-death process for price movements driven by identical agents. The transition probability in a time window can be estimated from stock market indexes. Positive and negative feedback trading behaviors can be revealed by the upper and lower curves in transition probability. Three dynamic regimes are discovered from two time periods including linear, quasi-linear, and nonlinear patterns. There is a clear link between liberalization policy and market nonlinearity. Numerical estimation of a market turning point is close to the historical event of the US 2008 financial crisis.
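A toy Python sketch of estimating a transition probability of upward price movements from an index series within a trailing time window; the discretization into up/down moves and the window length are arbitrary illustrative choices, not the birth-death estimator of the paper:

import numpy as np

def upward_transition_probability(prices, window=250):
    """P(next move up | current move up), estimated in a trailing window of daily closes."""
    moves = np.sign(np.diff(np.asarray(prices, dtype=float)))[-window:]
    ups = moves[:-1] > 0
    if ups.sum() == 0:
        return np.nan
    return np.mean(moves[1:][ups] > 0)

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))   # synthetic index series
print(upward_transition_probability(prices))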
Manktelow, Bradley N.; Seaton, Sarah E.
2012-01-01
Background Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results The first interpolation method ensured that the probability of an observation of an ‘in control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. We recommend that users of these methods be aware of the differences, and specify which interpolation method is to be used prior to any analysis. PMID:23029202
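As a baseline illustration in Python (using plain, non-interpolated Poisson quantiles; the paper's three interpolation methods refine exactly these discrete limits), funnel-plot control limits for an SMR can be computed as:

import numpy as np
from scipy.stats import poisson

def funnel_limits(expected, p=0.025):
    """Conservative SMR control limits from exact Poisson quantiles scaled by the expected count.

    With these limits the probability that an 'in-control' provider falls strictly beyond
    either limit is at most p per side; interpolation can tighten them."""
    expected = np.asarray(expected, dtype=float)
    lower = poisson.ppf(p, expected) / expected
    upper = poisson.ppf(1 - p, expected) / expected
    return lower, upper

E = np.array([5, 10, 20, 50, 100])          # expected deaths per provider
lo, hi = funnel_limits(E)
print(np.round(lo, 2), np.round(hi, 2))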
Guo, Yan; Li, Xiaoming; Fang, Xiaoyi; Lin, Xiuyun; Song, Yan; Jiang, Shuling; Stanton, Bonita
2011-01-01
Sample representativeness remains one of the challenges in effective HIV/STD surveillance and prevention targeting MSM worldwide. Although convenience samples are widely used in studies of MSM, previous studies suggested that these samples might not be representative of the broader MSM population. This issue becomes even more critical in many developing countries where needed resources for conducting probability sampling are limited. We examined variations in HIV and Syphilis infections and sociodemographic and behavioral factors among 307 young migrant MSM recruited using four different convenience sampling methods (peer outreach, informal social network, Internet, and venue-based) in Beijing, China in 2009. The participants completed a self-administered survey and provided blood specimens for HIV/STD testing. Among the four MSM samples using different recruitment methods, rates of HIV infections were 5.1%, 5.8%, 7.8%, and 3.4%; rates of Syphilis infection were 21.8%, 36.2%, 11.8%, and 13.8%; rates of inconsistent condom use were 57%, 52%, 58%, and 38%. Significant differences were found in various sociodemographic characteristics (e.g., age, migration history, education, income, places of employment) and risk behaviors (e.g., age at first sex, number of sex partners, involvement in commercial sex, and substance use) among samples recruited by different sampling methods. The results confirmed the challenges of obtaining representative MSM samples and underscored the importance of using multiple sampling methods to reach MSM from diverse backgrounds and in different social segments and to improve the representativeness of the MSM samples when the use of probability sampling approach is not feasible. PMID:21711162
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
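The strategy described above lends itself to a short sketch (not the authors' FPI code; the model, input distributions, threshold, and value of k are hypothetical): the "computer routine" is run k times at perturbed inputs, an approximating quadratic polynomial is fitted by least squares, and a small exceedance probability is then estimated on the cheap surrogate. For simplicity the final step uses plain Monte Carlo on the surrogate, where an FPI method would substitute a fast analytical integration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    # Stand-in for the real structural computer routine; x has two random inputs.
    return 3.0 + 0.8 * x[0] - 0.5 * x[1] + 0.1 * x[0] * x[1]

# k runs at perturbed input values
k = 25
X = rng.normal(0.0, 1.0, size=(k, 2))
Y = np.array([expensive_model(x) for x in X])

def design(X):
    # Quadratic polynomial basis in the two inputs
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

coef, *_ = np.linalg.lstsq(design(X), Y, rcond=None)

# Cheap exceedance probability on the fitted surrogate (plain Monte Carlo here;
# an FPI scheme would replace this sampling step with a fast analytical integration)
samples = rng.normal(0.0, 1.0, size=(200_000, 2))
y_hat = design(samples) @ coef
print("P(Y > 6) ~", (y_hat > 6.0).mean())
```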
NASA Astrophysics Data System (ADS)
Setar, Katherine Marie
1997-08-01
This dissertation analytically and critically examines composer Pauline Oliveros's philosophy of 'listening' as it applies to selected works created between 1961 and 1984. The dissertation is organized through the application of two criteria: three perspectives of listening (empirical, phenomenal, and, to a lesser extent, personal), and categories derived, in part, from her writings and interviews (improvisational, traditional, theatrical, electronic, meditational, and interactive). In general, Oliveros's works may be categorized by one of two listening perspectives. The 'empirical' listening perspective, which generally includes pure acoustic phenomena, independent of human interpretation, is exemplified in the analyses of Sound Patterns (1961), OH HA AH (1968), and, to a lesser extent, I of IV (1966). The 'phenomenal' listening perspective, which involves the human interaction with the pure acoustic phenomena, includes a critical examination of her post-1971 'meditation' pieces and an analytical and critical examination of her tonal 'interactive' improvisations in highly resonant space, such as Watertank Software (1984). The most pervasive element of Oliveros's stylistic evolution is her gradual change from the hierarchical aesthetic of the traditional composer to one in which creative control is more equally shared by all participants. Other significant contributions by Oliveros include the probable invention of the 'meditation' genre, an emphasis on the subjective perceptions of musical participants as a means to greater musical awareness, her musical exploration of highly resonant space, and her pioneering work in American electronic music. Both analytical and critical commentary were applied to selected representative works from Oliveros's six compositional categories. The analytical methods applied to Oliveros's works include Wayne Slawson's vowel/formant theory as described in his book, Sound Color, an original method of categorizing consonants as noise sources based upon the principles of the International Phonetic Association, traditional morphological analyses, linear-extrapolation analyses which are derived from Schenker's theory, and discussions of acoustic phenomena as they apply to such practices as 1960s electronic studio techniques and the dynamics of room acoustics.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
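As a hedged illustration of the distorted-probability idea, the sketch below applies one common inverted-S weighting family (the Tversky-Kahneman form, which may differ from the parameterization fitted in this study) to a two-urn Bayesian update; the prior, likelihoods, and gamma value are hypothetical.

```python
# Illustrative only: an inverted-S probability weighting function and its use in a
# distorted Bayesian update for a two-urn task. gamma = 0.6 is a hypothetical value.
def weight(p, gamma=0.6):
    """Inverted-S weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

prior = 0.7               # prior probability of urn A
lik_A, lik_B = 0.6, 0.4   # likelihood of the drawn ball colour under each urn

# Undistorted Bayesian posterior
post = prior * lik_A / (prior * lik_A + (1 - prior) * lik_B)

# Posterior computed from subjectively distorted probabilities
w_prior, w_lik_A, w_lik_B = weight(prior), weight(lik_A), weight(lik_B)
post_distorted = w_prior * w_lik_A / (w_prior * w_lik_A + (1 - w_prior) * w_lik_B)

print(round(post, 3), round(post_distorted, 3))
```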
Cramer, Emily
2016-01-01
Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
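The signal-noise ratio described above can be sketched in a few lines. The snippet uses simple method-of-moments quantities rather than the beta-binomial model fitted in the study, and the unit counts are hypothetical.

```python
# Minimal sketch of signal-noise reliability: the share of total variability in
# unit-level rates that reflects true between-unit differences (signal) rather
# than sampling noise. Counts below are hypothetical.
import numpy as np

events = np.array([3, 0, 5, 2, 8, 1])                 # pressure ulcers per unit-year
patients = np.array([400, 350, 500, 420, 610, 300])   # patients surveyed per unit

rates = events / patients
p_bar = events.sum() / patients.sum()

within = np.mean(p_bar * (1 - p_bar) / patients)      # binomial sampling noise
between = max(rates.var(ddof=1) - within, 0.0)        # excess variance across units

reliability = between / (between + within)
print(f"signal-noise reliability ~ {reliability:.2f}")
```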
Royle, J. Andrew; Chandler, Richard B.; Yackulic, Charles; Nichols, James D.
2012-01-01
1. Understanding the factors affecting species occurrence is a pre-eminent focus of applied ecological research. However, direct information about species occurrence is lacking for many species. Instead, researchers sometimes have to rely on so-called presence-only data (i.e. when no direct information about absences is available), which often results from opportunistic, unstructured sampling. MAXENT is a widely used software program designed to model and map species distribution using presence-only data. 2. We provide a critical review of MAXENT as applied to species distribution modelling and discuss how it can lead to inferential errors. A chief concern is that MAXENT produces a number of poorly defined indices that are not directly related to the actual parameter of interest – the probability of occurrence (ψ). This focus on an index was motivated by the belief that it is not possible to estimate ψ from presence-only data; however, we demonstrate that ψ is identifiable using conventional likelihood methods under the assumptions of random sampling and constant probability of species detection. 3. The model is implemented in a convenient r package which we use to apply the model to simulated data and data from the North American Breeding Bird Survey. We demonstrate that MAXENT produces extreme under-predictions when compared to estimates produced by logistic regression which uses the full (presence/absence) data set. We note that MAXENT predictions are extremely sensitive to specification of the background prevalence, which is not objectively estimated using the MAXENT method. 4. As with MAXENT, formal model-based inference requires a random sample of presence locations. Many presence-only data sets, such as those based on museum records and herbarium collections, may not satisfy this assumption. However, when sampling is random, we believe that inference should be based on formal methods that facilitate inference about interpretable ecological quantities instead of vaguely defined indices.
Statistical context shapes stimulus-specific adaptation in human auditory cortex
Henry, Molly J.; Fromboluti, Elisa Kim; McAuley, J. Devin
2015-01-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. PMID:25652920
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for Environment, Food and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Smith, David G.; Bailey, Robin J.
2018-01-01
Astrochronology employs spectral analysis of stratigraphic data series to substantiate and quantify depositional cyclicity, and thus to establish a probable causal link between cases of rhythmic bedding and periodic orbitally-forced climate change. Vaughan et al. (2011 - not cited by Ruhl et al.) showed that the spectral methods conventionally used in cyclostratigraphy will generate false positive results - they will identify multiple cycles that are not present in the data. Tests with synthetic random datasets are both a simple and an essential way to prove this. Ruhl et al. (2016) used the methods to which these criticisms apply in their analysis of XRF-compositional data series from the Early Jurassic of the Mochras borehole, Wales. We use properly corrected methods to re-examine some of their data, showing that their spectral results are not valid, thus casting doubt on their proposed calibration of Pliensbachian time.
An Introduction to Using Surface Geophysics to Characterize Sand and Gravel Deposits
Lucius, Jeffrey E.; Langer, William H.; Ellefsen, Karl J.
2006-01-01
This report is an introduction to surface geophysical techniques that aggregate producers can use to characterize known deposits of sand and gravel. Five well-established and well-tested geophysical methods are presented: seismic refraction and reflection, resistivity, ground penetrating radar, time-domain electromagnetism, and frequency-domain electromagnetism. Depending on site conditions and the selected method(s), geophysical surveys can provide information concerning areal extent and thickness of the deposit, thickness of overburden, depth to the water table, critical geologic contacts, and location and correlation of geologic features. In addition, geophysical surveys can be conducted prior to intensive drilling to help locate auger or drill holes, reduce the number of drill holes required, calculate stripping ratios to help manage mining costs, and provide continuity between sampling sites to upgrade the confidence of reserve calculations from probable reserves to proved reserves. Perhaps the greatest value of geophysics to aggregate producers may be the speed of data acquisition, reduced overall costs, and improved subsurface characterization.
An Introduction to Using Surface Geophysics to Characterize Sand and Gravel Deposits
Lucius, Jeffrey E.; Langer, William H.; Ellefsen, Karl J.
2007-01-01
This report is an introduction to surface geophysical techniques that aggregate producers can use to characterize known deposits of sand and gravel. Five well-established and well-tested geophysical methods are presented: seismic refraction and reflection, resistivity, ground penetrating radar, time-domain electromagnetism, and frequency-domain electromagnetism. Depending on site conditions and the selected method(s), geophysical surveys can provide information concerning areal extent and thickness of the deposit, thickness of overburden, depth to the water table, critical geologic contacts, and location and correlation of geologic features. In addition, geophysical surveys can be conducted prior to intensive drilling to help locate auger or drill holes, reduce the number of drill holes required, calculate stripping ratios to help manage mining costs, and provide continuity between sampling sites to upgrade the confidence of reserve calculations from probable reserves to proved reserves. Perhaps the greatest value of geophysics to aggregate producers may be the speed of data acquisition, reduced overall costs, and improved subsurface characterization.
Analysis of injury types for mixed martial arts athletes
Ji, MinJoon
2016-01-01
[Purpose] The purpose of the present study was to examine the types of injuries associated with mixed martial arts and their location in order to provide substantial information to help reduce the risk of these injuries during mixed martial arts. [Subjects and Methods] Data were collected from 455 mixed martial arts athletes who practiced mixed martial arts or who participated in mixed martial arts competitions in the Seoul Metropolitan City and Gyeongnam Province of Korea between June 3, 2015, and November 6, 2015. Questionnaires were used to collect the data. The convenience sampling method was used, based on the non-probability sampling extraction method. [Results] The arm, neck, and head were the most frequent locations of the injuries; and lacerations, concussions, and contusions were the most frequently diagnosed types of injuries in the mixed martial arts athletes in this study. [Conclusion] Reducing the risk of injury by establishing an alert system and preventing critical injuries by incorporating safety measures are important. PMID:27313367
Analysis of injury types for mixed martial arts athletes.
Ji, MinJoon
2016-05-01
[Purpose] The purpose of the present study was to examine the types of injuries associated with mixed martial arts and their location in order to provide substantial information to help reduce the risk of these injuries during mixed martial arts. [Subjects and Methods] Data were collected from 455 mixed martial arts athletes who practiced mixed martial arts or who participated in mixed martial arts competitions in the Seoul Metropolitan City and Gyeongnam Province of Korea between June 3, 2015, and November 6, 2015. Questionnaires were used to collect the data. The convenience sampling method was used, based on the non-probability sampling extraction method. [Results] The arm, neck, and head were the most frequent locations of the injuries; and lacerations, concussions, and contusions were the most frequently diagnosed types of injuries in the mixed martial arts athletes in this study. [Conclusion] Reducing the risk of injury by establishing an alert system and preventing critical injuries by incorporating safety measures are important.
2014-01-01
with the adverse event's potential impact, ranging from negligible to catastrophic. Appendix V includes a matrix of how USAID/Afghanistan assigns risk... International Development (USAID) assigns risk ratings based on potential impact and probability of occurrence of an identified risk. The impact measures... frequent. Combining impact and probability factors categorizes risks into critical, high, medium, and low clusters. Although subjective, it is
A critical analysis of high-redshift, massive, galaxy clusters. Part I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoyle, Ben; Jimenez, Raul; Verde, Licia
2012-02-01
We critically investigate current statistical tests applied to high redshift clusters of galaxies in order to test the standard cosmological model and describe their range of validity. We carefully compare a sample of high-redshift, massive, galaxy clusters with realistic Poisson sample simulations of the theoretical mass function, which include the effect of Eddington bias. We compare the observations and simulations using the following statistical tests: the distributions of ensemble and individual existence probabilities (in the >M, >z sense), the redshift distributions, and the 2d Kolmogorov-Smirnov test. Using seemingly rare clusters from Hoyle et al. (2011), and Jee et al. (2011) and assuming the same survey geometry as in Jee et al. (2011, which is less conservative than Hoyle et al. 2011), we find that the (>M, >z) existence probabilities of all clusters are fully consistent with ΛCDM. However assuming the same survey geometry, we use the 2d K-S test probability to show that the observed clusters are not consistent with being the least probable clusters from simulations at > 95% confidence, and are also not consistent with being a random selection of clusters, which may be caused by the non-trivial selection function and survey geometry. Tension can be removed if we examine only an X-ray-selected subsample, with simulations performed assuming a modified survey geometry.
Daneshkazemi, Alireza; Abrisham, Seyyed Mohammad; Daneshkazemi, Pedram; Davoudi, Amin
2016-01-01
Dental pain management is one of the most critical aspects of modern dentistry and might affect a patient's quality of life. Several methods have been suggested to provide a painless experience for patients; desensitization of the oral site using topical anesthetics is one of them. The improvement of topical anesthetic agents is probably one of the most important advances in dental science in the past 100 years. Most of these agents are safe and can be applied to the oral mucosa with minimal irritation and allergic reaction. At present, a variety of such agents are available, with different potencies and indications. Eutectic mixture of local anesthetics (EMLA) (lidocaine + prilocaine) is a commercial anesthetic agent that has gained acceptance among dental clinicians. This article provides a brief review of the efficacy of EMLA as a topical anesthetic agent when used during dental procedures. PMID:27746520
NASA Astrophysics Data System (ADS)
Xu, Yan; Dong, Zhao Yang; Zhang, Rui; Wong, Kit Po
2014-02-01
Maintaining transient stability is a basic requirement for secure power system operation. Preventive control deals with modifying the system operating point so that it can withstand probable contingencies. In this article, a decision tree (DT)-based on-line preventive control strategy is proposed for transient instability prevention in power systems. Given a stability database, a distance-based feature estimation algorithm is first applied to identify the critical generators, which are then used as features to develop a DT. By interpreting the splitting rules of the DT, preventive control is realised by formulating the rules in a standard optimal power flow model and solving it. The proposed method is transparent in its control mechanism, compatible with on-line computation, and convenient for handling multiple contingencies. The effectiveness and efficiency of the method have been verified on the New England 10-machine 39-bus test system.
NASA Astrophysics Data System (ADS)
Taylor, Gabriel James
The failure of electrical cables exposed to severe thermal fire conditions is a safety concern for operating commercial nuclear power plants (NPPs). The Nuclear Regulatory Commission (NRC) has promoted the use of risk-informed and performance-based methods for fire protection, which has resulted in a need to develop realistic methods to quantify the risk of fire to NPP safety. Recent electrical cable testing has been conducted to provide empirical data on the failure modes and likelihood of fire-induced damage. This thesis evaluated numerous aspects of these data. Circuit characteristics affecting fire-induced electrical cable failure modes have been evaluated. In addition, thermal failure temperatures corresponding to cable functional failures have been evaluated to develop realistic single-point thermal failure thresholds and probability distributions for specific cable insulation types. Finally, the data were used to evaluate the prediction capabilities of a one-dimensional conductive heat transfer model used to predict cable failure.
Daneshkazemi, Alireza; Abrisham, Seyyed Mohammad; Daneshkazemi, Pedram; Davoudi, Amin
2016-01-01
Dental pain management is one of the most critical aspects of modern dentistry and might affect a patient's quality of life. Several methods have been suggested to provide a painless experience for patients; desensitization of the oral site using topical anesthetics is one of them. The improvement of topical anesthetic agents is probably one of the most important advances in dental science in the past 100 years. Most of these agents are safe and can be applied to the oral mucosa with minimal irritation and allergic reaction. At present, a variety of such agents are available, with different potencies and indications. Eutectic mixture of local anesthetics (EMLA) (lidocaine + prilocaine) is a commercial anesthetic agent that has gained acceptance among dental clinicians. This article provides a brief review of the efficacy of EMLA as a topical anesthetic agent when used during dental procedures.
Phase transition in the countdown problem
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Luque, Bartolo
2012-07-01
We present a combinatorial decision problem, inspired by the celebrated quiz show called Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game evidences a threshold phenomenon that can be understood in the terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that such probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space in different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and permit us to extrapolate the behavior in the thermodynamic limit.
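A brute-force sketch of this kind of reachability game is easy to write down (the number range, target, and trial counts below are illustrative, not the paper's exact setup); it shows how the winning probability climbs steeply with the set size k.

```python
# Small, slow brute force intended only for tiny k: enumerate every value that can
# be formed from the chosen integers with +, -, *, / and estimate the probability
# that a target is reachable.
import random
from fractions import Fraction

def reachable(nums):
    """All values obtainable from nums by repeatedly combining two of them."""
    if len(nums) == 1:
        return set(nums)
    out = set(nums)
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i == j:
                continue
            rest = [nums[m] for m in range(len(nums)) if m not in (i, j)]
            a, b = nums[i], nums[j]
            candidates = [a + b, a - b, a * b] + ([a / b] if b != 0 else [])
            for c in candidates:
                out |= reachable(rest + [c])
    return out

def win_probability(k, target=Fraction(100), trials=100, low=1, high=20):
    wins = 0
    for _ in range(trials):
        nums = [Fraction(random.randint(low, high)) for _ in range(k)]
        if target in reachable(nums):
            wins += 1
    return wins / trials

for k in (2, 3, 4):
    print(k, win_probability(k))
```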
Measuring public opinion on alcohol policy: a factor analytic study of a US probability sample.
Latimer, William W; Harwood, Eileen M; Newcomb, Michael D; Wagenaar, Alexander C
2003-03-01
Public opinion has been one factor affecting change in policies designed to reduce underage alcohol use. Extant research, however, has been criticized for using single survey items of unknown reliability to define adult attitudes on alcohol policy issues. The present investigation addresses a critical gap in the literature by deriving scales on public attitudes, knowledge, and concerns pertinent to alcohol policies designed to reduce underage drinking using a US probability sample survey of 7021 adults. Five attitudinal scales were derived from exploratory and confirmatory factor analyses addressing policies to: (1) regulate alcohol marketing, (2) regulate alcohol consumption in public places, (3) regulate alcohol distribution, (4) increase alcohol taxes, and (5) regulate youth access. The scales exhibited acceptable psychometric properties and were largely consistent with a rational framework which guided the survey construction.
NASA Technical Reports Server (NTRS)
Brenning, N.; Faelthammar, C.-G.; Marklund, G.; Haerendel, G.; Kelley, M. C.; Pfaff, R.
1991-01-01
The quasi-dc electric fields measured in the CRIT I ionospheric release experiment are studied. In the experiment, two identical barium shaped charges were fired toward a main payload, and three-dimensional measurements of the electric field inside the streams were made. The relevance of proposed mechanisms for electron heating in the critical ionization velocity (CIV) mechanism is addressed. It is concluded that both the 'homogeneous' and the 'ionizing front' models probably are valid, but in different parts of the streams. It is also possible that electrons are directly accelerated by a magnetic field-aligned component of the electric field. The coupling between the ambient ionosphere and the ionized barium stream is more complicated than is usually assumed in CIV theories, with strong magnetic-field-aligned electric fields and probably current limitation as important processes.
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
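A minimal sketch of the underlying quantization idea, not the specific coder developed here: if the probability of the less probable symbol is approximated by a power of two, the interval update in binary arithmetic coding reduces to a shift.

```python
# Illustration of the principle only: approximate the less-probable-symbol (LPS)
# probability by 2**-s so the interval split needs no multiplication.

def quantize_lps(p_lps):
    """Return exponent s such that 2**-s best approximates p_lps (0 < p_lps <= 0.5)."""
    best_s, best_err = 1, abs(p_lps - 0.5)
    s = 1
    while 2.0 ** -s > p_lps / 4:          # search a small neighbourhood of exponents
        err = abs(p_lps - 2.0 ** -s)
        if err < best_err:
            best_s, best_err = s, err
        s += 1
    return best_s

width = 1 << 16                 # current coding interval width (integer arithmetic)
s = quantize_lps(0.2)           # 0.2 ~ 2**-2 = 0.25
lps_width = width >> s          # multiplication-free interval update
mps_width = width - lps_width
print(s, lps_width, mps_width)
```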
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, L.L.; Wilson, J.R.; Sanchez, L.C.
The United States Department of Energy Office of Environmental Management's (DOE/EM's) National Spent Nuclear Fuel Program (NSNFP), through a collaboration between Sandia National Laboratories (SNL) and Idaho National Engineering and Environmental Laboratory (INEEL), is conducting a systematic Nuclear Dynamics Consequence Analysis (NDCA) of the disposal of SNFs in an underground geologic repository sited in unsaturated tuff. This analysis is intended to provide interim guidance to the DOE for the management of the SNF while they prepare for final compliance evaluation. This report presents results from a Nuclear Dynamics Consequence Analysis (NDCA) that examined the potential consequences and risks of criticality during the long-term disposal of spent nuclear fuel owned by DOE-EM. This analysis investigated the potential of post-closure criticality, the consequences of a criticality excursion, and the probability frequency for post-closure criticality. The results of the NDCA are intended to provide the DOE-EM with a technical basis for measuring risk which can be used for screening arguments to eliminate post-closure criticality FEPs (features, events and processes) from consideration in the compliance assessment because of either low probability or low consequences. This report is composed of an executive summary (Volume 1), the methodology and results of the NDCA (Volume 2), and the applicable appendices (Volume 3).
NASA Astrophysics Data System (ADS)
Galushina, T. Yu.; Titarenko, E. Yu
2014-12-01
The purpose of this work is to investigate the probabilistic orbital evolution of near-Earth asteroids (NEAs) moving in the vicinity of resonances with Mercury. In order to identify such objects, the equations of motion of all NEAs were integrated over the time interval (1000, 3000 years). The initial data were taken from the E. Bowell catalog as of February 2014, and the equations of motion were integrated numerically by the Everhart method. The resonance characteristics are the critical argument, which relates the longitudes of the asteroid and the planet, and its time derivative, called the resonance "band". The study identified 15 asteroids moving in the vicinity of different resonances with Mercury. Six of them (52381 1993 HA, 172034 2001 WR1, 2008 VB1, 2009 KT4, 2013 CQ35, 2013 TH) move in the vicinity of the 1/6 resonance, five (142561 2002 TX68, 159608 2002 AC2, 241370 2008 LW8, 2006 UR216, 2009 XB2) move in the vicinity of the 1/9 resonance, and one asteroid each moves in the vicinity of the 1/3, 1/7, 1/8, and 2/7 resonances (2006 SE6, 2002 CV46, 2013 CN35, and 2006 VY2, respectively). The orbits of all identified asteroids were improved by the least squares method using the available optical observations, and their probabilistic orbital evolution was investigated. The improvement was carried out at the epoch of best conditioning, accounting for perturbations from the major planets, Pluto, the Moon, Ceres, Pallas, and Vesta, relativistic effects from the Sun, and solar oblateness. Estimation of the nonlinearity factor showed that for all the considered NEAs it does not exceed the critical value of 0.1, which makes it possible to use the linear method for constructing the initial probability domain. The domain was built as an ellipsoid in the six-dimensional phase space of coordinates and velocity components on the basis of the full covariance matrix, with the nominal orbit obtained from the improvement at its center. Then 10,000 clones distributed according to the normal law were chosen within the initial probability domain. The nonlinear method, i.e., numerical integration of the differential equations of motion of each clone, was used to study the probabilistic orbital evolution, with the force model corresponding to that used in the improvement. The time interval was limited by the DE406 ephemeris and the accuracy of integration, and amounted to between two and six thousand years for different objects. As a result of the orbit improvement from the available optical positional observations, it turned out that the orbits of NEAs 2006 SE6, 2009 KT4, 2013 CQ35, 2013 TH, 2002 CV46, 2013 CN35, and 2006 VY2 are poorly defined, which does not allow conclusions about their resonance capture. The remaining objects can be divided into two classes. Asteroids 172034 2001 WR1, 2008 VB1, 159608 2002 AC2, and 2006 UR216 move in the vicinity of the resonance over the entire interval of the study. The probability domains of NEAs 52381 1993 HA, 142561 2002 TX68, 241370 2008 LW8, and 2009 XB2 increase significantly under the influence of close encounters, and some of the clones leave the resonance. It should be noted that for all the considered objects the critical argument librates around a moving center or circulates, which suggests that the resonance is unstable.
Probability techniques for reliability analysis of composite materials
NASA Technical Reports Server (NTRS)
Wetherhold, Robert C.; Ucci, Anthony M.
1994-01-01
Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
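The reliability formulation used above, the probability that an explicit limit-state function of random variables falls below a threshold, can be sketched as follows; the limit-state function and distributions are illustrative stand-ins rather than a laminate failure criterion, and the snippet contrasts simple Monte Carlo with importance sampling centred near the failure region.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def g(strength, stress):
    return strength - stress            # failure when g < 0

mu_R, sd_R = 100.0, 10.0                # strength distribution (hypothetical)
mu_S, sd_S = 60.0, 12.0                 # stress distribution (hypothetical)

# Simple Monte Carlo
n = 1_000_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
p_simple = np.mean(g(R, S) < 0)

# Importance sampling: draw from densities shifted toward the likely failure point
shift_R, shift_S = 80.0, 80.0
Ri = rng.normal(shift_R, sd_R, n)
Si = rng.normal(shift_S, sd_S, n)
w = (norm.pdf(Ri, mu_R, sd_R) * norm.pdf(Si, mu_S, sd_S)) / (
    norm.pdf(Ri, shift_R, sd_R) * norm.pdf(Si, shift_S, sd_S))
p_is = np.mean(w * (g(Ri, Si) < 0))

print(p_simple, p_is)   # both estimate the failure probability; reliability = 1 - p
```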
A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.
Faya, Paul; Stamey, James D; Seaman, John W
2017-01-01
For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
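For orientation, the classical point-estimate versions of these quantities can be computed in a few lines (this is not the Bayesian treatment proposed in the paper; the temperature profile and D-value are hypothetical): F0 accumulates equivalent minutes at 121.1 °C using z = 10 °C, and the expected log reduction is F0 divided by the D-value at the reference temperature.

```python
import numpy as np

z = 10.0            # C, temperature change giving a tenfold change in lethal rate
T_ref = 121.1       # C, reference temperature
D_ref = 1.5         # min, hypothetical D-value of the reference organism at 121.1 C

# Hypothetical cycle: one temperature reading per minute (31 readings, 0..30 min)
temps = np.concatenate([np.linspace(100, 121.5, 10),
                        np.full(15, 121.5),
                        np.linspace(121.5, 100, 6)])

lethal_rate = 10.0 ** ((temps - T_ref) / z)   # equivalent minutes per minute
F0 = float(lethal_rate.sum() * 1.0)           # rectangle-rule integration, dt = 1 min

print(f"F0 = {F0:.1f} min, expected log reduction = {F0 / D_ref:.1f}")
```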
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
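A minimal version of the simulation-based Bayesian interval can be sketched as follows, assuming hypothetical validation counts and a Jeffreys Beta(0.5, 0.5) prior (the paper's exact prior and spreadsheet implementation may differ): sensitivity, specificity, and pretest probability are drawn from their posteriors and propagated to the posttest probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical validation data: (successes, failures)
sens_data = (45, 5)       # diseased subjects: test positive / negative
spec_data = (90, 10)      # healthy subjects: test negative / positive
prev_data = (20, 180)     # sample used to estimate the pretest probability

n = 100_000
sens = rng.beta(0.5 + sens_data[0], 0.5 + sens_data[1], n)
spec = rng.beta(0.5 + spec_data[0], 0.5 + spec_data[1], n)
prev = rng.beta(0.5 + prev_data[0], 0.5 + prev_data[1], n)

# Posttest (positive predictive) probability for each simulated draw
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
lo, hi = np.percentile(ppv, [2.5, 97.5])
print(f"posttest probability ~ {np.median(ppv):.2f} (95% interval {lo:.2f}-{hi:.2f})")
```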
NASA Astrophysics Data System (ADS)
Fanos, Ali Mutar; Pradhan, Biswajeet
2018-04-01
Rockfall poses a risk to people, their properties, and transportation routes in mountainous and hilly regions. This hazard exhibits various characteristics such as wide distribution, sudden occurrence, variable magnitude, high lethality, and randomness. Therefore, prediction of the rockfall phenomenon both spatially and temporally is a challenging task. The digital terrain model (DTM) is one of the most significant elements in rockfall source identification and risk assessment. Light detection and ranging (LiDAR) is the most advanced and effective technique to derive high-resolution and accurate DTMs. This paper presents a critical overview of the rockfall phenomenon (definition, triggering factors, motion modes and modeling) and of the LiDAR technique in terms of data pre-processing, DTM generation and the factors that can be obtained from this technique for rockfall source identification and risk assessment. It also reviews the existing methods that are utilized for the evaluation of rockfall trajectories and their characteristics (frequency, velocity, bouncing height and kinetic energy), probability, susceptibility, hazard and risk. Detailed consideration is given to quantitative methodologies in addition to the qualitative ones. Various methods are demonstrated with respect to their application scales (local and regional). Additionally, attention is given to the latest improvements, particularly including the consideration of the intensity of the phenomena and the magnitude of the events at chosen sites.
Gong, Xingchu; Zhang, Ying; Pan, Jianyang; Qu, Haibin
2014-01-01
A solvent recycling reflux extraction process for Panax notoginseng was optimized using a design space approach to improve the batch-to-batch consistency of the extract. Saponin yields, total saponin purity, and pigment yield were defined as the process critical quality attributes (CQAs). Ethanol content, extraction time, and the ratio of the recycling ethanol flow rate and initial solvent volume in the extraction tank (RES) were identified as the critical process parameters (CPPs) via quantitative risk assessment. Box-Behnken design experiments were performed. Quadratic models between CPPs and process CQAs were developed, with determination coefficients higher than 0.88. As the ethanol concentration decreases, saponin yields first increase and then decrease. A longer extraction time leads to higher yields of the ginsenosides Rb1 and Rd. The total saponin purity increases as the ethanol concentration increases. The pigment yield increases as the ethanol concentration decreases or extraction time increases. The design space was calculated using a Monte-Carlo simulation method with an acceptable probability of 0.90. Normal operation ranges to attain process CQA criteria with a probability of more than 0.914 are recommended as follows: ethanol content of 79–82%, extraction time of 6.1–7.1 h, and RES of 0.039–0.040 min−1. Most of the results of the verification experiments agreed well with the predictions. The verification experiment results showed that the selection of proper operating ethanol content, extraction time, and RES within the design space can ensure that the CQA criteria are met. PMID:25470598
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcome using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
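For readers who want to try such probability estimates, the sketch below uses scikit-learn (not the authors' companion code) to obtain class membership probabilities from k-NN, RF, and SVM learners on synthetic dichotomous data and to compare them with logistic regression via the Brier score.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=25),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(probability=True, random_state=0),  # Platt-type scaling internally
    "logistic": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name:8s} Brier score = {brier_score_loss(y_te, p):.3f}")
```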
Sensitivity and cost considerations for the detection and eradication of marine pests in ports.
Hayes, Keith R; Cannon, Rob; Neil, Kerry; Inglis, Graeme
2005-08-01
Port surveys are being conducted in Australia, New Zealand and around the world to confirm the presence or absence of particular marine pests. The most critical aspect of these surveys is their sensitivity: the probability that they will correctly identify a species as present if indeed it is present. This is not, however, adequately addressed in the relevant national and international standards. Simple calculations show that the sensitivity of port survey methods is closely related to their encounter rate: the average number of target individuals expected to be detected by the method. The encounter rate (which reflects any difference in relative pest density), divided by the cost of the method, provides one way to compare the cost-effectiveness of different survey methods. The most cost-effective survey method is site- and species-specific but, in general, will involve sampling from the habitat with the highest expected population of target individuals. A case study of Perna viridis in Trinity Inlet, Cairns, demonstrates that plankton trawls processed with gene probes provide the same level of sensitivity for a fraction of the cost associated with the next best available method, snorkel transects in poor visibility (Secchi depth = 0.72 m). Visibility and the adult/larvae ratio, however, are critical to these arguments. If visibility were good (Secchi depth = 10 m), the two approaches would be comparable. Diver-deployed quadrats were at least three orders of magnitude less cost-effective in this case study. It is very important that environmental managers and scientists perform sensitivity calculations before embarking on port surveys to ensure the highest level of sensitivity is achieved for any given budget.
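The encounter-rate reasoning can be made concrete with a small calculation. Assuming detections follow a Poisson process (an assumption consistent with, but not stated in, the abstract), sensitivity is roughly 1 - exp(-encounter rate), and encounter rate per unit cost gives a crude cost-effectiveness ranking; the figures below are hypothetical rather than the Trinity Inlet values.

```python
import math

methods = {
    # name: (expected detections per survey, cost per survey in $), hypothetical
    "plankton trawl + gene probe": (3.0, 5_000),
    "snorkel transects (poor visibility)": (0.4, 20_000),
    "diver quadrats": (0.01, 30_000),
}

for name, (encounter_rate, cost) in methods.items():
    sensitivity = 1.0 - math.exp(-encounter_rate)   # Poisson-detection assumption
    print(f"{name:38s} sensitivity ~ {sensitivity:.2f}, "
          f"encounter rate per $1000 = {1000 * encounter_rate / cost:.3f}")
```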
Raw material ‘criticality’—sense or nonsense?
NASA Astrophysics Data System (ADS)
Frenzel, M.; Kullik, J.; Reuter, M. A.; Gutzmer, J.
2017-03-01
The past decade has seen a resurgence of interest in the supply security of mineral raw materials. A key to the current debate is the concept of ‘criticality’. The present article reviews the criticality concept, as well as the methodologies used in its assessment, including a critical evaluation of their validity in view of classical risk theory. Furthermore, it discusses a number of risks present in global raw materials markets that are not captured by most criticality assessments. Proposed measures for the alleviation of these risks are also presented. We find that current assessments of raw material criticality are fundamentally flawed in several ways. This is mostly due to a lack of adherence to risk theory, and highly limits their applicability. Many of the raw materials generally identified as critical are probably not critical. Still, the flaws of current assessments do not mean that the general issue of supply security can simply be ignored. Rather, it implies that new assessments are required. While the basic theoretical framework for such assessments is outlined in this review, detailed method development will require a major collaborative effort between different disciplines along the raw materials value chain. In the opinion of the authors, the greatest longer-term challenge in the raw materials sector is to stop, or counteract the effects of, the escalation of unit energy costs of production. This issue is particularly pressing due to its close link with the renewable energy transition, requiring more metal and mineral raw materials per unit energy produced. The solution to this problem will require coordinated policy action, as well as the collaboration of scientists from many different fields—with physics, as well as the materials and earth sciences in the lead.
Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays
NASA Astrophysics Data System (ADS)
Badikov, Sergei; Chechev, Valery
2017-09-01
Under the assumption of a well-established decay scheme, the method provides a) exact balance relationships, b) lower uncertainties of the recommended absolute emission probabilities of particles and gamma rays compared to the traditional techniques, and c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results. The multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows the maximum entropy method to be used to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
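A classical worked example of the maximum-entropy assignment step (Jaynes' die problem rather than a quantum model) is sketched below: probabilities are assigned to outcomes 1-6 given only a mean constraint, and the solution takes the expected exponential-family form.

```python
import numpy as np
from scipy.optimize import brentq

outcomes = np.arange(1, 7)
target_mean = 4.5

def mean_given_lambda(lam):
    # Mean of the exponential-family distribution p_k ~ exp(-lam * k)
    w = np.exp(-lam * outcomes)
    return np.sum(outcomes * w) / np.sum(w)

# Solve for the Lagrange multiplier that matches the mean constraint
lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -5.0, 5.0)
p = np.exp(-lam * outcomes)
p /= p.sum()
print(np.round(p, 4), "mean =", round(float(np.sum(outcomes * p)), 3))
```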
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether to maneuver for collision avoidance or not. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
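A generic Wald sequential probability ratio test is easy to sketch (this is not the specific conjunction-assessment formulation of the paper; the hypotheses, noise level, and error rates are illustrative): the log-likelihood ratio of incoming observations is accumulated until one of the two Wald thresholds is crossed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

alpha, beta = 0.01, 0.01                  # false-alarm and missed-detection rates
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))   # Wald thresholds

mu0, mu1, sigma = 0.0, 1.0, 2.0           # H0 and H1 observation means, noise s.d.
llr = 0.0
data = rng.normal(mu1, sigma, size=1000)  # truth in this run: H1 ("maneuver needed")

for n, x in enumerate(data, start=1):
    llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
    if llr >= A:
        print(f"accept H1 (maneuver) after {n} observations")
        break
    if llr <= B:
        print(f"accept H0 (no maneuver) after {n} observations")
        break
```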
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for a sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru
2017-01-16
A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, a paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). Probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
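For context, a standard most-probable-number point estimate (not the Bayesian comparison model used in the study) simply maximizes the likelihood of the observed positive tubes across a dilution series under a Poisson assumption; the tube counts and volumes below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

volumes = np.array([1.0, 0.1, 0.01])     # g of sample per tube at each dilution
n_tubes = np.array([3, 3, 3])
positives = np.array([3, 1, 0])

def neg_log_lik(conc):
    # Probability a tube is positive if cells are Poisson with concentration conc
    p_pos = 1.0 - np.exp(-conc * volumes)
    p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
    return -np.sum(positives * np.log(p_pos)
                   + (n_tubes - positives) * np.log(1.0 - p_pos))

mpn = minimize_scalar(neg_log_lik, bounds=(1e-6, 1e4), method="bounded").x
print(f"MPN ~ {mpn:.2f} per g")
```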
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
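As a cross-check on methods like the one proposed here, finite-time transition probabilities for a truncated state space can also be obtained by exponentiating the generator matrix; this is far less efficient than the continued-fraction approach for large systems, and the linear birth-death rates below are purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

N = 200                      # truncation level of the state space {0, ..., N}
lam, mu = 0.5, 0.3           # per-particle birth and death rates (linear rates)

Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    birth = lam * n if n < N else 0.0
    death = mu * n
    if n < N:
        Q[n, n + 1] = birth
    if n > 0:
        Q[n, n - 1] = death
    Q[n, n] = -(birth + death)

t = 2.0
P = expm(Q * t)              # P[i, j] = Pr(X_t = j | X_0 = i), up to truncation error
print(P[5, :8].round(4))     # transition probabilities from 5 particles at time t
```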
Multiclass Posterior Probability Twin SVM for Motor Imagery EEG Classification.
She, Qingshan; Ma, Yuliang; Meng, Ming; Luo, Zhizeng
2015-01-01
Motor imagery electroencephalography is widely used in brain-computer interface systems. Due to the inherent characteristics of electroencephalography signals, accurate and real-time multiclass classification is always challenging. In order to solve this problem, a multiclass posterior probability solution for the twin SVM is proposed in this paper, based on ranking continuous outputs and pairwise coupling. First, a two-class posterior probability model is constructed to approximate the posterior probability using the ranking continuous output technique and Platt's estimation method. Second, a solution for multiclass probabilistic outputs of the twin SVM is provided by combining every pair of class probabilities according to the method of pairwise coupling. Finally, the proposed method is compared with multiclass SVM and twin SVM via voting, and with multiclass posterior probability SVM using different coupling approaches. The efficacy of the proposed method, in terms of classification accuracy and time complexity, has been demonstrated on both the UCI benchmark datasets and real-world EEG data from BCI Competition IV Dataset 2a.
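Pairwise coupling itself can be sketched compactly. The snippet below uses a standard Hastie-Tibshirani-style iteration (the paper's coupling variant may differ) to turn hypothetical pairwise probabilities r[i][j] = P(class i | class i or j) into a single multiclass posterior.

```python
import numpy as np

def couple_pairwise(r, iters=100):
    """Combine pairwise class probabilities r[i, j] into one multiclass posterior."""
    k = r.shape[0]
    p = np.full(k, 1.0 / k)
    for _ in range(iters):
        for i in range(k):
            num = sum(r[i, j] for j in range(k) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
            p[i] *= num / den            # multiplicative update toward consistency
        p /= p.sum()
    return p

# Hypothetical pairwise outputs for a 3-class motor-imagery trial (r[i,j] + r[j,i] = 1)
r = np.array([[0.0, 0.8, 0.6],
              [0.2, 0.0, 0.4],
              [0.4, 0.6, 0.0]])
print(couple_pairwise(r).round(3))
```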
NASA DOEPOD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book. DOEPOD is designed as a decision support system to validate inspection system, personnel, and protocol demonstrating 0.90 POD with 95% confidence at critical flaw sizes, a90/95. Although 0.90 POD with 95% confidence at critical flaw sizes is often stated as an inspection requirement in inspection documents, including NASA Standards, NASA critical aerospace applications have historically only accepted 0.978 POD or better with a 95% one-sided lower confidence bound exceeding 0.90 at critical flaw sizes, a90/95.
Inferring drug-disease associations based on known protein complexes.
Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin
2015-01-01
Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as in discovering novel functions of available drugs, i.e., drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where weights are assigned to drug-disease associations using probabilities. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases; the larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
AIS-2 radiometry and a comparison of methods for the recovery of ground reflectance
NASA Technical Reports Server (NTRS)
Conel, James E.; Green, Robert O.; Vane, Gregg; Bruegge, Carol J.; Alley, Ronald E.; Curtiss, Brian J.
1987-01-01
A field experiment and its results involving Airborne Imaging Spectrometer-2 data are described. The radiometry and spectral calibration of the instrument are critically examined in light of laboratory and field measurements. Three methods of compensating for the atmosphere in the search for ground reflectance are compared. It was found that laboratory-determined responsivities are 30 to 50 percent less than expected for conditions of the flight for both short and long wavelength observations. The combined system-atmosphere-surface signal-to-noise ratio, as indexed by the mean response divided by the standard deviation for selected areas, lies between 40 and 110, depending upon how scene averages are taken, and is 30 percent less for flight conditions than for the laboratory. Atmospheric and surface variations may contribute to this difference. It is not possible to isolate instrument performance from the present data. As for methods of data reduction, the so-called scene average or log-residual method fails to recover any feature present in the surface reflectance, probably because of the extreme homogeneity of the scene.
Oliveri, Paolo; López, M Isabel; Casolino, M Chiara; Ruisánchez, Itziar; Callao, M Pilar; Medini, Luca; Lanteri, Silvia
2014-12-03
A new class-modeling method, referred to as partial least squares density modeling (PLS-DM), is presented. The method is based on partial least squares (PLS), using a distance-based sample density measurement as the response variable. Potential function probability density is subsequently calculated on PLS scores and used, jointly with residual Q statistics, to develop efficient class models. The influence of adjustable model parameters on the resulting performance has been critically studied by means of cross-validation and application of the Pareto optimality criterion. The method has been applied to verify the authenticity of olives in brine from cultivar Taggiasca, based on near-infrared (NIR) spectra recorded on homogenized solid samples. Two independent test sets were used for model validation. The final optimal model was characterized by high efficiency and a well-balanced trade-off between sensitivity and specificity values, compared with those obtained by application of well-established class-modeling methods such as soft independent modeling of class analogy (SIMCA) and unequal dispersed classes (UNEQ). Copyright © 2014 Elsevier B.V. All rights reserved.
Projecting adverse event incidence rates using empirical Bayes methodology.
Ma, Guoguang Julie; Ganju, Jitendra; Huang, Jing
2016-08-01
Although there is considerable interest in adverse events observed in clinical trials, projecting adverse event incidence rates in an extended period can be of interest when the trial duration is limited compared to clinical practice. A naïve method for making projections might involve modeling the observed rates into the future for each adverse event. However, such an approach overlooks the information that can be borrowed across all the adverse event data. We propose a method that weights each projection using a shrinkage factor; the adverse event-specific shrinkage is a probability, based on empirical Bayes methodology, estimated from all the adverse event data, reflecting evidence in support of the null or non-null hypotheses. Also proposed is a technique to estimate the proportion of true nulls, called the common area under the density curves, which is a critical step in arriving at the shrinkage factor. The performance of the method is evaluated by projecting from interim data and then comparing the projected results with observed results. The method is illustrated on two data sets. © The Author(s) 2013.
Space shuttle solid rocket booster recovery system definition, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distribution for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as functions of impact velocity and component strength capability.
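A minimal sketch of that type of Monte Carlo failure estimate is shown below; the lognormal impact-velocity model, the normal strength model, and all parameter values are illustrative assumptions, not the study's actual distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical water-impact velocity (m/s) and component strength capability,
# expressed as the maximum impact velocity the component can tolerate (m/s).
impact_velocity = rng.lognormal(mean=np.log(25.0), sigma=0.15, size=n)
strength_capability = rng.normal(loc=30.0, scale=3.0, size=n)

# Failure occurs when the sampled impact velocity exceeds the sampled capability.
failures = impact_velocity > strength_capability
print(f"estimated component failure probability: {failures.mean():.4f}")
```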
Red List of spiders (araneae) of the Wadden Sea Area
NASA Astrophysics Data System (ADS)
Vangsgård, C.; Reinke, H.-D.; Schultz, W.; van Helsdingen, P. J.
1996-10-01
In the Wadden Sea, in total, 55 species of spiders are threatened in at least one subregion. Of these, 50 species are threatened in the entire area and are therefore placed on the trilateral Red List. According to the present knowledge, no species of the listed spiders are extinct in the entire Wadden Sea area. The status of 3 species of spiders is (probably) critical; 12 species are endangered; the status of 30 species is (probably) vulnerable and of 6 species susceptible.
NASA Astrophysics Data System (ADS)
Zhang, Jiaxin; Shields, Michael D.
2018-01-01
This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied for uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
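A compact sketch of the multimodel idea under stated assumptions (three candidate families, AIC-based Akaike weights as a stand-in for the information-theoretic weighting, and reweighting of a single common sample) is given below; it is not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(0.2, 0.4, size=15)          # a scarce data set

candidates = {"norm": stats.norm, "lognorm": stats.lognorm, "gamma": stats.gamma}
fits, aic = {}, {}
for name, dist in candidates.items():
    params = dist.fit(data)                       # maximum likelihood fit
    ll = dist.logpdf(data, *params).sum()
    fits[name] = params
    aic[name] = 2 * len(params) - 2 * ll          # Akaike information criterion

# Akaike weights: probability that each candidate is the best model (KL sense).
names = list(candidates)
d = np.array([aic[n] - min(aic.values()) for n in names])
w = np.exp(-0.5 * d)
w /= w.sum()
weights = dict(zip(names, w))
print("model weights:", {k: round(v, 3) for k, v in weights.items()})

# Importance density: the weighted mixture of the fitted candidates.
# Draw one common sample from the mixture, then reweight it per model.
n_s = 10_000
choice = rng.choice(len(names), size=n_s, p=w)
samples = np.concatenate([candidates[names[k]].rvs(*fits[names[k]],
                          size=(choice == k).sum(), random_state=rng)
                          for k in range(len(names))])
mix_pdf = sum(weights[n] * candidates[n].pdf(samples, *fits[n]) for n in names)
for n in names:
    iw = candidates[n].pdf(samples, *fits[n]) / mix_pdf   # importance ratios
    print(n, "mean response under model:", np.average(samples, weights=iw))
```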
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory except that the definition of a probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented about the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
NASA Astrophysics Data System (ADS)
Malarz, K.; Szvetelszky, Z.; Szekf, B.; Kulakowski, K.
2006-11-01
We consider the average probability X of being informed of a gossip in a given social network. The network is modeled within the random graph theory of Erdős and Rényi. In this theory, a network is characterized by two parameters: the size N and the link probability p. Our experimental data suggest three levels of social inclusion of friendship. The critical value pc, for which half of the agents are informed, scales with the system size as N^(-γ) with γ ≈ 0.68. Computer simulations show that the probability X varies with p as a sigmoidal curve. The influence of correlations between neighbors is also evaluated: with increasing clustering coefficient C, X decreases.
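As a toy illustration (not the gossip dynamics of the paper, which involves levels of social inclusion of friendship), the sketch below estimates the fraction of agents reachable from a randomly chosen source in an Erdős-Rényi graph G(N, p) and shows the sigmoidal dependence on p.

```python
import numpy as np
import networkx as nx

def informed_fraction(N, p, trials=50, seed=0):
    """Average fraction of nodes in the connected component of a random source
    node of G(N, p): a crude proxy for the informed fraction X."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        G = nx.gnp_random_graph(N, p, seed=int(rng.integers(1_000_000_000)))
        source = int(rng.integers(N))
        total += len(nx.node_connected_component(G, source)) / N
    return total / trials

N = 500
for p in (0.001, 0.002, 0.004, 0.008, 0.016):
    print(f"p = {p:.3f}  X ≈ {informed_fraction(N, p):.2f}")
```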
Does the rapid appearance of life on Earth suggest that life is common in the universe?
Lineweaver, Charles H; Davis, Tamara M
2002-01-01
It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets, older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe.
Species survival and scaling laws in hostile and disordered environments
NASA Astrophysics Data System (ADS)
Rocha, Rodrigo P.; Figueiredo, Wagner; Suweis, Samir; Maritan, Amos
2016-10-01
In this work we study the likelihood of survival of a single species in the context of hostile and disordered environments. Population dynamics in this environment, as modeled by the Fisher equation, is characterized by a negative average growth rate, except in some random spatially distributed patches that may support life. In particular, we are interested in the phase diagram of the survival probability and in the critical size problem, i.e., the minimum patch size required for surviving in the long-time dynamics. We propose a measure for the critical patch size as being proportional to the participation ratio of the eigenvector corresponding to the largest eigenvalue of the linearized Fisher dynamics. We obtain the (extinction-survival) phase diagram and the probability distribution function (PDF) of the critical patch sizes for two topologies, namely, the one-dimensional system and the fractal Peano basin. We show that both topologies share the same qualitative features, but the fractal topology requires higher spatial fluctuations to guarantee species survival. We perform a finite-size scaling and obtain the associated scaling exponents. In addition, we show that the PDF of the critical patch sizes has a universal shape for the 1D case in terms of the model parameters (diffusion, growth rate, etc.). In contrast, the diffusion coefficient has a drastic effect on the PDF of the critical patch sizes of the fractal Peano basin, and it does not obey the same scaling law as the 1D case.
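A minimal sketch of the participation-ratio measure on a one-dimensional discretized Fisher operator with random favorable patches is shown below; the discretization and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
L, dx, D = 200, 1.0, 1.0

# Growth rate: negative (hostile) everywhere except random favorable patches.
a = np.full(L, -1.0)
a[rng.random(L) < 0.1] = 2.0

# Linearized Fisher operator D*d^2/dx^2 + a(x), with a discrete Laplacian.
lap = (np.diag(-2.0 * np.ones(L)) + np.diag(np.ones(L - 1), 1)
       + np.diag(np.ones(L - 1), -1)) / dx**2
A = D * lap + np.diag(a)

eigval, eigvec = np.linalg.eigh(A)
phi = eigvec[:, -1]                        # eigenvector of the largest eigenvalue
pr = np.sum(phi**2)**2 / np.sum(phi**4)    # participation ratio
print(f"largest eigenvalue: {eigval[-1]:.3f}   participation ratio: {pr:.1f}")
```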
The German medical dissertation--time to change?
Diez, C; Arkenau, C; Meyer-Wentrup, F
2000-08-01
German medical students must conduct a research project and write a dissertation in order to receive the title "Doctor." However, the dissertation is not required to graduate, enter a residency, or practice medicine. About 90% of practicing physicians hold the title "Doctor"; a career in academic medicine almost always requires it. Although no convincing evidence supports the usefulness of the dissertation, many regard its completion as important to maintaining a high level of scientific competence and patient care. In recent years, the number of successfully completed dissertations has declined. Lack of time during medical school, the perceived irrelevance of the dissertation to medical practice, and the poor design of many projects may be at least part of the problem. There is also increasing evidence that conducting research frequently delays graduation and may affect clinical skills, because students working on projects attend fewer classes, ward rounds, and clinical tutorials and do not spend sufficient time preparing for examinations. The scientific value of students' research has also been criticized; critics point out that students do not have enough time or experience to critically analyze methods and data, and they often are not properly supervised. European unification will probably lead to standardized requirements for medical education and research. The authors hope this will eliminate the dissertation requirement in Germany.
Imprecise Probability Methods for Weapons UQ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.
One- to four-year-olds connect diverse positive emotional vocalizations to their probable causes
Wu, Yang; Muentener, Paul; Schulz, Laura E.
2017-01-01
The ability to understand why others feel the way they do is critical to human relationships. Here, we show that emotion understanding in early childhood is more sophisticated than previously believed, extending well beyond the ability to distinguish basic emotions or draw different inferences from positively and negatively valenced emotions. In a forced-choice task, 2- to 4-year-olds successfully identified probable causes of five distinct positive emotional vocalizations elicited by what adults would consider funny, delicious, exciting, sympathetic, and adorable stimuli (Experiment 1). Similar results were obtained in a preferential looking paradigm with 12- to 23-month-olds, a direct replication with 18- to 23-month-olds (Experiment 2), and a simplified design with 12- to 17-month-olds (Experiment 3; preregistered). Moreover, 12- to 17-month-olds selectively explored, given improbable causes of different positive emotional reactions (Experiments 4 and 5; preregistered). The results suggest that by the second year of life, children make sophisticated and subtle distinctions among a wide range of positive emotions and reason about the probable causes of others’ emotional reactions. These abilities may play a critical role in developing theory of mind, social cognition, and early relationships. PMID:29078315
Fuzzy-information-based robustness of interconnected networks against attacks and failures
NASA Astrophysics Data System (ADS)
Zhu, Qian; Zhu, Zhiliang; Wang, Yifan; Yu, Hai
2016-09-01
Cascading failure is fatal in applications, and its investigation is essential; it has therefore become a focal topic in the field of complex networks over the last decade. In this paper, a cascading failure model is established for interconnected networks and the associated data-packet transport problem is discussed. A distinguishing feature of the new model is its utilization of fuzzy information in resisting uncertain failures and malicious attacks. We numerically find that the giant component of the network after failures increases with the tolerance parameter for any coupling preference and attack ambiguity. Moreover, considering the effect of the coupling probability on the robustness of the networks, we find that the robustness of the network model under assortative and random coupling increases with the coupling probability. However, for disassortative coupling, there exists a critical phenomenon in the coupling probability. In addition, a critical value of the attack-information accuracy that affects the network robustness is observed. Finally, as a practical example, the interconnected AS-level Internet in South Korea and Japan is analyzed. The actual data validate the theoretical model and analytic results. This paper thus provides some guidelines for preventing cascading failures in the architectural design and optimization of real-world interconnected networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Pfenninger, Stefan
In this paper, we propose a strategy to control the self-organizing dynamics of the Bak-Tang-Wiesenfeld (BTW) sandpile model on complex networks by allowing some degree of failure tolerance for the nodes and introducing additional active dissipation while taking the risk of possible node damage. We show that the probability for large cascades significantly increases or decreases respectively when the risk for node damage outweighs the active dissipation and when the active dissipation outweighs the risk for node damage. By considering the potential additional risk from node damage, a non-trivial optimal active dissipation control strategy which minimizes the total cost in the system can be obtained. Under some conditions the introduced control strategy can decrease the total cost in the system compared to the uncontrolled model. Moreover, when the probability of damaging a node experiencing failure tolerance is greater than the critical value, then no matter how successful the active dissipation control is, the total cost of the system will have to increase. This critical damage probability can be used as an indicator of the robustness of a network or system. Copyright (C) EPLA, 2015
The estimation of tree posterior probabilities using conditional clade probability distributions.
Larget, Bret
2013-07-01
In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample.
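A minimal sketch of the conditional clade calculation under simplifying assumptions (trees encoded as a map from each parent clade to its pair of child clades, clades represented as frozensets of taxon labels) is shown below; it is not the article's software.

```python
from collections import Counter

def ccd_tables(trees):
    """Count parent clades and (parent -> split) events across a tree sample."""
    clade_counts, split_counts = Counter(), Counter()
    for tree in trees:                       # tree: {parent_clade: (child1, child2)}
        for parent, (c1, c2) in tree.items():
            clade_counts[parent] += 1
            split_counts[(parent, frozenset({c1, c2}))] += 1
    return clade_counts, split_counts

def tree_probability(tree, clade_counts, split_counts):
    """Estimated posterior probability of a tree as the product of
    conditional clade probabilities P(split | parent clade)."""
    p = 1.0
    for parent, (c1, c2) in tree.items():
        p *= split_counts[(parent, frozenset({c1, c2}))] / clade_counts[parent]
    return p

# Tiny hypothetical posterior sample of 4-taxon trees.
A, B, C, D = "A", "B", "C", "D"
root = frozenset({A, B, C, D})
t1 = {root: (frozenset({A, B}), frozenset({C, D})),
      frozenset({A, B}): (frozenset({A}), frozenset({B})),
      frozenset({C, D}): (frozenset({C}), frozenset({D}))}
t2 = {root: (frozenset({A, C}), frozenset({B, D})),
      frozenset({A, C}): (frozenset({A}), frozenset({C})),
      frozenset({B, D}): (frozenset({B}), frozenset({D}))}
sample = [t1, t1, t1, t2]
cc, sc = ccd_tables(sample)
print("P(t1) ≈", tree_probability(t1, cc, sc), "  P(t2) ≈", tree_probability(t2, cc, sc))
```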
Anticipating abrupt shifts in temporal evolution of probability of eruption
NASA Astrophysics Data System (ADS)
Rohmer, J.; Loschetter, A.
2016-04-01
Estimating the probability of eruption by jointly accounting for different sources of monitoring parameters over time is a key component for volcano risk management. In the present study, we are interested in the transition from a state of low-to-moderate probability value to a state of high probability value. By using the data of MESIMEX exercise at the Vesuvius volcano, we investigated the potential for time-varying indicators related to the correlation structure or to the variability of the probability time series for detecting in advance this critical transition. We found that changes in the power spectra and in the standard deviation estimated over a rolling time window both present an abrupt increase, which marks the approaching shift. Our numerical experiments revealed that the transition from an eruption probability of 10-15% to > 70% could be identified up to 1-3 h in advance. This additional lead time could be useful to place different key services (e.g., emergency services for vulnerable groups, commandeering additional transportation means, etc.) on a higher level of alert before the actual call for evacuation.
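The two indicators named above are straightforward to compute; the sketch below evaluates a rolling-window standard deviation and lag-1 autocorrelation on a synthetic eruption-probability time series (the synthetic series and the window length are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 48.0, 0.1)                          # hours
prob = 0.12 + 0.6 / (1 + np.exp(-(t - 40)))          # slow drift, then abrupt rise
prob = prob + rng.normal(0, 0.01 * (1 + t / 20), len(t))   # growing fluctuations

def rolling_indicators(x, window):
    """Rolling standard deviation and lag-1 autocorrelation (early-warning signals)."""
    sd, ac1 = [], []
    for i in range(window, len(x)):
        w = x[i - window:i]
        sd.append(w.std())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(sd), np.array(ac1)

sd, ac1 = rolling_indicators(prob, window=60)        # 6 h window at 0.1 h sampling
print("rolling std increase (last/first):", sd[-1] / sd[0])
print("lag-1 autocorrelation at start/end:", ac1[0], ac1[-1])
```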
Methods, apparatus and system for notification of predictable memory failure
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2017-01-03
A method for providing notification of a predictable memory failure includes the steps of: obtaining information regarding at least one condition associated with a memory; calculating a memory failure probability as a function of the obtained information; calculating a failure probability threshold; and generating a signal when the memory failure probability exceeds the failure probability threshold, the signal being indicative of a predicted future memory failure.
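The claim enumerates four steps; a minimal sketch of that flow is given below, where the logistic mapping from corrected-error counts and temperature to a failure probability, and the fixed threshold rule, are illustrative assumptions rather than the patented model.

```python
import math

def memory_failure_probability(corrected_errors_per_day, temperature_c):
    """Hypothetical logistic model mapping monitored memory conditions to a
    probability of imminent failure."""
    z = -6.0 + 0.08 * corrected_errors_per_day + 0.05 * (temperature_c - 40.0)
    return 1.0 / (1.0 + math.exp(-z))

def check_memory(corrected_errors_per_day, temperature_c, threshold=0.5):
    """Compare the computed failure probability to a threshold and signal."""
    p = memory_failure_probability(corrected_errors_per_day, temperature_c)
    if p > threshold:                      # generate the notification signal
        print(f"WARNING: predicted memory failure (p = {p:.2f} > {threshold})")
    return p

check_memory(corrected_errors_per_day=10, temperature_c=45)    # stays quiet
check_memory(corrected_errors_per_day=120, temperature_c=70)   # emits a warning
```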
Simulation of Blast on Porcine Head
2015-07-01
human cadaver heads (Wayne State Tolerance Curve), and concussive data from animals as well as long-duration human sled experiments have led to the...99% probability of producing concussion in Rhesus monkeys (whiplash injury on the sagittal plane) (Ommaya et al. 1967). However, since a single...correlated to brain injury: the critical rotation velocity ωcr = 42.1 rad/s and the critical acceleration αcr = 363 krad/s^2 for college football data
A Model for Assessing the Liability of Seemingly Correct Software
NASA Technical Reports Server (NTRS)
Voas, Jeffrey M.; Voas, Larry K.; Miller, Keith W.
1991-01-01
Current research on software reliability does not lend itself to quantitatively assessing the risk posed by a piece of life-critical software. Black-box software reliability models are too general and make too many assumptions to be applied confidently to assessing the risk of life-critical software. We present a model for assessing the risk caused by a piece of software; this model combines software testing results and Hamlet's probable correctness model. We show how this model can assess software risk for those who insure against a loss that can occur if life-critical software fails.
Cosmological implications of Higgs near-criticality.
Espinosa, J R
2018-03-06
The Standard Model electroweak (EW) vacuum, in the absence of new physics below the Planck scale, lies very close to the boundary between stability and metastability, with the last option being the most probable. Several cosmological implications of this so-called 'near-criticality' are discussed. In the metastable vacuum case, the main challenges that the survival of the EW vacuum faces during the evolution of the Universe are analysed. In the stable vacuum case, the possibility of implementing Higgs inflation is critically examined. This article is part of the Theo Murphy meeting issue 'Higgs cosmology'. © 2018 The Author(s).
Wind/tornado design criteria, development to achieve required probabilistic performance goals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, D.S.
1991-06-01
This paper describes the strategy for developing new design criteria for a critical facility to withstand loading induced by the wind/tornado hazard. The proposed design requirements for resisting wind/tornado loads are based on probabilistic performance goals. The proposed design criteria were prepared by a Working Group consisting of six experts in wind/tornado engineering and meteorology. Utilizing their best technical knowledge and judgment in the wind/tornado field, they met and discussed the methodologies and reviewed available data. A review of the available wind/tornado hazard model for the site, structural response evaluation methods, and conservative acceptance criteria led to proposed design criteria that have a high probability of achieving the required performance goals.
Therapeutic Hypothermia: Critical Review of the Molecular Mechanisms of Action
González-Ibarra, Fernando Pavel; Varon, Joseph; López-Meza, Elmer G.
2010-01-01
Therapeutic hypothermia (TH) is nowadays one of the most important methods of neuroprotection. The events that occur after an episode of ischemia are multiple, and hypothermia can affect the various steps of this cascade. The mechanisms of action of TH are varied, and the benefit of this therapy is probably explained by its multiple mechanisms of action blocking the ischemic cascade at many levels. TH can affect many metabolic pathways and inflammatory reactions, influence apoptosis processes, and promote neuronal integrity. Knowing the mechanisms of action of TH will allow a better understanding of the indications for this therapy, and the possibility of combining other therapies with hypothermia may provide a synergistic therapeutic effect. PMID:21331282
Quantification of network structural dissimilarities.
Schieber, Tiago A; Carpi, Laura; Díaz-Guilera, Albert; Pardalos, Panos M; Masoller, Cristina; Ravetti, Martín G
2017-01-09
Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extract only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, which is based on quantifying differences among distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
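In the spirit of that measure (though much simplified relative to the published dissimilarity), the sketch below compares two graphs through the Jensen-Shannon divergence between their shortest-path-length distributions; the graph choices and maximum path length are illustrative.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import jensenshannon

def distance_distribution(G, max_len):
    """Probability distribution of shortest-path lengths over reachable node pairs."""
    counts = np.zeros(max_len + 1)
    for _, lengths in nx.shortest_path_length(G):
        for d in lengths.values():
            if 0 < d <= max_len:
                counts[d] += 1
    return counts / counts.sum()

G1 = nx.erdos_renyi_graph(200, 0.05, seed=1)
G2 = nx.barabasi_albert_graph(200, 5, seed=1)
max_len = 12
p1 = distance_distribution(G1, max_len)
p2 = distance_distribution(G2, max_len)
# jensenshannon returns the JS distance; squaring gives the divergence.
print("JS divergence between distance distributions:", jensenshannon(p1, p2, base=2) ** 2)
```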
Characterization of emission properties of Er3+ ions in TeO2-CdF2-WO3 glasses.
Bilir, G; Mustafaoglu, N; Ozen, G; DiBartolo, B
2011-12-01
TeO(2)-CdF(2)-WO(3) glasses with various compositions and Er(3+) concentrations were prepared by the conventional melting method. Their optical properties were studied by measuring the absorption and luminescence spectra and the decay patterns at room temperature. From the optical absorption spectra the Judd-Ofelt parameters (Ω(t)), transition probabilities, branching ratios of various transitions, and radiative lifetimes were calculated. The absorption and emission cross-section spectra of the (4)I(15/2) to (4)I(13/2) transition of erbium were determined. Emission quantum efficiencies and the average critical distance R(0), which provides a measure for the strength of cross relaxation, were determined. Copyright © 2011 Elsevier B.V. All rights reserved.
Traino, A C; Marcatili, S; Avigo, C; Sollini, M; Erba, P A; Mariani, G
2013-04-01
Nonuniform activity within the target lesions and the critical organs constitutes an important limitation for dosimetric estimates in patients treated with tumor-seeking radiopharmaceuticals. The tumor control probability and the normal tissue complication probability are affected by the distribution of the radionuclide in the treated organ/tissue. In this paper, a straightforward method for calculating the absorbed dose at the voxel level is described. This new method takes into account a nonuniform activity distribution in the target/organ. The new method is based on the macroscopic S-values (i.e., the S-values calculated for the various organs, as defined in the MIRD approach), on the definition of the number of voxels, and on the raw-count 3D array, corrected for attenuation, scatter, and collimator resolution, in the lesion/organ considered. Starting from these parameters, the only mathematical operation required is to multiply the 3D array by a scalar value, thus avoiding all the complex operations involving the 3D arrays. A comparison with the MIRD approach, fully described in the MIRD Pamphlet No. 17, using S-values at the voxel level, showed a good agreement between the two methods for (131)I and for (90)Y. Voxel dosimetry is becoming more and more important when performing therapy with tumor-seeking radiopharmaceuticals. The method presented here does not require calculating the S-values at the voxel level, and thus bypasses the mathematical problems linked to the convolution of 3D arrays and to the voxel size. In the paper, the results obtained with this new simplified method as well as the possibility of using it for other radionuclides commonly employed in therapy are discussed. The possibility of using the correct density value of the tissue/organs involved is also discussed.
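As a rough numerical illustration of the scalar-rescaling idea (not the paper's validated procedure), the sketch below redistributes an organ mean dose, obtained from an organ-level S-value and cumulated activity, over voxels in proportion to the corrected count array; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical corrected raw-count 3D array for a lesion (attenuation, scatter
# and collimator-resolution corrections assumed already applied).
counts = rng.gamma(shape=2.0, scale=50.0, size=(16, 16, 8))

A_tilde = 1.0e13     # cumulated activity in the lesion (Bq*s), hypothetical
S_organ = 3.0e-12    # organ-level S-value (Gy per Bq*s), hypothetical
mean_dose = A_tilde * S_organ                  # MIRD-style organ mean dose (Gy)

n_voxels = counts.size
# Scalar multiplication only: each voxel dose is proportional to its count fraction.
voxel_dose = mean_dose * n_voxels * counts / counts.sum()

print(f"organ mean dose: {mean_dose:.1f} Gy")
print(f"voxel dose range: {voxel_dose.min():.1f} - {voxel_dose.max():.1f} Gy")
print(f"mean of voxel doses (equals the organ mean): {voxel_dose.mean():.1f} Gy")
```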
Reliability analysis of redundant systems. [a method to compute transition probabilities
NASA Technical Reports Server (NTRS)
Yeh, H. Y.
1974-01-01
A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation of the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.
Spacecraft Robustness to Orbital Debris: Guidelines & Recommendations
NASA Astrophysics Data System (ADS)
Heinrich, S.; Legloire, D.; Tromba, A.; Tholot, M.; Nold, O.
2013-09-01
The ever-increasing number of orbital debris has already led the space community to implement guidelines and requirements for "cleaner" and "safer" space operations, such as non-debris-generating missions and end-of-mission disposal, in order to rid the preserved orbits of space junk. It is now well known that man-made orbital debris impacts are a higher threat than natural micro-meteoroids, and that recent events have, intentionally or accidentally, generated so many new debris that they may initiate a cascade chain effect known as "the Kessler Syndrome," potentially jeopardizing the useful orbits. The main recommendation on satellite design is to demonstrate an acceptable Probability of Non-Penetration (PNP) with regard to the small (<5 cm) population of MMOD (Micro-Meteoroids and Orbital Debris). Compliance implies thinking about spacecraft robustness through redundancies, segregations, and shielding devices (as implemented in crewed missions, but in a more complex mass, cost, and criticality trade-off). Consequently, the need is not only to demonstrate compliance with the PNP requirement, but also the PNF (Probability of Non-Failure) per impact location on all parts of the vehicle, and to investigate the probabilities of the different fatal scenarios: loss of mission, loss of spacecraft (space environment critical), and spacecraft fragmentation (space environment catastrophic). The recent THALES experience of an increasing need for robustness, known from ESA Sentinel-3, has led the ALTRAN company to initiate an internal innovative working group on these topics, whose conclusions may be attractive for their prime manufacturer customers. The intention of this paper is to present the status of this study: * Regulations, requirements and tools available * Detailed FMECA studies dedicated specifically to the MMOD risks, with the introduction of new probability and criticality classification scales * Examples of design risk assessment with regard to the specific MMOD impact risks * Lessons learnt on robustness and survivability of systems (materials, shieldings, rules) coming from other industrial domains (automotive, military vehicles) * Guidelines and recommendations implementable on satellite systems and mechanical architecture.
Rácil, Z; Kocmanová, I; Wagnerová, B; Winterová, J; Lengerová, M; Moulis, M; Mayer, J
2008-01-01
PREMISES AND OBJECTIVES: Timely diagnosis is of critical importance for the prognosis of invasive aspergillosis (IA) patients. Over recent years, detection of galactomannan using the ELISA method has assumed growing importance in IA diagnosis. The objective of the study was to analyse the usability of the method in the current clinical practice of a hemato-oncological ward. From May 2003 to October 2006, blood samples were taken from patients at risk of IA to detect galactomannan (GM) in serum using the ELISA method. The patients who underwent the tests were classified by the probability of IA presence on the basis of the results of conventional diagnostic methods and autopsy findings. A total of 11,360 serum samples from 911 adult patients were tested for GM presence. IA (probable/proven) was diagnosed in 42 (4.6%) of them. The sensitivity, specificity, and positive and negative predictive values of galactomannan detection for IA diagnosis in our ward were, respectively, 95.2%, 90.0%, 31.5% and 99.7%. The principal causes of the limited positive predictive value of the test were the high percentage of false-positive test results (mainly caused by concomitant administration of some penicillin antibiotics or Plasma-Lyte infusion solution), as well as the fact that a large percentage of the patients we examined fell within the group of patients with hematological malignancies and a very low prevalence of IA. GM detection in serum is associated with high sensitivity and an excellent negative predictive value in IA diagnosis in hemato-oncological patients. Knowledge and elimination of possible causes of false-positive results, as well as focusing the screening on patients at greatest risk of infection, are necessary for an even better exploitation of the test.
Suryawanshi, Kulbhushansingh R; Bhatnagar, Yash Veer; Mishra, Charudutt
2012-07-01
Mountain ungulates around the world have been threatened by illegal hunting, habitat modification, increased livestock grazing, disease and development. Mountain ungulates play an important functional role in grasslands as primary consumers and as prey for wild carnivores, and monitoring of their populations is important for conservation purposes. However, most of the several currently available methods of estimating wild ungulate abundance are either difficult to implement or too expensive for mountainous terrain. A rigorous method of sampling ungulate abundance in mountainous areas that can allow for some measure of sampling error is therefore much needed. To this end, we used a combination of field data and computer simulations to test the critical assumptions associated with double-observer technique based on capture-recapture theory. The technique was modified and adapted to estimate the populations of bharal (Pseudois nayaur) and ibex (Capra sibirica) at five different sites. Conducting the two double-observer surveys simultaneously led to underestimation of the population by 15%. We therefore recommend separating the surveys in space or time. The overall detection probability for the two observers was 0.74 and 0.79. Our surveys estimated mountain ungulate populations (± 95% confidence interval) of 735 (± 44), 580 (± 46), 509 (± 53), 184 (± 40) and 30 (± 14) individuals at the five sites, respectively. A detection probability of 0.75 was found to be sufficient to detect a change of 20% in populations of >420 individuals. Based on these results, we believe that this method is sufficiently precise for scientific and conservation purposes and therefore recommend the use of the double-observer approach (with the two surveys separated in time or space) for the estimation and monitoring of mountain ungulate populations.
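For reference, the standard two-observer capture-recapture (Chapman-corrected Lincoln-Petersen) calculation behind such detection-probability estimates is sketched below with hypothetical counts; the authors' adaptation (surveys separated in space or time, site-level modifications) is not reproduced here.

```python
def double_observer_estimate(n1, n2, both):
    """Two-observer abundance and detection-probability estimates.
    n1, n2: animals detected by observer 1 and observer 2; both: detected by both."""
    p1 = both / n2                     # detection probability of observer 1
    p2 = both / n1                     # detection probability of observer 2
    # Chapman's bias-corrected Lincoln-Petersen abundance estimator and its variance.
    n_hat = (n1 + 1) * (n2 + 1) / (both + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - both) * (n2 - both)
           / ((both + 1) ** 2 * (both + 2)))
    return n_hat, var ** 0.5, p1, p2

# Hypothetical survey: observer 1 sees 410 bharal, observer 2 sees 385, 320 seen by both.
n_hat, se, p1, p2 = double_observer_estimate(410, 385, 320)
print(f"N_hat = {n_hat:.0f} +/- {1.96 * se:.0f} (95% CI half-width)")
print(f"detection probabilities: p1 = {p1:.2f}, p2 = {p2:.2f}")
```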
Monthly fire behavior patterns
Mark J. Schroeder; Craig C. Chandler
1966-01-01
From tabulated frequency distributions of fire danger indexes for a nationwide network of 89 stations, the probabilities of four types of fire behavior ranging from 'fire out' to 'critical' were calculated for each month and are shown in map form.
NASA Astrophysics Data System (ADS)
Whiteson, Daniel
2017-09-01
Most Americans probably don’t know the difference between nuclear physics and particle physics - they think it’s all atomic bombs and radiation-poisoned fish that glow sickly green in the dark - but for me, it’s a critical distinction.
Clonal Expansion (CE) Models in Cancer Risk Assessment
Cancer arises when cells accumulate sufficient critical mutations. Carcinogens increase the probability of mutation during cell division or promote clonal expansion within stages. Multistage CE models recapitulate this process and provide a framework for incorporating relevant da...
Fractional Brownian motion and the critical dynamics of zipping polymers.
Walter, J-C; Ferrantini, A; Carlon, E; Vanderzande, C
2012-03-01
We consider two complementary polymer strands of length L attached by a common-end monomer. The two strands bind through complementary monomers and at low temperatures form a double-stranded conformation (zipping), while at high temperature they dissociate (unzipping). This is a simple model of DNA (or RNA) hairpin formation. Here we investigate the dynamics of the strands at the equilibrium critical temperature T = T_c using Monte Carlo Rouse dynamics. We find that the dynamics is anomalous, with a characteristic time scaling as τ ~ L^2.26(2), exceeding the Rouse time ~L^2.18. We investigate the probability distribution function, velocity autocorrelation function, survival probability, and boundary behavior of the underlying stochastic process. These quantities scale as expected from a fractional Brownian motion with a Hurst exponent H = 0.44(1). We discuss similarities to and differences from unbiased polymer translocation.
Effects of shifts in the rate of repetitive stimulation on sustained attention
NASA Technical Reports Server (NTRS)
Krulewitz, J. E.; Warm, J. S.; Wohl, T. H.
1975-01-01
The effects of shifts in the rate of presentation of repetitive neutral events (background event rate) were studied in a visual vigilance task. Four groups of subjects experienced either a high (21 events/min) or a low (6 events/min) event rate for 20 min and then experienced either the same or the alternate event rate for an additional 40 min. The temporal occurrence of critical target signals was identical for all groups, irrespective of event rate. The density of critical signals was 12 signals/20 min. By the end of the session, shifts in event rate were associated with changes in performance which resembled contrast effects found in other experimental situations in which shift paradigms were used. Relative to constant event rate control conditions, a shift from a low to a high event rate depressed the probability of signal detections, while a shift in the opposite direction enhanced the probability of signal detections.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
Yura, Harold T; Hanson, Steen G
2012-04-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
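A minimal sketch of that two-step recipe (spectral shaping of white Gaussian noise followed by a memoryless transform to the target amplitude distribution) is given below; the Gaussian-shaped power spectrum and the exponential target distribution are illustrative choices, not the paper's examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N = 256

# Step 1: color white Gaussian noise to the desired power spectral density.
white = rng.standard_normal((N, N))
fx = np.fft.fftfreq(N)
FX, FY = np.meshgrid(fx, fx)
psd = np.exp(-(FX**2 + FY**2) / (2 * 0.02**2))          # illustrative Gaussian PSD
colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
colored = (colored - colored.mean()) / colored.std()     # unit-variance Gaussian field

# Step 2: memoryless transform to the target amplitude PDF (exponential here):
# map through the Gaussian CDF, then through the inverse CDF of the target.
u = stats.norm.cdf(colored)
field = stats.expon.ppf(u, scale=1.0)

print("sample mean (target 1.0):", field.mean())
print("sample variance (target 1.0):", field.var())
```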
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
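A small worked sketch of the MPP search and the FORM probability estimate for an illustrative limit-state function (not tied to the paper's examples) is shown below, using a general-purpose constrained optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Limit state in standard normal space: failure when g(u) <= 0.
# Illustrative g: resistance minus load, both functions of u1, u2.
def g(u):
    return 3.0 - u[0] - 0.5 * u[1] ** 2

# MPP: the point on g(u) = 0 closest to the origin (minimum-distance problem).
res = minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": g}])
u_star = res.x
beta = np.linalg.norm(u_star)                 # reliability index (minimum distance)
pf_form = norm.cdf(-beta)                     # FORM failure probability

print("MPP:", u_star, " beta:", round(beta, 3), " Pf (FORM):", pf_form)

# Crude Monte Carlo check; FORM linearizes the limit state at the MPP,
# so agreement is only approximate for curved limit states.
rng = np.random.default_rng(6)
U = rng.standard_normal((200_000, 2))
print("Pf (Monte Carlo):", np.mean(g(U.T) <= 0))
```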
Statistical context shapes stimulus-specific adaptation in human auditory cortex.
Herrmann, Björn; Henry, Molly J; Fromboluti, Elisa Kim; McAuley, J Devin; Obleser, Jonas
2015-04-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. Copyright © 2015 the American Physiological Society.
Jensen, Ingelise; Carl, Jesper; Lund, Bente; Larsen, Erik H; Nielsen, Jane
2011-01-01
Dose escalation in prostate radiotherapy is limited by normal tissue toxicities. The aim of this study was to assess the impact of margin size on tumor control and side effects for intensity-modulated radiation therapy (IMRT) and 3D conformal radiotherapy (3DCRT) treatment plans with increased dose. Eighteen patients with localized prostate cancer were enrolled. 3DCRT and IMRT plans were compared for a variety of margin sizes. A marker detectable on daily portal images was presupposed for narrow margins. Prescribed dose was 82 Gy within 41 fractions to the prostate clinical target volume (CTV). Tumor control probability (TCP) calculations based on the Poisson model including the linear quadratic approach were performed. Normal tissue complication probability (NTCP) was calculated for bladder, rectum and femoral heads according to the Lyman-Kutcher-Burman method. All plan types presented essentially identical TCP values and very low NTCP for bladder and femoral heads. Mean doses for these critical structures reached a minimum for IMRT with reduced margins. Two endpoints for rectal complications were analyzed. A marked decrease in NTCP for IMRT plans with narrow margins was seen for mild RTOG grade 2/3 as well as for proctitis/necrosis/stenosis/fistula, for which NTCP <7% was obtained. For equivalent TCP values, sparing of normal tissue was demonstrated with the narrow margin approach. The effect was more pronounced for IMRT than 3DCRT, with respect to NTCP for mild, as well as severe, rectal complications. Copyright © 2011 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
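For orientation, a bare-bones Lyman-Kutcher-Burman NTCP calculation from a differential dose-volume histogram is sketched below; the DVH and the rectal parameter set (n, m, TD50) are illustrative placeholders, not the values used in the study.

```python
import numpy as np
from scipy.stats import norm

def lkb_ntcp(dose_bins_gy, frac_volume, n, m, td50):
    """Lyman-Kutcher-Burman NTCP from a differential DVH.
    gEUD = (sum_i v_i * D_i^(1/n))^n ; t = (gEUD - TD50) / (m * TD50) ; NTCP = Phi(t)."""
    geud = np.sum(frac_volume * dose_bins_gy ** (1.0 / n)) ** n
    t = (geud - td50) / (m * td50)
    return geud, norm.cdf(t)

# Hypothetical rectal differential DVH (dose bins in Gy, fractional volumes sum to 1).
dose = np.array([10.0, 30.0, 50.0, 65.0, 75.0])
vol = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

# Illustrative LKB parameters for a rectal endpoint (placeholders only).
geud, ntcp = lkb_ntcp(dose, vol, n=0.12, m=0.15, td50=76.9)
print(f"gEUD = {geud:.1f} Gy, NTCP = {100 * ntcp:.1f}%")
```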
An oilspill risk analysis for the Mid-Atlantic (proposed sale 76) outer continental shelf lease area
Samuels, W.B.; Hopkins, Dorothy
1982-01-01
An oilspill risk analysis was conducted for the mid-Atlantic (proposed sale 76) Outer Continental Shelf (OCS) lease area. The analysis considered: the probability of spill occurrences based on historical trends; likely movement of oil slicks based on a climatological model; and locations of environmental resources which could be vulnerable to spilled oil. The times between spill occurrence and contact with resources were estimated to aid analysts in estimating slick characteristics. Critical assumptions made for this particular analysis were (1) that oil exists in the lease area, and (2) that 0.879 billion barrels of oil will be found and produced from tracts sold in sale 76. On the basis of this resource estimate, it was calculated that 3 to 4 oilspills of 1,000 barrels or greater will occur over the 30-year production life of the proposed sale 76 lease tracts. The results also depend upon the routes and methods chosen to transport oil from OCS platforms to shore. Given the above assumptions, the estimated probability that one or more oilspills of 1,000 barrels or larger will occur and contact land after being at sea less than 30 days is 0.36; for spills 10,000 barrels or larger, the probability is 0.22. These probabilities also reflect the following assumptions: oilspills remain intact for up to 30 days, do not weather, and are not cleaned up. It is noteworthy that over 90 percent of the risk from proposed sale 76 is due to transportation rather than production of oil. In addition, the risks from proposed sale 76 are about 1/10 to 1/15 those of existing tanker transportation of crude oil imports and refined products in the mid-Atlantic area.
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial parameters and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status, and it is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
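A minimal sketch of the bootstrap-imputation idea under stated assumptions (a hypothetical model-derived probability per patient and a single binary covariate, with the association summarized as an odds ratio) follows; it is not the paper's validated model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Hypothetical inputs: a binary covariate and a model-derived probability of
# severe renal failure for each hospitalized patient.
covariate = rng.integers(0, 2, size=n)
prob_renal_failure = np.clip(0.03 + 0.05 * covariate + rng.normal(0, 0.01, n),
                             0.001, 0.999)

def odds_ratio(status, x):
    """Crude 2x2 odds ratio between imputed disease status and a binary covariate."""
    a = np.sum((status == 1) & (x == 1)); b = np.sum((status == 1) & (x == 0))
    c = np.sum((status == 0) & (x == 1)); d = np.sum((status == 0) & (x == 0))
    return (a * d) / (b * c)

# Bootstrap imputation: draw disease status from Bernoulli(p_i) in each replicate,
# then summarize prevalence and association across replicates.
prev, ors = [], []
for _ in range(200):
    status = rng.binomial(1, prob_renal_failure)
    prev.append(status.mean())
    ors.append(odds_ratio(status, covariate))

print(f"prevalence: {np.mean(prev):.3f} "
      f"(2.5-97.5%: {np.percentile(prev, 2.5):.3f}-{np.percentile(prev, 97.5):.3f})")
print(f"odds ratio vs covariate: {np.mean(ors):.2f}")
```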
Visual feedback system to reduce errors while operating roof bolting machines
Steiner, Lisa J.; Burgess-Limerick, Robin; Eiter, Brianna; Porter, William; Matty, Tim
2015-01-01
Problem: Operators of roof bolting machines in underground coal mines operate these machines in confined spaces and in very close proximity to the moving equipment. Errors in the operation of these machines can have serious consequences, and the design of the equipment interface has a critical role in reducing the probability of such errors. Methods: An experiment was conducted to explore coding and directional compatibility on actual roof bolting equipment and to determine the feasibility of a visual feedback system to alert operators of critical movements and to also alert other workers in close proximity to the equipment to the pending movement of the machine. Results: The quantitative results of the study confirmed the potential for both selection errors and direction errors to be made, particularly during training. Subjective data confirmed a potential benefit of providing visual feedback of the intended operations and movements of the equipment. Impact: This research may influence the design of these and other similar control systems and provides evidence for the use of warning systems to improve operator situational awareness. PMID:23398703
Application of a time probabilistic approach to seismic landslide hazard estimates in Iran
NASA Astrophysics Data System (ADS)
Rajabi, A. M.; Del Gaudio, V.; Capolongo, D.; Khamehchiyan, M.; Mahdavifar, M. R.
2009-04-01
Iran is a country located in a tectonically active belt and is prone to earthquakes and related phenomena. In recent years, several earthquakes caused many fatalities and damage to facilities, e.g. the Manjil (1990), Avaj (2002), Bam (2003) and Firuzabad-e-Kojur (2004) earthquakes. These earthquakes generated many landslides. For instance, catastrophic landslides triggered by the Manjil Earthquake (Ms = 7.7) in 1990 buried the village of Fatalak, killed more than 130 people and cut many important roads and other lifelines, resulting in major economic disruption. In general, earthquakes in Iran have been concentrated in two major zones with different seismicity characteristics: one is the region of Alborz and Central Iran and the other is the Zagros Orogenic Belt. Understanding where seismically induced landslides are most likely to occur is crucial for reducing property damage and loss of life in future earthquakes. For this purpose a time probabilistic approach for earthquake-induced landslide hazard at regional scale, proposed by Del Gaudio et al. (2003), has been applied to the whole Iranian territory to provide the basis of hazard estimates. This method consists of evaluating the recurrence of seismically induced slope failure conditions inferred from Newmark's model. First, by adopting Arias intensity to quantify seismic shaking and using different Arias attenuation relations for the Alborz - Central Iran and Zagros regions, well-established methods of seismic hazard assessment, based on the Cornell (1968) method, were employed to obtain the occurrence probabilities for different levels of seismic shaking in a time interval of interest (50 years). Then, following Jibson (1998), empirical formulae specifically developed for Alborz - Central Iran and Zagros were used to represent, according to Newmark's model, the relation linking Newmark's displacement Dn to Arias intensity Ia and to slope critical acceleration ac. These formulae were employed to evaluate the slope critical acceleration (Ac)x for which a prefixed probability exists that seismic shaking would result in a Dn value equal to a threshold x whose exceedance would cause landslide triggering. The obtained ac values represent the minimum slope resistance required to keep the probability of seismic landslide triggering within the prefixed value. In particular, we calculated the spatial distribution of (Ac)x for x thresholds of 10 and 2 cm in order to represent triggering conditions for coherent slides (e.g., slumps, block slides, slow earth flows) and disrupted slides (e.g., rock falls, rock slides, rock avalanches), respectively. Then we produced a probabilistic national map that shows the spatial distribution of (Ac)10 and (Ac)2 for a 10% probability of exceedance in 50 years, a level of hazard equal to that commonly used for building codes. The spatial distribution of the calculated (Ac)x values can be compared with the in situ actual ac values of specific slopes to estimate whether these slopes have a significant probability of failing under seismic action in the future. As an example of a possible application of this kind of time probabilistic map to hazard estimates, we compared the values obtained for the Manjil region with a GIS map providing the spatial distribution of estimated ac values in the same region. The spatial distribution of slopes characterized by ac < (Ac)10 was then compared with the spatial distribution of the major landslides of coherent type triggered by the Manjil earthquake.
This comparison provides indications of the potential, problems and limits of the tested approach for the study area. References: Cornell, C.A., 1968: Engineering seismic risk analysis, Bull. Seism. Soc. Am., 58, 1583-1606. Del Gaudio, V., Wasowski, J., & Pierri, P., 2003: An approach to time-probabilistic evaluation of seismically induced landslide hazard, Bull. Seism. Soc. Am., 93, 557-569. Jibson, R.W., Harp, E.L. and Michael, J.A., 1998: A method for producing digital probabilistic seismic landslide hazard maps: an example from the Los Angeles, California, area, U.S. Geological Survey Open-File Report 98-113, Golden, Colorado, 17 pp.
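A minimal numerical sketch of the displacement-to-critical-acceleration step described above, assuming one widely cited form of the Jibson et al. (1998) regression (log Dn = 1.521 log Ia - 1.993 log ac - 1.546, with Dn in cm, Ia in m/s and ac in g); the coefficients and the example Arias intensity are illustrative stand-ins, not the region-specific formulae developed for Alborz - Central Iran and Zagros:

```python
import math

def newmark_displacement_cm(ia_m_per_s, ac_g):
    """Estimated Newmark displacement (cm) from Arias intensity Ia (m/s) and
    critical acceleration ac (g), using a widely cited Jibson et al. (1998)
    regression form; the coefficients here are illustrative."""
    log_dn = 1.521 * math.log10(ia_m_per_s) - 1.993 * math.log10(ac_g) - 1.546
    return 10 ** log_dn

def critical_acceleration_for_threshold(ia_m_per_s, dn_threshold_cm):
    """Invert the regression: the ac (g) at which the predicted Dn equals the
    triggering threshold x (10 cm for coherent, 2 cm for disrupted slides)."""
    log_ac = (1.521 * math.log10(ia_m_per_s) - 1.546
              - math.log10(dn_threshold_cm)) / 1.993
    return 10 ** log_ac

if __name__ == "__main__":
    ia = 2.0  # hypothetical Arias intensity (m/s) with 10% exceedance probability in 50 yr
    for x in (10.0, 2.0):
        print(f"(Ac)_{x:g} for Ia = {ia} m/s: "
              f"{critical_acceleration_for_threshold(ia, x):.3f} g")
```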
Relative Contributions of Three Descriptive Methods: Implications for Behavioral Assessment
ERIC Educational Resources Information Center
Pence, Sacha T.; Roscoe, Eileen M.; Bourret, Jason C.; Ahearn, William H.
2009-01-01
This study compared the outcomes of three descriptive analysis methods--the ABC method, the conditional probability method, and the conditional and background probability method--to each other and to the results obtained from functional analyses. Six individuals who had been diagnosed with developmental delays and exhibited problem behavior…
A Mixed-Method Study Exploring Depression in U.S. Citizen-Children in Mexican Immigrant Families
Gulbas, Lauren E.; Zayas, Luis H.; Yoon, Hyunwoo; Szlyk, Hannah; Aguilar-Gaxiola, Sergio; Natera, Guillermina
2016-01-01
Background: There is a critical need to document the mental health effects of immigration policies and practices on children vulnerable to parental deportation. Few studies capture the differential experiences produced by U.S. citizen-children's encounters with immigration enforcement, much less in ways that analyze mental health outcomes alongside the psychosocial contexts within which those outcomes arise. Methods: We explore the psychosocial dimensions of depression in U.S. citizen-children with undocumented Mexican parents to examine differences between citizen-children affected and not affected by parental deportation. An exploratory mixed-method design was used to integrate a quantitative measure of depression symptoms (CDI-2) within qualitative data collected with 48 citizen-children aged 8 to 15 with and without experiences of parental deportation. Results: Stressors elicited by citizen-children in the qualitative interview included an inability to communicate with friends, negative perceptions of Mexico, financial struggles, loss of supportive school networks, stressed relation with parent(s), and violence. Fifty percent of citizen-children with probable depression, regardless of experiences with parental deportation, cited "stressed relation with parents," compared to 9% without depression. In contrast, themes of "loss of supportive school network" and "violence" were mentioned almost exclusively by citizen-children with probable depression and affected by parental deportation. Conclusions: While citizen-children who suffer parental deportation experience the most severe consequences associated with immigration enforcement, our findings also suggest that the burden of mental health issues extends to those children concomitantly affected by immigration enforcement policies that target their undocumented parents. PMID:26648588
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
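The effect of distance-related heterogeneity on unadjusted counts can be illustrated with a small simulation, assuming the half-normal detection function named in the abstract; the bird density, fixed radius and sigma values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_point_count(true_n=100, w=100.0, sigma=50.0, occasions=4, reps=2000):
    """Simulate fixed-radius point counts with distance-related heterogeneity:
    per-occasion detection follows a half-normal function of distance,
    g(r) = exp(-r^2 / (2 sigma^2)), with g(0) = 1. Returns the mean number of
    distinct birds detected over all occasions, to compare with true_n."""
    detected = np.empty(reps)
    for i in range(reps):
        # birds placed uniformly over the disc of radius w -> density of r is 2r/w^2
        r = w * np.sqrt(rng.uniform(size=true_n))
        g = np.exp(-r**2 / (2.0 * sigma**2))          # per-occasion detection prob
        p = 1.0 - (1.0 - g) ** occasions              # detected at least once
        detected[i] = (rng.uniform(size=true_n) < p).sum()
    return detected.mean()

if __name__ == "__main__":
    for sigma in (25.0, 50.0, 100.0):   # hypothetical values of the scale parameter
        mean_count = simulate_point_count(sigma=sigma)
        print(f"sigma={sigma:5.0f} m: mean count = {mean_count:6.1f} of 100 birds "
              f"(unadjusted count misses {100 - mean_count:.1f})")
```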
Determination of critical habitat for the endangered Nelson's bighorn sheep in southern California
Turner, J.C.; Douglas, C.L.; Hallum, C.R.; Krausman, P.R.; Ramey, R.R.
2004-01-01
The United States Fish and Wildlife Service's (USFWS) designation of critical habitat for the endangered Nelson's bighorn sheep (Ovis canadensis nelsoni) in the Peninsular Ranges of southern California has been controversial because of the absence of a quantitative, repeatable scientific approach to the designation of critical habitat. We used 12,411 locations of Nelson's bighorn sheep collected from 1984-1998 to evaluate habitat use within 398 km2 of the USFWS-designated critical habitat in the northern Santa Rosa Mountains, Riverside County, California. We developed a multiple logistic regression model to evaluate and predict the probability of bighorn use versus non-use of native landscapes. Habitat predictor variables included elevation, slope, ruggedness, slope aspect, proximity to water, and distance from minimum expanses of escape habitat. We used Earth Resources Data Analysis System Geographic Information System (ERDAS-GIS) software to view, retrieve, and format predictor values for input to the Statistical Analysis Systems (SAS) software. To adequately account for habitat landscape diversity, we carried out an unsupervised classification at the outset of data inquiry using a maximum-likelihood clustering scheme implemented in ERDAS. We used the strata resulting from the unsupervised classification in a stratified random sampling scheme to minimize data loads required for model development. Based on 5 predictor variables, the habitat model correctly classified >96% of observed bighorn sheep locations. Proximity to perennial water was the best predictor variable. Ninety-seven percent of the observations were within 3 km of perennial water. Exercising the model over the northern Santa Rosa Mountain study area provided probabilities of bighorn use at a 30 x 30-m pixel level. Within the 398 km2 of USFWS-designated critical habitat, only 34% had a graded probability of bighorn use to non-use ranging from ≥1:1 to 6,044:1. The remaining 66% of the study area had odds of bighorn use <1:1, i.e., it was more likely not to be used by bighorn sheep. The USFWS designation of critical habitat included areas (45 km2) of importance (2.5 to ≥40 observations per km2 per year) to Nelson's bighorn sheep and large landscapes (353 km2) that do not appear to be used (<1 observation per km2 per year).
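A hedged sketch of the kind of use/non-use logistic regression described above; the simulated pixels, predictor values and generating coefficients are hypothetical and only stand in for the published model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: rows are 30 x 30-m pixels, columns are habitat
# predictors analogous to those in the abstract (slope, ruggedness, distance
# to perennial water, distance to escape habitat); response is 1 for pixels
# with recorded bighorn use and 0 for available (non-use) pixels.
n = 1000
X = np.column_stack([
    rng.uniform(0, 45, n),       # slope (degrees)
    rng.uniform(0, 1, n),        # terrain ruggedness index
    rng.uniform(0, 6000, n),     # distance to perennial water (m)
    rng.uniform(0, 2000, n),     # distance to escape habitat (m)
])
# Hypothetical generating rule: probability of use declines with distance to water
logit = 1.5 - 0.001 * X[:, 2] - 0.0005 * X[:, 3] + 0.02 * X[:, 0]
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
p_use = model.predict_proba(X)[:, 1]          # probability of bighorn use per pixel
odds = p_use / (1 - p_use)                    # graded odds of use vs. non-use
print("fraction of pixels with odds of use >= 1:1:", np.mean(odds >= 1.0))
```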
Estimating Consequences of MMOD Penetrations on ISS
NASA Technical Reports Server (NTRS)
Evans, H.; Hyde, James; Christiansen, E.; Lear, D.
2017-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration, broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
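The outcome-tallying idea behind a Monte-Carlo consequence code can be sketched as follows; this is not MSCSurv, and the conditional outcome probabilities below are purely hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical conditional outcome probabilities given a penetration; the real
# MSCSurv models depend on hole size, module, crew response, time of day, etc.
OUTCOMES = ["LOC", "Evac", "LEV", "NEOM"]
P_OUTCOME = np.array([0.05, 0.10, 0.02, 0.83])

def simulate_consequences(n_penetrations=1_000_000):
    """Tally outcome frequencies over many simulated penetrations."""
    draws = rng.choice(len(OUTCOMES), size=n_penetrations, p=P_OUTCOME)
    counts = np.bincount(draws, minlength=len(OUTCOMES))
    return dict(zip(OUTCOMES, counts / n_penetrations))

print(simulate_consequences())
```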
Predicting the Consequences of MMOD Penetrations on the International Space Station
NASA Technical Reports Server (NTRS)
Hyde, James; Christiansen, E.; Lear, D.; Evans
2018-01-01
The threat from micrometeoroid and orbital debris (MMOD) impacts on space vehicles is often quantified in terms of the probability of no penetration (PNP). However, for large spacecraft, especially those with multiple compartments, a penetration may have a number of possible outcomes. The extent of the damage (diameter of hole, crack length or penetration depth), the location of the damage relative to critical equipment or crew, crew response, and even the time of day of the penetration are among the many factors that can affect the outcome. For the International Space Station (ISS), a Monte-Carlo style software code called Manned Spacecraft Crew Survivability (MSCSurv) is used to predict the probability of several outcomes of an MMOD penetration, broadly classified as loss of crew (LOC), crew evacuation (Evac), loss of escape vehicle (LEV), and nominal end of mission (NEOM). By generating large numbers of MMOD impacts (typically in the billions) and tracking the consequences, MSCSurv allows for the inclusion of a large number of parameters and models as well as enabling the consideration of uncertainties in the models and parameters. MSCSurv builds upon the results from NASA's Bumper software (which provides the probability of penetration and critical input data to MSCSurv) to allow analysts to estimate the probability of LOC, Evac, LEV, and NEOM. This paper briefly describes the overall methodology used by NASA to quantify LOC, Evac, LEV, and NEOM with particular emphasis on describing in broad terms how MSCSurv works and its capabilities and most significant models.
Surveillance guidelines for disease elimination: A case study of canine rabies
Townsend, Sunny E.; Lembo, Tiziana; Cleaveland, Sarah; Meslin, François X.; Miranda, Mary Elizabeth; Putra, Anak Agung Gde; Haydon, Daniel T.; Hampson, Katie
2013-01-01
Surveillance is a critical component of disease control programmes but is often poorly resourced, particularly in developing countries lacking good infrastructure and especially for zoonoses which require combined veterinary and medical capacity and collaboration. Here we examine how successful control, and ultimately disease elimination, depends on effective surveillance. We estimated that detection probabilities of <0.1 are broadly typical of rabies surveillance in endemic countries and areas without a history of rabies. Using outbreak simulation techniques we investigated how the probability of detection affects outbreak spread, and outcomes of response strategies such as time to control an outbreak, probability of elimination, and the certainty of declaring freedom from disease. Assuming realistically poor surveillance (probability of detection <0.1), we show that proactive mass dog vaccination is much more effective at controlling rabies and no more costly than campaigns that vaccinate in response to case detection. Control through proactive vaccination followed by 2 years of continuous monitoring and vaccination should be sufficient to guarantee elimination from an isolated area not subject to repeat introductions. We recommend that rabies control programmes ought to be able to maintain surveillance levels that detect at least 5% (and ideally 10%) of all cases to improve their prospects of eliminating rabies, and this can be achieved through greater intersectoral collaboration. Our approach illustrates how surveillance is critical for the control and elimination of diseases such as canine rabies and can provide minimum surveillance requirements and technical guidance for elimination programmes under a broad range of circumstances. PMID:23260376
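A small illustration of how per-case detection probability limits outbreak detection, assuming independent detection of individual cases (a simplification of the outbreak simulations used in the study):

```python
def outbreak_detection_probability(p_case, n_cases):
    """Probability that at least one case in an outbreak of n_cases is
    detected, assuming independent per-case detection probability p_case."""
    return 1.0 - (1.0 - p_case) ** n_cases

for p in (0.05, 0.10):
    for n in (5, 10, 20, 50):
        print(f"p={p:.2f}, outbreak size {n:2d}: "
              f"P(detect) = {outbreak_detection_probability(p, n):.2f}")
```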
Mattfeldt, S.D.; Bailey, L.L.; Grant, E.H.C.
2009-01-01
Monitoring programs have the potential to identify population declines and differentiate among the possible cause(s) of these declines. Recent criticisms regarding the design of monitoring programs have highlighted a failure to clearly state objectives and to address detectability and spatial sampling issues. Here, we incorporate these criticisms to design an efficient monitoring program whose goals are to determine environmental factors which influence the current distribution and measure change in distributions over time for a suite of amphibians. In designing the study we (1) specified a priori factors that may relate to occupancy, extinction, and colonization probabilities and (2) used the data collected (incorporating detectability) to address our scientific questions and adjust our sampling protocols. Our results highlight the role of wetland hydroperiod and other local covariates in the probability of amphibian occupancy. There was a change in overall occupancy probabilities for most species over the first three years of monitoring. Most colonization and extinction estimates were constant over time (years) and space (among wetlands), with one notable exception: local extinction probabilities for Rana clamitans were lower for wetlands with longer hydroperiods. We used information from the target system to generate scenarios of population change and gauge the ability of the current sampling to meet monitoring goals. Our results highlight the limitations of the current sampling design, emphasizing the need for long-term efforts, with periodic re-evaluation of the program in a framework that can inform management decisions.
Small, Robert J.; Brost, Brian M.; Hooten, Mevin B.; Castellote, Manuel; Mondragon, Jeffrey
2017-01-01
The population of beluga whales in Cook Inlet, Alaska, USA, declined by nearly half in the mid-1990s, primarily from an unsustainable harvest, and was listed as endangered in 2008. In 2014, abundance was ~340 whales, and the population trend during 1999-2014 was -1.3% yr-1. Cook Inlet beluga whales are particularly vulnerable to anthropogenic impacts, and noise that has the potential to reduce communication and echolocation range considerably has been documented in critical habitat; thus, noise was ranked as a high potential threat in the Cook Inlet beluga Recovery Plan. The current recovery strategy includes research on effects of threats potentially limiting recovery, and thus we examined the potential impact of anthropogenic noise in critical habitat, specifically, spatial displacement. Using a subset of data on anthropogenic noise and beluga detections from a 5 yr acoustic study, we evaluated the influence of noise events on beluga occupancy probability. We used occupancy models, which account for factors that affect detection probability when estimating occupancy, the first application of these models to examine the potential impacts of anthropogenic noise on marine mammal behavior. Results were inconclusive, primarily because beluga detections were relatively infrequent. Even though noise metrics (sound pressure level and noise duration) appeared in high-ranking models as covariates for occupancy probability, the data were insufficient to indicate better predictive ability beyond those models that only included environmental covariates. Future studies that implement protocols designed specifically for beluga occupancy will be most effective for accurately estimating the effect of noise on beluga displacement.
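A minimal sketch of a single-season occupancy model with a noise covariate on occupancy probability, in the spirit of the analysis described; the simulated detection histories, covariate and parameter values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)

# Hypothetical detection histories: n_sites acoustic site-periods with n_occ
# listening occasions each; x_noise is a site-level noise covariate (e.g. a
# standardized sound pressure level).
n_sites, n_occ = 200, 5
x_noise = rng.normal(size=n_sites)
psi_true = expit(0.0 - 0.8 * x_noise)            # occupancy declines with noise
z = rng.uniform(size=n_sites) < psi_true
y = (rng.uniform(size=(n_sites, n_occ)) < 0.3) & z[:, None]   # detection prob 0.3

def negloglik(theta):
    """MacKenzie-style single-season occupancy likelihood with
    logit(psi) = b0 + b1 * noise and constant detection probability p."""
    b0, b1, logit_p = theta
    psi = expit(b0 + b1 * x_noise)
    p = expit(logit_p)
    det = y.sum(axis=1)
    lik_if_occupied = p**det * (1 - p)**(n_occ - det)
    lik = psi * lik_if_occupied + (1 - psi) * (det == 0)
    return -np.log(lik).sum()

fit = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
print("estimated (b0, b1, logit p):", np.round(fit.x, 2))
```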
Validation of a temperature prediction model for heat deaths in undocumented border crossers.
Ruttan, Tim; Stolz, Uwe; Jackson-Vance, Sara; Parks, Bruce; Keim, Samuel M
2013-04-01
Heat exposure is a leading cause of death in undocumented border crossers along the Arizona-Mexico border. We performed a validation study of a weather prediction model that predicts the probability of heat-related deaths among undocumented border crossers. We analyzed a medical examiner registry cohort of undocumented border crosser heat-related deaths from January 1, 2002 to August 31, 2009 and used logistic regression to model the probability of one or more heat deaths on a given day using daily high temperature (DHT) as the predictor. At a critical threshold DHT of 40 °C, the probability of at least one heat death was 50%. The probability of a heat death along the Arizona-Mexico border for suspected undocumented border crossers is strongly associated with ambient temperature. These results can be used in prevention and response efforts to assess the daily risk of deaths among undocumented border crossers in the region.
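A sketch of the logistic temperature model implied by the abstract; the coefficients below are hypothetical, chosen only so that the 50% point falls at the reported 40 °C threshold:

```python
import numpy as np

def p_heat_death(dht_celsius, beta0=-22.0, beta1=0.55):
    """Logistic model of the probability of one or more heat deaths on a day
    with daily high temperature DHT; coefficients are illustrative only."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * dht_celsius)))

# The 50% point of a logistic model is DHT = -beta0 / beta1
print("DHT at 50% probability:", -(-22.0) / 0.55)   # = 40 degC
for t in (35, 38, 40, 43, 46):
    print(f"DHT {t} degC -> P(>=1 heat death) = {p_heat_death(t):.2f}")
```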
Constructing event trees for volcanic crises
Newhall, C.; Hoblitt, R.
2002-01-01
Event trees are useful frameworks for discussing probabilities of possible outcomes of volcanic unrest. Each branch of the tree leads from a necessary prior event to a more specific outcome, e.g., from an eruption to a pyroclastic flow. Where volcanic processes are poorly understood, probability estimates might be purely empirical - utilizing observations of past and current activity and an assumption that the future will mimic the past or follow a present trend. If processes are better understood, probabilities might be estimated from a theoretical model, either subjectively or by numerical simulations. Use of Bayes' theorem aids in the estimation of how fresh unrest raises (or lowers) the probabilities of eruptions. Use of event trees during volcanic crises can help volcanologists to critically review their analysis of hazard, and help officials and individuals to compare volcanic risks with more familiar risks. Trees also emphasize the inherently probabilistic nature of volcano forecasts, with multiple possible outcomes.
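A minimal example of the Bayes-theorem update mentioned above, with purely illustrative prior and likelihood values:

```python
def update_eruption_probability(prior, p_obs_given_eruption, p_obs_given_no_eruption):
    """Bayes' theorem: posterior probability of eruption given a new
    observation of unrest (e.g. a seismic swarm)."""
    numer = p_obs_given_eruption * prior
    denom = numer + p_obs_given_no_eruption * (1.0 - prior)
    return numer / denom

# Hypothetical numbers, purely for illustration: background eruption probability
# of 0.02, and a sign of unrest seen before 60% of eruptions but during only 5%
# of non-eruptive periods.
posterior = update_eruption_probability(0.02, 0.60, 0.05)
print(f"posterior eruption probability: {posterior:.3f}")   # about 0.20
```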
Computing Earthquake Probabilities on Global Scales
NASA Astrophysics Data System (ADS)
Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.
2016-03-01
Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
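A sketch of the count-to-probability conversion via a Weibull law, as described; the scale and shape parameters and the count window are hypothetical:

```python
import math

def large_event_probability(n_small, delta_n, scale, shape):
    """Conditional probability that the next large event occurs within the
    next delta_n small events, given that n_small small events have elapsed
    since the last large event, under a Weibull law in the count variable."""
    def cdf(n):
        return 1.0 - math.exp(-((n / scale) ** shape))
    return (cdf(n_small + delta_n) - cdf(n_small)) / (1.0 - cdf(n_small))

# Hypothetical Weibull parameters, for illustration only.
for n in (100, 500, 1000):
    p = large_event_probability(n, delta_n=100, scale=800.0, shape=1.5)
    print(f"{n:4d} small events since last large event -> "
          f"P(large event within next 100) = {p:.2f}")
```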
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
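For context, a minimal Fellegi-Sunter-style weighting and threshold step, which is the standard machinery that the estimated match probabilities and thresholds feed into; the field names, m/u probabilities and threshold below are hypothetical, and the Bloom-filter encoding and EM estimation themselves are not reproduced here:

```python
import numpy as np

# Hypothetical m- and u-probabilities for three comparison fields
# (e.g. name, date of birth, postcode): m = P(agree | true match),
# u = P(agree | non-match). In the paper these are estimated by EM on
# privacy-preserved (Bloom-filter) fields; here they are fixed for illustration.
m = np.array([0.95, 0.90, 0.85])
u = np.array([0.05, 0.01, 0.10])

def match_weight(agreement):
    """Sum of log2 likelihood ratios over fields for one record pair,
    where agreement is a 0/1 vector of field agreements."""
    agreement = np.asarray(agreement, dtype=float)
    w_agree = np.log2(m / u)
    w_disagree = np.log2((1 - m) / (1 - u))
    return float(agreement @ w_agree + (1 - agreement) @ w_disagree)

threshold = 5.0   # hypothetical cut-off; the paper estimates this from the EM fit
for pair in ([1, 1, 1], [1, 0, 1], [0, 0, 1]):
    w = match_weight(pair)
    print(pair, f"weight={w:+.2f}", "link" if w >= threshold else "non-link")
```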
WATER TREATMENT AND EDUCATION IN VILLAHERMOSA, MEXICO
The interdisciplinary P3 team included mechanical, civil and environmental, and electrical engineers. This was critical to overcome the technical challenges presented by this project. A successful full-scale system was developed that greatly reduces the probability of contract...
Identifying mechanistic indicators of childhood asthma from blood gene expression
Asthmatic individuals have been identified as a susceptible subpopulation for air pollutants. However, asthma represents a syndrome with multiple probable etiologies, and the identification of these asthma endotypes is critical to accurately define the most susceptible subpopula...
Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift
Zhao, Lei; Yue, Xingye; Waxman, David
2013-01-01
A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
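For comparison with the complete diffusion solution, the underlying discrete process can be iterated exactly: a Wright-Fisher transition matrix conserves total probability and automatically accumulates mass at loss and fixation. This sketch is not the authors' numerical diffusion scheme; the population size and starting frequency are arbitrary:

```python
import numpy as np
from scipy.stats import binom

def wright_fisher_distribution(n_pop=50, p0=0.5, generations=200):
    """Iterate the full allele-count probability vector of a neutral
    Wright-Fisher population with n_pop gene copies. Total probability is
    conserved exactly, and the mass accumulating at counts 0 and n_pop gives
    the time-dependent probabilities of loss and fixation, the discrete
    analogue of a 'complete' diffusion solution."""
    counts = np.arange(n_pop + 1)
    # transition matrix: next count ~ Binomial(n_pop, current frequency)
    T = binom.pmf(counts[None, :], n_pop, counts[:, None] / n_pop)
    prob = np.zeros(n_pop + 1)
    prob[int(round(p0 * n_pop))] = 1.0
    for _ in range(generations):
        prob = prob @ T
    return prob

prob = wright_fisher_distribution()
print("total probability:", prob.sum())          # stays 1 to numerical precision
print("P(loss), P(fixation):", prob[0], prob[-1])
```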
Grossling, Bernardo F.
1975-01-01
Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then be viewed as phenomena subject to statistical uncertainty and responsive to changes in economic and technologic factors.
Liu, Xuewu; Huang, Yuxiao; Liang, Jiao; Zhang, Shuai; Li, Yinghui; Wang, Jun; Shen, Yan; Xu, Zhikai; Zhao, Ya
2014-11-30
The invasion of red blood cells (RBCs) by malarial parasites is an essential step in the life cycle of Plasmodium falciparum. Human-parasite surface protein interactions play a critical role in this process. Although several interactions between human and parasite proteins have been discovered, the mechanism related to invasion remains poorly understood because numerous human-parasite protein interactions have not yet been identified. High-throughput screening experiments are not feasible for malarial parasites due to difficulty in expressing the parasite proteins. Here, we performed computational prediction of the protein-protein interactions (PPIs) involved in malaria parasite invasion to elucidate the mechanism by which invasion occurs. In this study, an expectation maximization algorithm was used to estimate the probabilities of domain-domain interactions (DDIs). Estimates of DDI probabilities were then used to infer PPI probabilities. We found that our prediction performance was better than that based on the information of D. melanogaster alone when information related to the six species was used. Prediction performance was assessed using protein interaction data from S. cerevisiae, indicating that the predicted results were reliable. We then used the estimates of DDI probabilities to infer interactions between 490 parasite and 3,787 human membrane proteins. A small-scale dataset was used to illustrate the usability of our method in predicting interactions between human and parasite proteins. The positive predictive value (PPV) was lower than that observed in S. cerevisiae. We integrated gene expression data to improve prediction accuracy and to reduce false positives. We identified 80 membrane proteins highly expressed in the schizont stage by the fast Fourier transform method. Approximately 221 erythrocyte membrane proteins were identified using published mass spectral datasets. A network consisting of 205 interactions was predicted. Results of network analysis suggest that SNARE proteins of parasites and APP of humans may function in the invasion of RBCs by parasites. We predicted a small-scale PPI network that may be involved in parasite invasion of RBCs by integrating DDI information and expression profiles. Experimental studies should be conducted to validate the predicted interactions. The predicted PPIs help elucidate the mechanism of parasite invasion and provide directions for future experimental investigations.
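A common way to turn domain-domain interaction (DDI) probabilities into a protein-protein interaction probability is to assume the proteins interact if at least one of their domain pairs does; a minimal sketch with hypothetical DDI probabilities:

```python
from functools import reduce

def ppi_probability(ddi_probs):
    """Probability that two proteins interact, given the estimated interaction
    probabilities of all domain pairs formed between them, assuming the
    proteins interact if at least one domain pair interacts."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), ddi_probs, 1.0)

# Hypothetical domain-pair probabilities between one parasite surface protein
# and one erythrocyte membrane protein.
print(ppi_probability([0.10, 0.30, 0.05]))   # = 1 - 0.9*0.7*0.95, about 0.40
```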
The one-dimensional minesweeper game: What are your chances of winning?
NASA Astrophysics Data System (ADS)
Rodríguez-Achach, M.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.; Huerta-Quintanilla, R.; Canto-Lugo, E.
2016-04-01
Minesweeper is a famous computer game usually consisting of a two-dimensional lattice, where cells can be empty or mined and gamers are required to locate the mines without dying. Even if minesweeper seems to be a very simple system, it has some complex and interesting properties, such as NP-completeness. In this paper, for the one-dimensional case, given a lattice of n cells and m mines, we calculate the winning probability. This probability is also estimated by numerical simulations. We also find, by means of these simulations, that there exists a critical density of mines that minimizes the probability of winning the game. Analytical results and simulations are compared, showing very good agreement.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi
2015-11-01
We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ . The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ =0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞ ). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α =(log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N≤ 10).
NASA Astrophysics Data System (ADS)
Sergeenko, N. P.
2017-11-01
An adequate statistical method should be developed in order to predict probabilistically the range of ionospheric parameters. This problem is solved in this paper. The time series of the critical frequency of the F2 layer, foF2(t), were subjected to statistical processing. For the obtained samples {δfoF2}, statistical distributions and invariants up to the fourth order are calculated. The analysis shows that the distributions differ from the Gaussian law during disturbances. At sufficiently small probability levels, there are arbitrarily large deviations from the model of the normal process. Therefore, an attempt is made to describe the statistical samples {δfoF2} based on the Poisson model. For the studied samples, the exponential characteristic function is selected under the assumption that the time series are a superposition of some deterministic and random processes. Using the Fourier transform, the characteristic function is transformed into a nonholomorphic excessive-asymmetric probability-density function. The statistical distributions of the samples {δfoF2} calculated for the disturbed periods are compared with the obtained model distribution function. According to Kolmogorov's criterion, the probabilities of the coincidence of a posteriori distributions with the theoretical ones are P ≈ 0.7-0.9. The analysis makes it possible to conclude that a model based on a Poisson random process is applicable for the statistical description of the variations {δfoF2} and for probabilistic estimates of their range during heliogeophysical disturbances.
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-01-01
Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from true stabilized PDF that resulted from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude, and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Results: Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
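A minimal sketch of the grouping step (step 2) described in the Methods; the binning scheme, number of bins and the simulated amplitudes/periods are assumptions of this illustration rather than details taken from the paper:

```python
import numpy as np

def main_breathing_cycles(amplitudes, periods, n_bins=4, min_fraction=0.10):
    """Group individual breathing cycles by amplitude and period: cycles are
    binned on a 2D amplitude/period grid, any bin holding more than
    min_fraction of all cycles is kept as a main breathing pattern, and each
    pattern is summarized by the mean amplitude/period of its members together
    with its weighting (fraction of all cycles)."""
    amplitudes = np.asarray(amplitudes, float)
    periods = np.asarray(periods, float)
    a_bins = np.linspace(amplitudes.min(), amplitudes.max(), n_bins + 1)
    p_bins = np.linspace(periods.min(), periods.max(), n_bins + 1)
    a_idx = np.clip(np.digitize(amplitudes, a_bins) - 1, 0, n_bins - 1)
    p_idx = np.clip(np.digitize(periods, p_bins) - 1, 0, n_bins - 1)
    patterns = []
    for key in set(zip(a_idx, p_idx)):
        members = (a_idx == key[0]) & (p_idx == key[1])
        weight = members.mean()
        if weight > min_fraction:
            patterns.append({"mean_amplitude": amplitudes[members].mean(),
                             "mean_period": periods[members].mean(),
                             "weight": weight})
    return patterns

# Hypothetical cycle-by-cycle amplitudes (cm) and periods (s)
rng = np.random.default_rng(7)
amp = np.concatenate([rng.normal(1.0, 0.1, 60), rng.normal(1.6, 0.1, 40)])
per = np.concatenate([rng.normal(4.0, 0.3, 60), rng.normal(5.0, 0.3, 40)])
for pattern in main_breathing_cycles(amp, per):
    print(pattern)
```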
Strengthening Connections between Dendrohydrology and Water Management in the Mediterranean Basin
NASA Astrophysics Data System (ADS)
Touchan, R.; Freitas, R. J.
2017-12-01
Dendrochronology can provide the knowledge upon which to base sound decisions for water resources. In general, water managers are limited to using short continuous instrumental records for forecasting streamflows and reservoir levels. Longer hydrological records are required. Proxy data such as annual tree-ring growth provide us with knowledge of the past frequency and severity of climatic anomalies, such as drought and wet periods, and can be used to improve probability calculations of future events. By improving the probability input to these plans, water managers can use this information for water allocations, water conservation measures, and water efficiency methods. Accurate planning is critical in water-deficit regions with histories of conflict over land and limited water. Here, we link the science of dendrohydrology with water management, and identify appropriate forums for scientists, policy decision makers, and water managers to collaborate in translating science into effective actions anticipating extreme events such as drought or floods. We will present examples of several dendrohydrological reconstructions from the eastern Mediterranean and North Africa as input for water management plans. Different disciplines need to work together, and we identify possible mechanisms for collaboration in order to meet the crucial need to use scarce water wisely.
NASA Astrophysics Data System (ADS)
Yan, Ying; Zhang, Shen; Tang, Jinjun; Wang, Xiaofei
2017-07-01
Discovering dynamic characteristics in traffic flow is a significant step in designing effective traffic management and control strategies for relieving traffic congestion in urban cities. A new method based on complex network theory is proposed to study multivariate traffic flow time series. The data were collected from loop detectors on a freeway during one year. In order to construct a complex network from the original traffic flow, a weighted Frobenius norm is adopted to estimate the similarity between multivariate time series, and Principal Component Analysis is implemented to determine the weights. We discuss how to select the optimal critical threshold for networks at different hours in terms of the cumulative probability distribution of degree. Furthermore, two statistical properties of networks, normalized network structure entropy and cumulative probability of degree, are utilized to explore hourly variation in traffic flow. The results demonstrate that these two statistical quantities express patterns similar to traffic flow parameters, with morning and evening peak hours. Accordingly, we detect three traffic states, trough, peak and transitional hours, according to the correlation between the two aforementioned properties. The resulting state classification can represent hourly fluctuation in traffic flow, as shown by analyzing annual average hourly values of traffic volume, occupancy and speed in the corresponding hours.
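A hedged sketch of the network construction and entropy calculation described above; a plain Frobenius norm stands in for the weighted version, the threshold is arbitrary, and the entropy normalization used here is one common convention that may differ in detail from the paper's:

```python
import numpy as np

def normalized_structure_entropy(adjacency):
    """Normalized network structure entropy based on the degree sequence:
    I_i = k_i / sum(k), E = -sum(I_i ln I_i) / ln(N). One common definition."""
    k = adjacency.sum(axis=1)
    I = k / k.sum()
    I = I[I > 0]
    return float(-(I * np.log(I)).sum() / np.log(len(k)))

def network_from_series(windows, threshold):
    """Build an undirected network whose nodes are multivariate time-series
    windows; connect two nodes when their distance (here a plain Frobenius
    norm, standing in for the weighted version) is below the critical threshold."""
    n = len(windows)
    A = np.zeros((n, n), int)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(windows[i] - windows[j]) < threshold:
                A[i, j] = A[j, i] = 1
    return A

rng = np.random.default_rng(5)
windows = [rng.normal(size=(12, 3)) for _ in range(40)]   # 12 samples x 3 variables
A = network_from_series(windows, threshold=8.0)
print("normalized structure entropy:", round(normalized_structure_entropy(A), 3))
```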
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
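The core comparison, failure probability of a 2-of-3 voting system under independent versus positively correlated version failures, can be sketched as follows; the correlation model is a crude illustrative stand-in, not the dependent-failure model estimated in the experiment:

```python
from math import comb
import numpy as np

def p_majority_fails_independent(p, n_versions=3):
    """Probability that a majority of n_versions fail on a given input when
    version failures are independent with common failure probability p."""
    return sum(comb(n_versions, k) * p**k * (1 - p)**(n_versions - k)
               for k in range(n_versions + 1) if k > n_versions / 2)

def p_majority_fails_correlated(p, spread, n_versions=3, n_inputs=200_000, seed=0):
    """Monte Carlo sketch of dependent failures: each input has a shared
    'difficulty' that shifts every version's failure probability together,
    a crude stand-in for coincident design faults."""
    rng = np.random.default_rng(seed)
    difficulty = rng.uniform(-0.5, 0.5, size=n_inputs)
    p_i = np.clip(p + spread * difficulty, 0.0, 1.0)
    fails = rng.uniform(size=(n_inputs, n_versions)) < p_i[:, None]
    return (fails.sum(axis=1) > n_versions / 2).mean()

p = 0.01
print("independent 2-of-3 failure probability:", p_majority_fails_independent(p))
print("correlated  2-of-3 failure probability:", p_majority_fails_correlated(p, spread=0.05))
```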
Saichev, A; Sornette, D
2005-05-01
Using the epidemic-type aftershock sequence (ETAS) branching model of triggered seismicity, we apply the formalism of generating probability functions to calculate exactly the average difference between the magnitude of a mainshock and the magnitude of its largest aftershock over all generations. This average magnitude difference is found empirically to be independent of the mainshock magnitude and equal to 1.2, a universal behavior known as Båth's law. Our theory shows that Båth's law holds only sufficiently close to the critical regime of the ETAS branching process. Allowing for error bars of ±0.1 for Båth's constant value around 1.2, our exact analytical treatment of Båth's law provides new constraints on the productivity exponent alpha and the branching ratio n: 0.9 ≲ alpha ≤ 1. We propose a method for measuring alpha based on the predicted renormalization of the Gutenberg-Richter distribution of the magnitudes of the largest aftershock. We also introduce the "second Båth law for foreshocks": the probability that a main earthquake turns out to be the foreshock does not depend on its magnitude ρ.
NASA Astrophysics Data System (ADS)
Akram, Muhammad Farooq Bin
The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in the case of lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSM). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher order technology interactions. A test case for quantification of epistemic uncertainty was selected on a large-scale problem: a combined cycle power generation system. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge, as compared to deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher order technology interactions and improving the predicted system performance.
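As an illustration of the evidence-theory machinery referred to above, Dempster's rule of combination for two expert opinions can be sketched as follows; the frame of discernment and the mass assignments are hypothetical:

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination for two basic probability assignments
    over frozenset focal elements; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical expert opinions on a technology's impact level {low, med, high}:
# each expert assigns mass to sets of levels rather than point probabilities,
# which preserves the epistemic (lack-of-knowledge) character of the elicitation.
frame = frozenset({"low", "med", "high"})
expert1 = {frozenset({"med", "high"}): 0.7, frame: 0.3}
expert2 = {frozenset({"high"}): 0.5, frozenset({"low", "med"}): 0.2, frame: 0.3}
print(combine_dempster(expert1, expert2))
```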
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability associated with contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at timely and effective detection of (un)intentional intrusion events. However, these available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While being simple in implementation, this assumption may not conform to the fact that the nodal contamination probability may be significantly regionally varied owing to variations in population density and user properties. Furthermore, the low computational efficiency is another important factor that has seriously hampered the practical applications of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method to explicitly account for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for the WDS, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
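A hedged sketch of how non-uniform nodal contamination probabilities enter a sensor-placement objective, using a simple greedy heuristic rather than the multi-objective optimization method of the paper; the detection matrix and probabilities are synthetic:

```python
import numpy as np

def greedy_sensor_placement(detects, node_prob, n_sensors):
    """Greedy sketch: choose sensor locations one at a time to maximize the
    expected event-detection probability, where detects[s, j] = 1 if a sensor
    at candidate location s would detect a contamination event starting at
    node j, and node_prob[j] is the (non-uniform) probability that the event
    starts at node j."""
    chosen, covered = [], np.zeros(detects.shape[1], dtype=bool)
    for _ in range(n_sensors):
        gains = [node_prob[~covered & (detects[s] == 1)].sum()
                 for s in range(detects.shape[0])]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= detects[best] == 1
    return chosen, float(node_prob[covered].sum())

rng = np.random.default_rng(2)
n_candidates, n_nodes = 20, 200
detects = (rng.uniform(size=(n_candidates, n_nodes)) < 0.15).astype(int)
node_prob = rng.dirichlet(np.ones(n_nodes))   # e.g. proportional to population density
sensors, expected_detection = greedy_sensor_placement(detects, node_prob, n_sensors=5)
print("sensors:", sensors, "expected detection probability:", round(expected_detection, 3))
```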
Orlandini, S; Pasquini, B; Stocchero, M; Pinzauti, S; Furlanetto, S
2014-04-25
The development of a capillary electrophoresis (CE) method for the assay of almotriptan (ALM) and its main impurities using an integrated Quality by Design and mixture-process variable (MPV) approach is described. A scouting phase was initially carried out by evaluating different CE operative modes, including the addition of pseudostationary phases and additives to the background electrolyte, in order to approach the analytical target profile. This step made it possible to select normal polarity microemulsion electrokinetic chromatography (MEEKC) as operative mode, which allowed a good selectivity to be achieved in a low analysis time. On the basis of a general Ishikawa diagram for MEEKC methods, a screening asymmetric matrix was applied in order to screen the effects of the process variables (PVs) voltage, temperature, buffer concentration and buffer pH, on critical quality attributes (CQAs), represented by critical separation values and analysis time. A response surface study was then carried out considering all the critical process parameters, including both the PVs and the mixture components (MCs) of the microemulsion (borate buffer, n-heptane as oil, sodium dodecyl sulphate/n-butanol as surfactant/cosurfactant). The values of PVs and MCs were simultaneously changed in a MPV study, making it possible to find significant interaction effects. The design space (DS) was defined as the multidimensional combination of PVs and MCs where the probability for the different considered CQAs to be acceptable was higher than a quality level π=90%. DS was identified by risk of failure maps, which were drawn on the basis of Monte-Carlo simulations, and verification points spanning the design space were tested. Robustness testing of the method, performed by a D-optimal design, and system suitability criteria allowed a control strategy to be designed. The optimized method was validated following ICH Guideline Q2(R1) and was applied to a real sample of ALM coated tablets. Copyright © 2014 Elsevier B.V. All rights reserved.
The relationship between species detection probability and local extinction probability
Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.
2004-01-01
In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
Cluster membership probability: polarimetric approach
NASA Astrophysics Data System (ADS)
Medhi, Biman J.; Tamura, Motohide
2013-04-01
Interstellar polarimetric data of the six open clusters Hogg 15, NGC 6611, NGC 5606, NGC 6231, NGC 5749 and NGC 6250 have been used to estimate the membership probability for the stars within them. For proper-motion member stars, the membership probability estimated using the polarimetric data is in good agreement with the proper-motion cluster membership probability. However, for proper-motion non-member stars, the membership probability estimated by the polarimetric method is in total disagreement with the proper-motion cluster membership probability. The inconsistencies in the determined memberships may be because of the fundamental differences between the two methods of determination: one is based on stellar proper motion in space and the other is based on selective extinction of the stellar output by the asymmetric aligned dust grains present in the interstellar medium. The results and analysis suggest that the scatter of the Stokes vectors q (per cent) and u (per cent) for the proper-motion member stars depends on the interstellar and intracluster differential reddening in the open cluster. It is found that this method could be used to estimate the cluster membership probability if we have additional polarimetric and photometric information for a star to identify it as a probable member/non-member of a particular cluster, such as the maximum wavelength value (λmax), the unit weight error of the fit (σ1), the dispersion in the polarimetric position angles (ε̄), reddening (E(B - V)) or the differential intracluster reddening (ΔE(B - V)). This method could also be used to estimate the membership probability of known member stars having no membership probability as well as to resolve disagreements about membership among different proper-motion surveys.
Ballhausen, Hendrik; Belka, Claus
2017-03-01
To provide a rule for the agreement or disagreement of the Poisson approximation (PA) and the Zaider-Minerbo formula (ZM) on the ranking of treatment alternatives in terms of tumor control probability (TCP) in the linear quadratic model. A general criterion involving a critical cell birth rate was formally derived. For demonstration, the criterion was applied to a distinct radiobiological model of fast-growing head and neck tumors and a respective range of 22 conventional and nonconventional head and neck schedules. There is a critical cell birth rate b_crit below which PA and ZM agree on which one of two alternative treatment schemes with single-cell survival curves S'(t) and S''(t) offers the better TCP: [Formula: see text]. For cell birth rates b above this critical value, PA and ZM disagree if and only if b > b_crit > 0. In the case of the exemplary head and neck schedules, only 16 out of 231 possible combinations, or 7%, were found where PA and ZM disagreed. In all 231 cases the prediction of the criterion was numerically confirmed, and cell birth rates at crossovers between schedules matched the calculated critical cell birth rates. TCP values estimated by PA and ZM almost never numerically coincide. Still, in many cases both formulas at least agree about which one of two alternative fractionation schemes offers the better TCP. In the case of fast-growing tumors featuring a high cell birth rate, however, ZM may suggest a re-evaluation of treatment options.
Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere
2011-01-01
Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested: the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saloman, Edward B.; Kramida, Alexander
2017-08-01
The energy levels, observed spectral lines, and transition probabilities of the neutral vanadium atom, V I, have been compiled. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentage compositions for the levels are included where available, as well as wavelengths calculated from the energy levels (Ritz wavelengths). Wavelengths are reported for 3985 transitions, and 549 energy levels are determined. The observed relative intensities, normalized to a common scale, are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Victoria; Kishan, Amar U.; Cao, Minsong
2014-03-15
Purpose: To demonstrate a new method of evaluating the dose response of treatment-induced lung radiographic injury post-SBRT (stereotactic body radiotherapy) treatment and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed from the probability distribution for dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for such distribution. Geometric analysis was performed to interpret such parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% of the prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by the planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator for a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose has been demonstrated. Bimodal behavior was observed in the dose distribution of lung injury after SBRT. Novel statistical and geometrical analysis has shown that the systematically quantified low-dose peak at approximately 35 Gy, or 70% of the prescription dose, is a good indication of a critical dose for injury. The determined critical dose of 35 Gy resembles the critical dose volume limit of 30 Gy for the ipsilateral bronchus in RTOG 0618 and results from previous studies. The authors seek to further extend this improved analysis method to a larger cohort to better understand the interpatient variation in radiographic lung injury dose response post-SBRT.
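As a rough illustration of the mixture-Gaussian step described above, the following sketch fits a two-component Gaussian mixture to voxel doses inside a hypothetical injury region using scikit-learn; the dose values, prescription level, and component locations are synthetic placeholders, not the patients' data or the authors' EM implementation.

```python
# Sketch: fit a two-component Gaussian mixture to voxel doses inside an
# injury region, as one way to reproduce the bimodal analysis described
# above. Dose values below are synthetic placeholders, not patient data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical voxel doses (Gy): a low-dose peak near 35 Gy and a
# high-dose peak near 107% of an assumed 50 Gy prescription.
doses = np.concatenate([rng.normal(35.0, 4.0, 4000),
                        rng.normal(53.5, 2.5, 2500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(doses)
means = gmm.means_.ravel()
weights = gmm.weights_
low = means.argmin()
print(f"low-dose component mean  ~ {means[low]:.1f} Gy (weight {weights[low]:.2f})")
print(f"high-dose component mean ~ {means[1 - low]:.1f} Gy (weight {weights[1 - low]:.2f})")
```

In such a fit, the mean of the lower component plays the role of the candidate critical-dose indicator discussed in the abstract.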
A Solution Space for a System of Null-State Partial Differential Equations: Part 4
NASA Astrophysics Data System (ADS)
Flores, Steven M.; Kleban, Peter
2015-01-01
This article is the last of four that completely and rigorously characterize a solution space S_N for a homogeneous system of 2N + 3 linear partial differential equations in 2N variables that arises in conformal field theory (CFT) and multiple Schramm-Löwner evolution (SLE). The system comprises 2N null-state equations and three conformal Ward identities that govern CFT correlation functions of 2N one-leg boundary operators. In the first two articles (Flores and Kleban in Commun Math Phys, 2012; Flores and Kleban, in Commun Math Phys, 2014), we use methods of analysis and linear algebra to prove that dim S_N ≤ C_N, with C_N the Nth Catalan number. Using these results in the third article (Flores and Kleban, in Commun Math Phys, 2013), we prove that dim S_N = C_N and that S_N is spanned by (real-valued) solutions constructed with the Coulomb gas (contour integral) formalism of CFT. In this article, we use these results to prove some facts concerning the solution space S_N. First, we show that each of its elements equals a sum of at most two distinct Frobenius series in powers of the difference between two adjacent points (unless 8/κ is odd, in which case a logarithmic term may appear). This establishes an important element in the operator product expansion for one-leg boundary operators, assumed in CFT. We also identify particular elements of S_N, which we call connectivity weights, and exploit their special properties to conjecture a formula for the probability that the curves of a multiple-SLE process join in a particular connectivity. This leads to new formulas for crossing probabilities of critical lattice models inside polygons with a free/fixed side-alternating boundary condition, which we derive in Flores et al. (Partition functions and crossing probabilities for critical systems inside polygons, in preparation). Finally, we propose a reason for why the exceptional speeds [certain values that appeared in the analysis of the Coulomb gas solutions in Flores and Kleban (Commun Math Phys, 2013)] and the minimal models of CFT are connected.
Theory of Aircraft Collision-Avoidance System Design and Evaluation
DOT National Transportation Integrated Search
1971-05-01
The problem of aircraft anti-collision system design and evaluation is discussed in this work. Two evaluation criteria, conflict ratio and probability of missed critical alarm are formulated and are found to be independent of both traffic density and...
A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves
NASA Astrophysics Data System (ADS)
Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang
2018-03-01
The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gaussian stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.
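For orientation only, here is a minimal Monte Carlo sketch of estimating a roll-angle PDF from a schematic single-degree-of-freedom rolling equation driven by white noise; the equation form, coefficients, and noise model are illustrative assumptions, not the paper's fitted righting arm or CARMA (2, 1) excitation model, and the semi-analytical envelope-averaging step is not reproduced.

```python
# Sketch: Monte Carlo estimate of a roll-angle PDF for a schematic
# softening-roll equation  phi'' + 2*zeta*phi' + phi - c*phi**3 = sigma*dW/dt,
# integrated by Euler-Maruyama. Coefficients are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
zeta, c, sigma = 0.05, 0.1, 0.2          # damping, softening, noise level
dt, n_steps, n_paths = 0.01, 20000, 200

phi = np.zeros(n_paths)
vel = np.zeros(n_paths)
samples = []
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    acc = -2 * zeta * vel - phi + c * phi**3     # damping + softening restoring
    vel += acc * dt + sigma * dW
    phi += vel * dt
    if k > n_steps // 2:                          # discard the transient
        samples.append(phi.copy())

samples = np.concatenate(samples)
hist, edges = np.histogram(samples, bins=60, density=True)
print("estimated PDF peak near phi =", edges[hist.argmax()])
```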
Nonprobability and probability-based sampling strategies in sexual science.
Catania, Joseph A; Dolcini, M Margaret; Orellana, Roberto; Narayanan, Vasudah
2015-01-01
With few exceptions, much of sexual science builds upon data from opportunistic nonprobability samples of limited generalizability. Although probability-based studies are considered the gold standard in terms of generalizability, they are costly to apply to many of the hard-to-reach populations of interest to sexologists. The present article discusses recent conclusions by sampling experts that have relevance to sexual science that advocates for nonprobability methods. In this regard, we provide an overview of Internet sampling as a useful, cost-efficient, nonprobability sampling method of value to sex researchers conducting modeling work or clinical trials. We also argue that probability-based sampling methods may be more readily applied in sex research with hard-to-reach populations than is typically thought. In this context, we provide three case studies that utilize qualitative and quantitative techniques directed at reducing limitations in applying probability-based sampling to hard-to-reach populations: indigenous Peruvians, African American youth, and urban men who have sex with men (MSM). Recommendations are made with regard to presampling studies, adaptive and disproportionate sampling methods, and strategies that may be utilized in evaluating nonprobability and probability-based sampling methods.
[Optimize dropping process of Ginkgo biloba dropping pills by using design space approach].
Shen, Ji-Chen; Wang, Qing-Qing; Chen, An; Pan, Fang-Lai; Gong, Xing-Chu; Qu, Hai-Bin
2017-07-01
In this paper, a design space approach was applied to optimize the dropping process of Ginkgo biloba dropping pills. Firstly, potential critical process parameters and potential process critical quality attributes were determined through literature research and pre-experiments. Secondly, experiments were carried out according to Box-Behnken design. Then the critical process parameters and critical quality attributes were determined based on the experimental results. Thirdly, second-order polynomial models were used to describe the quantitative relationships between critical process parameters and critical quality attributes. Finally, a probability-based design space was calculated and verified. The verification results showed that efficient production of Ginkgo biloba dropping pills can be guaranteed by operating within the design space parameters. The recommended operation ranges for the critical dropping process parameters of Ginkgo biloba dropping pills were as follows: dropping distance of 5.5-6.7 cm, and dropping speed of 59-60 drops per minute, providing a reference for industrial production of Ginkgo biloba dropping pills. Copyright© by the Chinese Pharmaceutical Association.
Chen, Shyi-Ming; Chen, Shen-Wen
2015-03-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
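A minimal sketch of the trend-probability bookkeeping described above (counting down/equal/up outcomes within each relationship group) might look as follows; the group keys and observations are hypothetical, and the fuzzification and forecasting steps are omitted.

```python
# Sketch: estimate down/equal/up trend probabilities within each
# fuzzy-trend logical relationship group. Groups and trends here are
# toy placeholders for the two-factors second-order relationships.
from collections import Counter, defaultdict

# (group_key, observed_trend) pairs extracted from historical data (hypothetical)
history = [("A1,B2->A2", "up"), ("A1,B2->A2", "up"), ("A1,B2->A2", "down"),
           ("A3,B1->A2", "equal"), ("A3,B1->A2", "up")]

counts = defaultdict(Counter)
for group, trend in history:
    counts[group][trend] += 1

for group, ctr in counts.items():
    total = sum(ctr.values())
    probs = {t: ctr[t] / total for t in ("down", "equal", "up")}
    print(group, probs)
```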
Rainfall frequency analysis for ungauged sites using satellite precipitation products
NASA Astrophysics Data System (ADS)
Gado, Tamer A.; Hsu, Kuolin; Sorooshian, Soroosh
2017-11-01
The occurrence of extreme rainfall events and their impacts on hydrologic systems and society are critical considerations in the design and management of a large number of water resources projects. As precipitation records are often limited or unavailable at many sites, it is essential to develop better methods for regional estimation of extreme rainfall at these partially-gauged or ungauged sites. In this study, an innovative method for regional rainfall frequency analysis for ungauged sites is presented. The new method (hereafter called RRFA-S) is based on corrected annual maximum series obtained from a satellite precipitation product (e.g., PERSIANN-CDR). The probability matching method (PMM) is used here for bias correction to match the CDF of the satellite-based precipitation data with that of the gauged data. The RRFA-S method was assessed through a comparative study with the traditional index flood method using the available annual maximum series of daily rainfall in two different regions in the USA (11 sites in Colorado and 18 sites in California). The leave-one-out cross-validation technique was used to represent the ungauged site condition. The results of this numerical application show that the quantile estimates obtained from the new approach are more accurate and more robust than those given by the traditional index flood method.
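The probability matching idea can be sketched as a quantile-mapping step, assuming synthetic gauge and satellite series; this is an illustration of the general PMM concept, not the PERSIANN-CDR processing used in the study.

```python
# Sketch: bias-correct satellite rainfall by probability matching
# (quantile mapping): map each satellite value to the gauge value with
# the same non-exceedance probability. Data below are synthetic.
import numpy as np

rng = np.random.default_rng(2)
gauge = rng.gamma(shape=2.0, scale=10.0, size=3000)             # "observed" daily rain
satellite = 0.7 * rng.gamma(shape=2.0, scale=10.0, size=3000)   # biased product

def probability_match(x, sat_ref, gauge_ref):
    """Map value(s) x through the satellite CDF and the gauge inverse CDF."""
    probs = np.searchsorted(np.sort(sat_ref), x) / len(sat_ref)
    probs = np.clip(probs, 0.0, 1.0)
    return np.quantile(gauge_ref, probs)

corrected = probability_match(satellite, satellite, gauge)
print("raw satellite mean:", satellite.mean().round(2))
print("corrected mean:    ", corrected.mean().round(2),
      "gauge mean:", gauge.mean().round(2))
```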
Ensemble of trees approaches to risk adjustment for evaluating a hospital's performance.
Liu, Yang; Traskin, Mikhail; Lorch, Scott A; George, Edward I; Small, Dylan
2015-03-01
A commonly used method for evaluating a hospital's performance on an outcome is to compare the hospital's observed outcome rate to the hospital's expected outcome rate given its patient (case) mix and service. The process of calculating the hospital's expected outcome rate given its patient mix and service is called risk adjustment (Iezzoni 1997). Risk adjustment is critical for accurately evaluating and comparing hospitals' performances, since we would not want to unfairly penalize a hospital just because it treats sicker patients. The key to risk adjustment is accurately estimating the probability of an outcome given patient characteristics. For cases with binary outcomes, the method that is commonly used in risk adjustment is logistic regression. In this paper, we consider ensemble of trees methods as alternatives for risk adjustment, including random forests and Bayesian additive regression trees (BART). Both random forests and BART are modern machine learning methods that have been shown recently to have excellent performance for prediction of outcomes in many settings. We apply these methods to carry out risk adjustment for the performance of neonatal intensive care units (NICU). We show that these ensemble of trees methods outperform logistic regression in predicting mortality among babies treated in the NICU, and provide a superior method of risk adjustment compared to logistic regression.
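A minimal sketch of risk adjustment with logistic regression versus a random forest (standing in for the ensemble-of-trees idea) is shown below; the patient features, hospitals, and outcome model are synthetic, and BART is not included.

```python
# Sketch: risk adjustment with logistic regression vs a random forest.
# The expected outcome rate for a hospital is the mean predicted
# probability over its patients; synthetic data stand in for real case mix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 4))                      # patient characteristics
hospital = rng.integers(0, 5, size=n)            # hospital id
logit = -2.0 + X[:, 0] + 0.5 * X[:, 1] ** 2      # nonlinear "true" risk
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # binary outcome

lr = LogisticRegression(max_iter=1000).fit(X, y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

for h in range(5):
    mask = hospital == h
    obs = y[mask].mean()
    exp_lr = lr.predict_proba(X[mask])[:, 1].mean()
    exp_rf = rf.predict_proba(X[mask])[:, 1].mean()
    print(f"hospital {h}: observed {obs:.3f}  expected(LR) {exp_lr:.3f}  expected(RF) {exp_rf:.3f}")
```

Comparing observed to expected rates under each model is the basic observed/expected contrast that risk adjustment relies on.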
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
Probability and possibility-based representations of uncertainty in fault tree analysis.
Flage, Roger; Baraldi, Piero; Zio, Enrico; Aven, Terje
2013-01-01
Expert knowledge is an important source of input to risk analysis. In practice, experts might be reluctant to characterize their knowledge and the related (epistemic) uncertainty using precise probabilities. The theory of possibility allows for imprecision in probability assignments. The associated possibilistic representation of epistemic uncertainty can be combined with, and transformed into, a probabilistic representation; in this article, we show this with reference to a simple fault tree analysis. We apply an integrated (hybrid) probabilistic-possibilistic computational framework for the joint propagation of the epistemic uncertainty on the values of the (limiting relative frequency) probabilities of the basic events of the fault tree, and we use possibility-probability (probability-possibility) transformations for propagating the epistemic uncertainty within purely probabilistic and possibilistic settings. The results of the different approaches (hybrid, probabilistic, and possibilistic) are compared with respect to the representation of uncertainty about the top event (limiting relative frequency) probability. Both the rationale underpinning the approaches and the computational efforts they require are critically examined. We conclude that the approaches relevant in a given setting depend on the purpose of the risk analysis, and that further research is required to make the possibilistic approaches operational in a risk analysis context. © 2012 Society for Risk Analysis.
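One common way to realize such a hybrid propagation is to combine Monte Carlo sampling of the probabilistic variables with alpha-cut interval propagation of the possibilistic ones; the toy fault tree, distributions, and alpha-cut scheme below are illustrative assumptions, not the authors' computational framework.

```python
# Sketch of a hybrid propagation for a toy fault tree: the top event occurs if
# basic events 1 AND 2 both occur, so p_top = p1 * p2. Here p1 is probabilistic
# (beta-distributed) and p2 is possibilistic (triangular). For each Monte Carlo
# draw of p1 and each alpha level, the alpha-cut of p2 gives an interval for
# p_top; collecting these yields lower/upper bounds on its distribution.
import numpy as np

rng = np.random.default_rng(4)
alphas = np.linspace(0.0, 1.0, 11)
low, mode, high = 1e-3, 5e-3, 2e-2          # triangular possibility for p2 (assumed)

lower_samples, upper_samples = [], []
for p1 in rng.beta(2, 200, size=2000):       # epistemic uncertainty on p1 (assumed)
    for a in alphas:
        # alpha-cut of the triangular possibility distribution
        p2_lo = low + a * (mode - low)
        p2_hi = high - a * (high - mode)
        lower_samples.append(p1 * p2_lo)
        upper_samples.append(p1 * p2_hi)

print("p_top 95th percentile, lower bound:", np.percentile(lower_samples, 95))
print("p_top 95th percentile, upper bound:", np.percentile(upper_samples, 95))
```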
Nonlinear Demodulation and Channel Coding in EBPSK Scheme
Chen, Xianqing; Wu, Lenan
2012-01-01
The extended binary phase shift keying (EBPSK) is an efficient modulation technique, and a special impacting filter (SIF) is used in its demodulator to improve the bit error rate (BER) performance. However, the conventional threshold decision cannot achieve the optimum performance, and the SIF brings more difficulty in obtaining the posterior probability for LDPC decoding. In this paper, we concentrate not only on reducing the BER of demodulation, but also on providing accurate posterior probability estimates (PPEs). A new approach for the nonlinear demodulation based on the support vector machine (SVM) classifier is introduced. The SVM method, which selects only a few sampling points from the filter output, was used for getting PPEs. The simulation results show that the accurate posterior probability can be obtained with this method and the BER performance can be improved significantly by applying LDPC codes. Moreover, we analyzed the effect of getting the posterior probability with different methods and different sampling rates. We show that the SVM method has more advantages under bad conditions and is less sensitive to the sampling rate than other methods. Thus, SVM is an effective method for EBPSK demodulation and getting the posterior probability for LDPC decoding. PMID:23213281
Temporal Trends in the Use of Parenteral Nutrition in Critically Ill Patients
Kahn, Jeremy M.; Wunsch, Hannah
2014-01-01
Background: Clinical practice guidelines recommend enteral over parenteral nutrition in critical illness and do not recommend early initiation. Few data are available on parenteral nutrition use or timing of initiation in the ICU or how this use may have changed over time. Methods: We used the Project IMPACT database to evaluate temporal trends in parenteral nutrition use (total and partial parenteral nutrition and lipid supplementation) and timing of initiation in adult ICU admissions from 2001 to 2008. We used χ2 tests and analysis of variance to examine characteristics of patients receiving parenteral nutrition and multilevel multivariate logistic regression models to assess parenteral nutrition use over time, in all patients and in specific subgroups. Results: Of 337,442 patients, 20,913 (6.2%) received parenteral nutrition. Adjusting for patient characteristics, the use of parenteral nutrition decreased modestly over time (adjusted probability, 7.2% in 2001-2002 vs 5.5% in 2007-2008, P < .001). Enteral nutrition use increased simultaneously (adjusted probability, 11.5% in 2001-2002 vs 15.3% in 2007-2008, P < .001). Use of parenteral nutrition declined most rapidly in emergent surgical patients, patients with moderate illness severity, patients in the surgical ICU, and patients admitted to an academic facility (P ≤ .01 for all interactions with year). When used, parenteral nutrition was initiated a median of 2 days (interquartile range, 1-3), after ICU admission and > 90% of patients had parenteral nutrition initiated within 7 days; timing of initiation of parenteral nutrition did not change from 2001 to 2008. Conclusions: Use of parenteral nutrition in US ICUs declined from 2001 through 2008 in all patients and in all examined subgroups, with the majority of parenteral nutrition initiated within the first 7 days in ICU; enteral nutrition use coincidently increased over the same time period. PMID:24233390
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S
2008-04-11
A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
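As background for the exponential mapping mentioned above, a small sketch of mapping an se(2) Lie algebra element to a group element of SE(2) with SciPy's matrix exponential is given below; the parametrization is standard, but this is not the authors' IUR approximation code.

```python
# Sketch: exponential map from the Lie algebra se(2) to the group SE(2).
# An se(2) element is parametrized by (v1, v2, omega); its matrix exponential
# is a planar rigid-body motion (rotation by omega plus a translation).
import numpy as np
from scipy.linalg import expm

def se2_hat(v1, v2, omega):
    """Matrix form of an se(2) Lie algebra element."""
    return np.array([[0.0, -omega, v1],
                     [omega, 0.0,  v2],
                     [0.0,   0.0,  0.0]])

g = expm(se2_hat(1.0, 0.5, np.pi / 4))   # group element in SE(2)
print(np.round(g, 3))
# The top-left 2x2 block is a rotation by pi/4; the last column holds
# the resulting translation.
```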
Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin
2014-03-01
To evaluate and adjust for the verification bias that exists in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example of cervical cancer screening used to introduce the Compare Tests package in R software, with which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by the maximum likelihood estimation method were compared to the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
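A minimal sketch of the inverse-probability weighting idea, with synthetic data in place of the cervical screening study and verification probabilities assumed known, might look as follows.

```python
# Sketch: inverse-probability-weighted sensitivity/specificity when only a
# subset of screened subjects is verified by the gold standard. Each verified
# subject gets weight 1 / P(verified | screening result). Synthetic data.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
disease = rng.random(n) < 0.10
test_pos = np.where(disease, rng.random(n) < 0.84, rng.random(n) < 0.14)

# Verification depends on the screening result (the source of the bias).
p_verify = np.where(test_pos, 0.90, 0.30)
verified = rng.random(n) < p_verify

w = 1.0 / p_verify                        # inverse-probability weights
v = verified
sens = np.sum(w[v] * (test_pos[v] & disease[v])) / np.sum(w[v] * disease[v])
spec = np.sum(w[v] * (~test_pos[v] & ~disease[v])) / np.sum(w[v] * ~disease[v])
print(f"IPW-adjusted sensitivity {sens:.3f}, specificity {spec:.3f}")
```

Restricting the same sums to verified subjects without the weights reproduces the kind of bias that the traditional (naive) estimates show.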
Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; He, Fei; Ma, Chris Y. T.
In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for to ensure the resilience of the infrastructure, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.
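To make the game structure concrete, the sketch below brute-forces a pure-strategy Nash equilibrium for a toy version of such a provider/attacker game; the survival-probability function, unit costs, and component counts are illustrative assumptions rather than the paper's model.

```python
# Sketch: brute-force search for a pure Nash equilibrium in a toy
# attacker/defender game. The infrastructure survival probability is an
# assumed function of the numbers of reinforced (x) and attacked (y)
# components; utilities combine survival probability and linear costs.
import numpy as np

N = 10                                    # components in the system (assumed)
c_def, c_att = 0.04, 0.05                 # unit costs (assumed)

def survival(x, y):
    """Assumed survival probability: attacks matter less as more are reinforced."""
    return np.exp(-0.5 * y / (1.0 + x))

def u_def(x, y): return survival(x, y) - c_def * x
def u_att(x, y): return (1.0 - survival(x, y)) - c_att * y

strategies = range(N + 1)
equilibria = []
for x in strategies:
    for y in strategies:
        best_x = max(strategies, key=lambda xx: u_def(xx, y))
        best_y = max(strategies, key=lambda yy: u_att(x, yy))
        if x == best_x and y == best_y:
            equilibria.append((x, y))
print("pure Nash equilibria (reinforced, attacked):", equilibria)
```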
Atomistic modelling of magnetic nano-granular thin films
NASA Astrophysics Data System (ADS)
Agudelo-Giraldo, J. D.; Arbeláez-Echeverry, O. D.; Restrepo-Parra, E.
2018-03-01
In this work, a complete model for studying the magnetic behaviour of polycrystalline thin films at the nanoscale was developed. This model includes terms such as the exchange interaction, the dipolar interaction and various types of anisotropies. For the first term, a distance-dependent exchange interaction was used with the purpose of quantifying the interaction, mainly at grain boundaries. The third term includes crystalline, surface and boundary anisotropies. Special attention was paid to the disorder vector that determines the loss of cubic symmetry in the crystalline structure. For the dipolar interaction, an implementation similar to the fast multipole method (FMM) was used. Using these tools, modelling and simulations were carried out varying the number of grains, and the results showed a strong dependence of the magnetic properties on this parameter. Comparisons of the critical temperature and the saturation magnetization as functions of the number of grains were performed for samples with and without factors such as the surface and boundary anisotropies and the dipolar interaction. It was observed that the inclusion of these parameters produced a decrease in the critical temperature and the saturation magnetization; furthermore, in both cases, including and not including the disorder parameters, not only the critical temperature but also the saturation magnetization exhibited a range of values that depends on the number of grains. This critical interval arises because each grain can transition to the ferromagnetic state at a different value of the critical temperature. The processes of zero field cooling (ZFC), field cooling (FCC) and field cooling in warming mode (FCW) were necessary for understanding the mono-domain regime around the transition temperature, owing to the high probability of a superparamagnetic (SPM) state.
Antiferromagnetic Potts Model on the Erdős-Rényi Random Graph
NASA Astrophysics Data System (ADS)
Contucci, Pierluigi; Dommers, Sander; Giardinà, Cristian; Starr, Shannon
2013-10-01
We study the antiferromagnetic Potts model on the Poissonian Erdős-Rényi random graph. By identifying a suitable interpolation structure and an extended variational principle, together with a positive-temperature second-moment analysis, we prove the existence of a phase transition at a positive critical temperature. Upper and lower bounds on the critical temperature are obtained from the stability analysis of the replica symmetric solution (recovered in the framework of Derrida-Ruelle probability cascades) and from an entropy positivity argument.
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
Importance of small-degree nodes in assortative networks with degree-weight correlations
NASA Astrophysics Data System (ADS)
Ma, Sijuan; Feng, Ling; Monterola, Christopher Pineda; Lai, Choy Heng
2017-10-01
It has been known that assortative network structure plays an important role in spreading dynamics for unweighted networks. Yet its influence on weighted networks is not clear, in particular when the weight is strongly correlated with the degrees of the nodes, as we empirically observed in Twitter. Here we use the self-consistent probability method and a revised nonperturbative heterogeneous mean-field theory method to investigate this influence on both susceptible-infective-recovered (SIR) and susceptible-infective-susceptible (SIS) spreading dynamics. Both our simulation and theoretical results show that while the critical threshold is not significantly influenced by the assortativity, the prevalence in the supercritical regime shows a crossover under different degree-weight correlations. In particular, unlike the case of random mixing networks, in assortative networks a negative degree-weight correlation leads to higher prevalence beyond the critical transmissivity than a positive correlation does. In addition, the previously observed inhibition effect on spreading velocity by assortative structure is not apparent in negatively degree-weight correlated networks, while it is enhanced for positively correlated ones. Detailed investigation into the degree distribution of the infected nodes reveals that small-degree nodes play essential roles in the supercritical phase of both SIR and SIS spreading. Our results have direct implications for understanding viral information spreading over online social networks and epidemic spreading over contact networks.
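A minimal simulation sketch of SIR spreading on a weighted graph, with an assumed edge-weight rule and a per-edge transmission probability increasing in the weight, is given below; the graph model, weight-degree correlation, and parameters are toy placeholders rather than the Twitter-derived networks studied in the paper.

```python
# Sketch: discrete-time SIR spreading on a weighted graph, with per-edge
# transmission probability 1 - (1 - T)**w so that heavier edges transmit
# more readily. Graph and parameters are toy placeholders.
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)
G = nx.barabasi_albert_graph(2000, 3, seed=6)
for u, v in G.edges:
    # Assumed negative degree-weight correlation: hubs get lighter edges.
    G[u][v]["w"] = 1.0 / np.sqrt(G.degree[u] * G.degree[v])

T = 0.3                                   # baseline transmissivity (assumed)
state = {n: "S" for n in G}
state[0] = "I"
while any(s == "I" for s in state.values()):
    new_state = dict(state)
    for n, s in state.items():
        if s == "I":
            for nb in G.neighbors(n):
                if state[nb] == "S":
                    p = 1 - (1 - T) ** G[n][nb]["w"]
                    if rng.random() < p:
                        new_state[nb] = "I"
            new_state[n] = "R"            # recover after one step
    state = new_state

prevalence = sum(s == "R" for s in state.values()) / G.number_of_nodes()
print("final outbreak size (fraction recovered):", round(prevalence, 3))
```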
Fluorescence-based visualization of autophagic activity predicts mouse embryo viability
NASA Astrophysics Data System (ADS)
Tsukamoto, Satoshi; Hara, Taichi; Yamamoto, Atsushi; Kito, Seiji; Minami, Naojiro; Kubota, Toshiro; Sato, Ken; Kokubo, Toshiaki
2014-03-01
Embryo quality is a critical parameter in assisted reproductive technologies. Although embryo quality can be evaluated morphologically, embryo morphology does not correlate perfectly with embryo viability. To improve this, it is important to understand which molecular mechanisms are involved in embryo quality control. Autophagy is an evolutionarily conserved catabolic process in which cytoplasmic materials sequestered by autophagosomes are degraded in lysosomes. We previously demonstrated that autophagy is highly activated after fertilization and is essential for further embryonic development. Here, we developed a simple fluorescence-based method for visualizing autophagic activity in live mouse embryos. Our method is based on imaging of the fluorescence intensity of GFP-LC3, a versatile marker for autophagy, which is microinjected into the embryos. Using this method, we show that embryonic autophagic activity declines with advancing maternal age, probably due to a decline in the activity of lysosomal hydrolases. We also demonstrate that embryonic autophagic activity is associated with the developmental viability of the embryo. Our results suggest that embryonic autophagic activity can be utilized as a novel indicator of embryo quality.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
NASA Astrophysics Data System (ADS)
Nelson, Adam
Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they are responsible for describing the change in direction and energy of neutrons. These matrices, however, are difficult to correctly calculate from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters when using deterministic methods requires a set of assumptions which do not hold true in all conditions. These quantities can be calculated accurately with stochastic methods, however doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. This improved method of tallying the scattering moment matrices is based on recognizing that all of the outgoing particle information is known a priori and can be taken advantage of to increase the tallying efficiency (therefore reducing the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every one of the scattering moment matrices elements with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process potentially offering even further improvement to the tallying efficiency. Unfortunately, to produce the needed distributions, the probability functions themselves must undergo an integration over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by way of a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than the currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. This method is then tested in a pin cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure-of-merit for generating scattering moment matrices and fission energy spectra was significantly improved.
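For orientation, the basic bookkeeping behind a scattering-moment tally can be sketched as accumulating Legendre moments of the scattering cosine per group-to-group transfer; the synthetic events and normalization below are generic illustrations, not the NDPP/OpenMC scheme or its track-length estimator.

```python
# Sketch: accumulate Legendre moments of the scattering cosine mu for each
# (incoming group, outgoing group) pair from sampled scattering events.
# Synthetic events stand in for a real Monte Carlo transport tally.
import numpy as np
from scipy.special import eval_legendre

n_groups, order = 3, 3
rng = np.random.default_rng(7)

# Hypothetical sampled events: (group_in, group_out, mu, weight)
events = [(rng.integers(n_groups), rng.integers(n_groups),
           rng.uniform(-1, 1), 1.0) for _ in range(20000)]

moments = np.zeros((n_groups, n_groups, order + 1))
weights = np.zeros((n_groups, n_groups))
for g_in, g_out, mu, wgt in events:
    for l in range(order + 1):
        moments[g_in, g_out, l] += wgt * eval_legendre(l, mu)
    weights[g_in, g_out] += wgt

# Normalized moments; isotropic synthetic data give ~1 for l=0, ~0 otherwise.
with np.errstate(invalid="ignore", divide="ignore"):
    normalized = moments / weights[..., None]
print(np.round(normalized[0, 0], 3))
```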
Ryoo, Sung Weon; Park, Young Kil; Park, Sue-Nie; Shim, Young Soo; Liew, Hyunjeong; Kang, Seongman; Bai, Gill-Han
2007-06-01
In Korea, the Mycobacterium tuberculosis K-strain is the most prevalent clinical isolates and belongs to the Beijing family. In this study, we conducted comparative porteomics of expressed proteins of clinical isolates of the K-strain with H37Rv, H37Ra as well as the vaccine strain of Mycobacterium bovis BCG following phagocytosis by the human monocytic cell line U-937. Proteins were analyzed by 2-D PAGE and MALDITOF-MS. Two proteins, Mb1363 (probable glycogen phosphorylase GlgP) and MT2656 (Haloalkane dehalogenase LinB) were most abundant after phagocytosis of M. tuberculosis K-strain. This approach provides a method to determine specific proteins that may have critical roles in tuberculosis pathogenesis.
Numerical algebraic geometry for model selection and its application to the life sciences
Gross, Elizabeth; Davis, Brent; Ho, Kenneth L.; Bates, Daniel J.
2016-01-01
Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology. PMID:27733697
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
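A minimal Monte Carlo sketch of the probability-vs.-value idea for NPV, with purely illustrative input distributions and a one-period cash-flow model, might look like this.

```python
# Sketch: Monte Carlo simulation of net present value (NPV) from uncertain
# reserves, price and cost inputs, yielding a probability-vs-value curve.
# Distributions and the cash-flow model are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(8)
n = 100000
reserves = rng.lognormal(mean=np.log(1.0e6), sigma=0.4, size=n)   # bbl
price = rng.normal(60.0, 10.0, size=n)                            # $/bbl
capex = rng.triangular(8e6, 10e6, 15e6, size=n)                   # $
opex_frac = 0.35

npv = reserves * price * (1 - opex_frac) / 1.10 - capex           # one-period toy model
for p in (10, 50, 90):
    print(f"P{p} NPV: {np.percentile(npv, p) / 1e6:.1f} M$")
print("probability NPV > 0:", np.mean(npv > 0).round(3))
```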
Precursors of chicken flavor. II. Identification of key flavor precursors using sensory methods.
Aliani, Michel; Farmer, Linda J
2005-08-10
Sensory evaluation was used to identify flavor precursors that are critical for flavor development in cooked chicken. Among the potential flavor precursors studied (thiamin, inosine 5'-monophosphate, ribose, ribose-5-phosphate, glucose, and glucose-6-phosphate), ribose appears most important for chicken aroma. An elevated concentration (added or natural) of only 2-4-fold the natural concentration gives an increase in the selected aroma and flavor attributes of cooked chicken meat. Assessment of the volatile odor compounds by gas chromatography-odor assessment and gas chromatography-mass spectrometry showed that ribose increased odors described as "roasted" and "chicken" and that the changes in odor due to additional ribose are probably caused by elevated concentrations of compounds such as 2-furanmethanethiol, 2-methyl-3-furanthiol, and 3-methylthiopropanal.
Visualizing Big Data Outliers through Distributed Aggregation.
Wilkinson, Leland
2017-08-29
Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
Nompari, Luca; Orlandini, Serena; Pasquini, Benedetta; Campa, Cristiana; Rovini, Michele; Del Bubba, Massimo; Furlanetto, Sandra
2018-02-01
Bexsero is the first approved vaccine for active immunization of individuals from 2 months of age and older to prevent invasive disease caused by Neisseria meningitidis serogroup B. The active components of the vaccine are Neisseria Heparin Binding Antigen, factor H binding protein, Neisseria adhesin A, produced in Escherichia coli cells by recombinant DNA technology, and Outer Membrane Vesicles (expressing Porin A and Porin B), produced by fermentation of Neisseria meningitidis strain NZ98/254. All the Bexsero active components are adsorbed on aluminum hydroxide, and the unadsorbed antigen content is a product critical quality attribute. In this paper the development of a fast, selective and sensitive ultra-high-performance liquid chromatography (UHPLC) method for the determination of the Bexsero antigens in the vaccine supernatant is presented. For the first time in the literature, the Quality by Design (QbD) principles were applied to the development of an analytical method aimed at the quality control of a vaccine product. The UHPLC method was fully developed within the QbD framework, the new paradigm of quality outlined in the International Conference on Harmonisation guidelines. Critical method attributes (CMAs) were identified as the capacity factor of Neisseria Heparin Binding Antigen, the antigen resolutions and the peak areas. After a scouting phase, aimed at selecting a suitable and fast UHPLC operative mode for the separation of the vaccine antigens, risk assessment tools were employed to define the critical method parameters to be considered in the screening phase. Screening designs were applied for investigating at first the effects of vial type and sample concentration, and then the effects of injection volume, column type, organic phase starting concentration, ramp time and temperature. Response Surface Methodology pointed out the presence of several significant interaction effects and, with the support of Monte-Carlo simulations, made it possible to map out the design space, at a selected probability level, for the desired CMAs. The selected working conditions gave a complete separation of the antigens in about 5 min. Robustness testing was carried out by a multivariate approach and a control strategy was implemented by defining system suitability tests. The method was qualified for the analysis of the Bexsero vaccine. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
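The discretize-and-regularize step can be sketched as follows, assuming a generic Gaussian blur kernel in place of the v sin I projection kernel and a hand-picked regularization parameter rather than the selection procedure proposed in the paper.

```python
# Sketch: discretize a Fredholm integral of the first kind, y = K f, and
# recover f by Tikhonov regularization: f_hat = argmin ||K f - y||^2 + lam*||f||^2.
# A Gaussian blur kernel stands in for the v*sin(I) projection kernel.
import numpy as np

m = 100
x = np.linspace(0, 1, m)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.05) ** 2)
K /= K.sum(axis=1, keepdims=True)

f_true = np.exp(-0.5 * ((x - 0.4) / 0.08) ** 2)          # "true" distribution (assumed)
rng = np.random.default_rng(9)
y = K @ f_true + rng.normal(0, 0.01, m)                  # noisy observed data

lam = 1e-2                                               # chosen by hand here
f_hat = np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)
print("max abs reconstruction error:", np.abs(f_hat - f_true).max().round(3))
```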
Game-Theoretic strategies for systems of components using product-form utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.
Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
Probabilistic Sizing and Verification of Space Ceramic Structures
NASA Astrophysics Data System (ADS)
Denaux, David; Ballhause, Dirk; Logut, Daniel; Lucarelli, Stefano; Coe, Graham; Laine, Benoit
2012-07-01
Sizing of ceramic parts is best optimised using a probabilistic approach which takes into account the preexisting flaw distribution in the ceramic part to compute a probability of failure of the part depending on the applied load, instead of a maximum allowable load as for a metallic part. This requires extensive knowledge of the material itself but also an accurate control of the manufacturing process. In the end, risk reduction approaches such as proof testing may be used to lower the final probability of failure of the part. Sizing and verification of ceramic space structures have been performed by Astrium for more than 15 years, both with Zerodur and SiC: Silex telescope structure, Seviri primary mirror, Herschel telescope, Formosat-2 instrument, and other ceramic structures flying today. Throughout this period of time, Astrium has investigated and developed experimental ceramic analysis tools based on the Weibull probabilistic approach. In the scope of the ESA/ESTEC study: “Mechanical Design and Verification Methodologies for Ceramic Structures”, which is to be concluded in the beginning of 2012, existing theories, technical state-of-the-art from international experts, and Astrium experience with probabilistic analysis tools have been synthesized into a comprehensive sizing and verification method for ceramics. Both classical deterministic and more optimised probabilistic methods are available, depending on the criticality of the item and on optimisation needs. The methodology, based on proven theory, has been successfully applied to demonstration cases and has shown its practical feasibility.
Using Bayes' theorem for free energy calculations
NASA Astrophysics Data System (ADS)
Rogers, David M.
Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory, and the maximum entropy formulation of statistical mechanics, before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner shell and hard sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142], and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.
Waters, Martha; McKernan, Lauralynn; Maier, Andrew; Jayjock, Michael; Schaeffer, Val; Brosseau, Lisa
2015-01-01
The fundamental goal of this article is to describe, define, and analyze the components of the risk characterization process for occupational exposures. Current methods are described for the probabilistic characterization of exposure, including newer techniques that have increasing applications for assessing data from occupational exposure scenarios. In addition, because the probability of health effects reflects variability in both the exposure estimate and the dose-response curve, integrated consideration of the variability surrounding both components of the risk characterization provides greater information to the occupational hygienist. Probabilistic tools provide a more informed view of exposure than discrete point estimates for these inputs to the risk characterization process. Active use of such tools for exposure and risk assessment will lead to a scientifically supported worker health protection program. Understanding the bases for an occupational risk assessment, and focusing on important sources of variability and uncertainty, enables characterizing occupational risk in terms of a probability, rather than a binary decision of acceptable risk or unacceptable risk. A critical review of existing methods highlights several conclusions: (1) exposure estimates and the dose-response are affected by both variability and uncertainty, and a well-developed risk characterization reflects and communicates this consideration; (2) occupational risk is probabilistic in nature and most accurately considered as a distribution, not a point estimate; and (3) occupational hygienists have a variety of tools available to incorporate concepts of risk characterization into occupational health and practice. PMID:26302336
The probability of seizures during EEG monitoring in critically ill adults
Westover, M. Brandon; Shafi, Mouhsin M.; Bianchi, Matt T.; Moura, Lidia M.V.R.; O’Rourke, Deirdre; Rosenthal, Eric S.; Chu, Catherine J.; Donovan, Samantha; Hoch, Daniel B.; Kilbride, Ronan D.; Cole, Andrew J.; Cash, Sydney S.
2014-01-01
Objective To characterize the risk for seizures over time in relation to EEG findings in hospitalized adults undergoing continuous EEG monitoring (cEEG). Methods Retrospective analysis of cEEG data and medical records from 625 consecutive adult inpatients monitored at a tertiary medical center. Using survival analysis methods, we estimated the time-dependent probability that a seizure will occur within the next 72 h, if no seizure has occurred yet, as a function of EEG abnormalities detected so far. Results Seizures occurred in 27% (168/625). The first seizure occurred early (<30 min of monitoring) in 58% (98/168). In 527 patients without early seizures, 159 (30%) had early epileptiform abnormalities, versus 368 (70%) without. Seizures were eventually detected in 25% of patients with early epileptiform discharges, versus 8% without early discharges. The 72-h risk of seizures declined below 5% if no epileptiform abnormalities were present in the first two hours, whereas 16 h of monitoring were required when epileptiform discharges were present. 20% (74/388) of patients without early epileptiform abnormalities later developed them; 23% (17/74) of these ultimately had seizures. Only 4% (12/294) experienced a seizure without preceding epileptiform abnormalities. Conclusions Seizure risk in acute neurological illness decays rapidly, at a rate dependent on abnormalities detected early during monitoring. This study demonstrates that substantial risk stratification is possible based on early EEG abnormalities. Significance These findings have implications for patient-specific determination of the required duration of cEEG monitoring in hospitalized patients. PMID:25082090
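The conditional 72-h risk described above can be illustrated with a simple Kaplan-Meier construction. The sketch below is illustrative only: the toy monitoring times are invented and the study's actual survival-analysis details may differ. It estimates r(t) = 1 - S(t + 72)/S(t), the probability of a first seizure within the next 72 h given that none has occurred by hour t.

```python
# Illustrative sketch: Kaplan-Meier estimate of the conditional risk of a
# seizure within the next 72 h given none so far, r(t) = 1 - S(t+72)/S(t),
# from (time-to-first-seizure, event-observed) pairs.
import numpy as np

def km_survival(times, events):
    """Right-censored Kaplan-Meier curve; returns event times and S(t)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(times >= t)
        d = np.sum((times == t) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def surv_at(t, uniq, surv):
    idx = np.searchsorted(uniq, t, side="right") - 1
    return 1.0 if idx < 0 else surv[idx]

def risk_next_72h(t, uniq, surv):
    s_t = surv_at(t, uniq, surv)
    return 0.0 if s_t == 0 else 1.0 - surv_at(t + 72.0, uniq, surv) / s_t

# Hypothetical monitoring data: hours to first seizure, 1 = seizure, 0 = censored
hours = [0.3, 2.0, 5.5, 10.0, 24.0, 48.0, 72.0, 96.0]
event = [1,   1,   0,   1,    0,    1,    0,    0]
u, s = km_survival(hours, event)
print(risk_next_72h(2.0, u, s))   # conditional 72-h risk after 2 h seizure-free
```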
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities in the transition matrices. Next, we regard the mobility model of an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified. PMID:27508502
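A minimal sketch of the core bookkeeping follows, under the assumption that the mined single-step rules reduce to origin-destination counts; the location labels and counts are invented for illustration. The counts are row-normalized into a one-step transition matrix, which is then raised to the power n to obtain the n-step probabilities used in the rough prediction.

```python
# Minimal sketch (hypothetical data): build a one-step transition probability
# matrix from single-step sequential rules (origin -> destination counts),
# row-normalize it, and obtain n-step transition probabilities as matrix powers.
import numpy as np

locations = ["A", "B", "C"]                      # assumed anonymized location labels
idx = {loc: k for k, loc in enumerate(locations)}

# Counts mined from sequential spatial-temporal k-anonymity rules (made up here)
rule_counts = {("A", "B"): 30, ("A", "C"): 10, ("B", "C"): 25,
               ("B", "A"): 5,  ("C", "A"): 20, ("C", "B"): 20}

P = np.zeros((len(locations), len(locations)))
for (src, dst), c in rule_counts.items():
    P[idx[src], idx[dst]] = c
P = P / P.sum(axis=1, keepdims=True)             # normalize each row to sum to 1

n = 3
P_n = np.linalg.matrix_power(P, n)               # n-step transition probabilities

# n-step transition probability from A to C, the quantity used in rough prediction
print(P_n[idx["A"], idx["C"]])
```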
Iliou, Katerina; Malenović, Anđelija; Loukas, Yannis L; Dotsikas, Yannis
2018-02-05
A novel liquid chromatography-tandem mass spectrometry (LC-MS/MS) method is presented for the quantitative determination of two potential genotoxic impurities (PGIs) in rabeprazole active pharmaceutical ingredient (API). In order to overcome the analytical challenges in the trace analysis of PGIs, a development procedure supported by Quality-by-Design (QbD) principles was evaluated. Efficient separation between rabeprazole and the two PGIs in the shortest analysis time was set as the defined analytical target profile (ATP), and to this purpose a switching valve allowed the flow to be diverted to waste when rabeprazole eluted. The selected critical quality attributes (CQAs) were the separation criterion s between the critical peak pair and the capacity factor k of the last eluted compound. The effect of the following critical process parameters (CPPs) on the CQAs was studied: the ACN content (%), the pH and the concentration of the buffer salt in the mobile phase, as well as the stationary phase of the analytical column. A D-optimal design was implemented to set the plan of experiments with UV detection. In order to define the design space, Monte Carlo simulations with 5000 iterations were performed. Acceptance criteria were met for a C8 column (50 × 4 mm, 5 μm), and the region having a probability π ≥ 95% of achieving satisfactory values of all defined CQAs was computed. The working point was selected with a mobile phase consisting of ACN and 11 mM ammonium formate at a ratio of 31/69 (v/v), with the pH of the aqueous phase set to 6.8. The LC protocol was transferred to LC-MS/MS and validated according to ICH guidelines. Copyright © 2017 Elsevier B.V. All rights reserved.
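The design-space step can be illustrated with a toy Monte Carlo calculation. In the sketch below, all coefficients, factor codings and the acceptance limit are assumptions rather than values from the paper: a fitted response-surface model for the separation criterion s is sampled 5000 times at each candidate operating point, and the design space is taken as the region where the probability of meeting the criterion is at least 95%.

```python
# Hedged sketch of design-space mapping: given a fitted response-surface model
# for a CQA (coefficients and residual error are made up here), Monte Carlo
# simulation estimates the probability that the CQA meets its acceptance
# criterion at each operating point; the design space is the region with
# probability >= 0.95.
import numpy as np

rng = np.random.default_rng(3)
beta = np.array([2.5, 0.4, -0.3, -0.2])      # intercept, %ACN, pH, interaction (assumed)
sigma = 0.15                                  # assumed residual standard deviation

def prob_meets_spec(acn, ph, n_iter=5000, spec=2.0):
    x = np.array([1.0, acn, ph, acn * ph])    # coded factor levels in [-1, 1]
    pred = x @ beta + rng.normal(0.0, sigma, n_iter)
    return np.mean(pred >= spec)

grid = np.linspace(-1, 1, 21)
design_space = [(a, p) for a in grid for p in grid if prob_meets_spec(a, p) >= 0.95]
print(len(design_space), "grid points inside the 95% design space")
```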
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
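A compact Monte Carlo sketch of the booster-expenditure question reads as follows; the attrition probability, the 20-use retirement rule and the 440-launch mission length are illustrative settings, not the study's actual parameters.

```python
# Monte Carlo sketch (illustrative parameters): estimate how many boosters are
# expended over a 440-launch mission when each launch loses the booster with a
# fixed attrition probability and boosters are retired after at most 20 uses.
import numpy as np

def boosters_needed(n_launches=440, p_attrition=0.02, max_uses=20, rng=None):
    rng = rng or np.random.default_rng()
    boosters, uses = 1, 0
    for launch in range(n_launches):
        if uses >= max_uses:              # retire after 20 uses, bring a new booster
            boosters += 1
            uses = 0
        uses += 1
        if rng.random() < p_attrition:    # booster lost on this launch
            if launch < n_launches - 1:   # replace only if more launches remain
                boosters += 1
            uses = 0
    return boosters

rng = np.random.default_rng(1)
samples = np.array([boosters_needed(rng=rng) for _ in range(10_000)])
print(samples.mean(), np.percentile(samples, 95))   # expected need and 95th percentile
```

A state-probability formulation would instead track the exact distribution of the number of boosters expended, trading formulation effort for less computing time, as the abstract notes.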
Convergence of Transition Probability Matrix in CLV-Markov Models
NASA Astrophysics Data System (ADS)
Permana, D.; Pasaribu, U. S.; Indratno, S. W.; Suprayogi, S.
2018-04-01
A transition probability matrix is an arrangement of the transition probabilities from one state to another in a Markov chain model (MCM). One interesting aspect of an MCM is its long-run behavior, which follows from a property of the n-step transition probability matrix: the convergence of the n-step transition matrix as n tends to infinity. Mathematically, finding the convergence of the transition probability matrix means finding the limit of the matrix raised to the power n as n tends to infinity. The convergent form of the transition probability matrix is of particular interest because it brings the matrix to its stationary form, which is useful for predicting the probability of transitions between states in the future. The method usually used to find this limit is the limiting-distribution approach. In this paper, the convergence of the transition probability matrix is instead obtained using a simple concept from linear algebra, namely diagonalization of the matrix. This approach has a higher level of complexity because the matrix must first be diagonalized, but it has the advantage of yielding a closed form for the n-th power of the transition probability matrix, which makes it possible to inspect the transition matrix before it reaches stationarity. Example cases are taken from a CLV model using an MCM, the CLV-Markov model. Several models are examined through their transition probability matrices to find their convergent forms. The result is that the convergence of the transition probability matrix obtained through diagonalization agrees with that obtained using the commonly used limiting-distribution method.
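The diagonalization idea can be shown in a few lines of Python; the 3x3 transition matrix below is a made-up example, not one of the CLV-Markov matrices from the paper. Writing P = V D V^{-1} gives the closed form P^n = V D^n V^{-1}, and letting n grow shows the rows converging to the stationary distribution.

```python
# Small sketch of the diagonalization approach: P = V D V^{-1}, so
# P^n = V D^n V^{-1}; increasing n shows the stationary (limiting) form.
import numpy as np

P = np.array([[0.7, 0.2, 0.1],      # hypothetical transition matrix
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

eigvals, V = np.linalg.eig(P)
V_inv = np.linalg.inv(V)

def P_power(n):
    """Closed form for P^n via diagonalization (assumes P is diagonalizable)."""
    return np.real(V @ np.diag(eigvals**n) @ V_inv)

print(P_power(5))        # transition probabilities after 5 steps
print(P_power(200))      # rows converge to the stationary distribution
```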
Liu, Xian; Engel, Charles C
2012-12-20
Researchers often encounter longitudinal health data characterized by three or more ordinal or nominal categories. Random-effects multinomial logit models are generally applied to account for potential lack of independence inherent in such clustered data. When parameter estimates are used to describe longitudinal processes, however, random effects, both between and within individuals, need to be retransformed for correctly predicting outcome probabilities. This study attempts to go beyond existing work by developing a retransformation method that derives longitudinal growth trajectories of unbiased health probabilities. We estimated variances of the predicted probabilities by using the delta method. Additionally, we transformed the covariates' regression coefficients on the multinomial logit function, not substantively meaningful, to the conditional effects on the predicted probabilities. The empirical illustration uses the longitudinal data from the Asset and Health Dynamics among the Oldest Old. Our analysis compared three sets of the predicted probabilities of three health states at six time points, obtained from, respectively, the retransformation method, the best linear unbiased prediction, and the fixed-effects approach. The results demonstrate that neglect of retransforming random errors in the random-effects multinomial logit model results in severely biased longitudinal trajectories of health probabilities as well as overestimated effects of covariates on the probabilities. Copyright © 2012 John Wiley & Sons, Ltd.
Landslide Susceptibility Statistical Methods: A Critical and Systematic Literature Review
NASA Astrophysics Data System (ADS)
Mihir, Monika; Malamud, Bruce; Rossi, Mauro; Reichenbach, Paola; Ardizzone, Francesca
2014-05-01
Landslide susceptibility assessment, the subject of this systematic review, is aimed at understanding the spatial probability of slope failures under a set of geomorphological and environmental conditions. It is estimated that about 375 of the landslides that occur globally each year are fatal, with around 4600 people killed per year. Past studies have brought out the increasing cost of landslide damage, which can primarily be attributed to human occupation and increased human activities in vulnerable environments. To evaluate and reduce landslide risk, many scientists have made an effort to efficiently map landslide susceptibility using different statistical methods. In this paper, we carry out a critical and systematic landslide susceptibility literature review, in terms of the different statistical methods used. For each of a broad set of studies reviewed we note: (i) study geography region and areal extent, (ii) landslide types, (iii) inventory type and temporal period covered, (iv) mapping technique, (v) thematic variables used, (vi) statistical models, (vii) assessment of model skill, (viii) uncertainty assessment methods, and (ix) validation methods. We then pulled out broad trends within our review of landslide susceptibility, particularly regarding the statistical methods. We found that the most common statistical methods used in the study of landslide susceptibility include logistic regression, artificial neural networks, discriminant analysis and weight of evidence. Although most of the studies we reviewed assessed model skill, very few assessed model uncertainty. In terms of geographic extent, the largest number of landslide susceptibility zonations were in Turkey, Korea, Spain, Italy and Malaysia. However, there are also many landslides and fatalities in other localities, particularly India, China, the Philippines, Nepal, Indonesia, Guatemala and Pakistan, where far fewer landslide susceptibility studies are available in the peer-reviewed literature. This raises some concern that existing studies do not always cover all the regions that currently experience landslides and landslide fatalities.
Approved Methods and Algorithms for DoD Risk-Based Explosives Siting
2007-02-02
Nomenclature (excerpt): Pgha, probability of a person being in the glass hazard area; Phit, probability of hit; Phit(f), probability of hit for fatality; Phit(maj), probability of hit for major injury; Phit(min), probability of hit for minor injury; Pi, debris probability densities at the ES; PMaj(pair), individual ... combined high-angle and combined low-angle tables. A unique probability of hit is calculated for the three consequences of fatality, Phit(f), major injury ...
Surveillance guidelines for disease elimination: a case study of canine rabies.
Townsend, Sunny E; Lembo, Tiziana; Cleaveland, Sarah; Meslin, François X; Miranda, Mary Elizabeth; Putra, Anak Agung Gde; Haydon, Daniel T; Hampson, Katie
2013-05-01
Surveillance is a critical component of disease control programmes but is often poorly resourced, particularly in developing countries lacking good infrastructure and especially for zoonoses which require combined veterinary and medical capacity and collaboration. Here we examine how successful control, and ultimately disease elimination, depends on effective surveillance. We estimated that detection probabilities of <0.1 are broadly typical of rabies surveillance in endemic countries and areas without a history of rabies. Using outbreak simulation techniques we investigated how the probability of detection affects outbreak spread, and outcomes of response strategies such as time to control an outbreak, probability of elimination, and the certainty of declaring freedom from disease. Assuming realistically poor surveillance (probability of detection <0.1), we show that proactive mass dog vaccination is much more effective at controlling rabies and no more costly than campaigns that vaccinate in response to case detection. Control through proactive vaccination followed by 2 years of continuous monitoring and vaccination should be sufficient to guarantee elimination from an isolated area not subject to repeat introductions. We recommend that rabies control programmes ought to be able to maintain surveillance levels that detect at least 5% (and ideally 10%) of all cases to improve their prospects of eliminating rabies, and this can be achieved through greater intersectoral collaboration. Our approach illustrates how surveillance is critical for the control and elimination of diseases such as canine rabies and can provide minimum surveillance requirements and technical guidance for elimination programmes under a broad-range of circumstances. Copyright © 2012 Elsevier Ltd. All rights reserved.
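A back-of-the-envelope calculation helps make the surveillance thresholds concrete. Assuming, purely for illustration, that cases are detected independently with probability p, the chance that an outbreak of n cases triggers at least one detection is 1 - (1 - p)^n; the snippet below tabulates this for a few values and shows why detection probabilities below 0.1 allow sizeable outbreaks to go unnoticed.

```python
# Illustrative assumption of independent per-case detection (not the paper's
# full outbreak simulation): probability that an outbreak of n_cases cases
# produces at least one detected case.
def prob_outbreak_detected(p_detect, n_cases):
    return 1.0 - (1.0 - p_detect) ** n_cases

for p in (0.05, 0.10, 0.50):
    print(p, [round(prob_outbreak_detected(p, n), 2) for n in (1, 5, 10, 20)])
```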
3D abnormal behavior recognition in power generation
NASA Astrophysics Data System (ADS)
Wei, Zhenhua; Li, Xuesen; Su, Jie; Lin, Jie
2011-06-01
So far, most research on human behavior recognition has focused on simple individual behaviors such as waving, crouching, jumping and bending. This paper focuses on abnormal behaviors involving carried objects in power generation settings, such as using a mobile communication device in the main control room, taking a helmet off while working, and lying down in a high place. Because the color and shape of these objects are fixed, edge detection guided by color tracking is adopted to recognize the objects carried by workers. The paper introduces a method that uses the geometric characteristics of the skeleton and its joint angles to represent sequences of three-dimensional human behavior data. A semi-join critical-step Hidden Markov Model is then adopted, weighting the output probabilities of the critical steps to reduce the computational complexity. A model is trained for every behavior; meanwhile, skeleton frames are selected from the 3D behavior samples to form a critical-step set. This set is a bridge linking 2D observed behavior with 3D human joint features, so 3D reconstruction is not required during the 2D behavior recognition phase. At the beginning of the recognition process, the best match for every frame of a 2D observed sample is found in the 3D skeleton set. After that, the 2D observed skeleton frame sample is identified as a specific 3D behavior by the behavior classifier. The effectiveness of the proposed algorithm is demonstrated with experiments in a similar power generation environment.
Network congestion control algorithm based on Actor-Critic reinforcement learning model
NASA Astrophysics Data System (ADS)
Xu, Tao; Gong, Lina; Zhang, Wei; Li, Xuhong; Wang, Xia; Pan, Wenwen
2018-04-01
Aiming at the network congestion control problem, a congestion control algorithm based on the Actor-Critic reinforcement learning model is designed. By incorporating a genetic algorithm into the congestion control strategy, network congestion problems can be detected and prevented more effectively. A simulation experiment of the network congestion control algorithm is designed according to Actor-Critic reinforcement learning. The simulation experiments verify that the AQM controller can predict the dynamic characteristics of the network system. Moreover, the learning strategy is adopted to optimize network performance, and the packet-dropping probability is adjusted adaptively so as to improve network performance and avoid congestion. Based on these findings, it is concluded that the network congestion control algorithm based on the Actor-Critic reinforcement learning model can effectively avoid the occurrence of TCP network congestion.
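For orientation, the following toy sketch is a simplification rather than the paper's controller; the state discretization, action set and reward are assumptions. It shows a one-step actor-critic update in which the action is the packet-drop probability of an AQM-style agent and the state is a discretized queue length.

```python
# Toy sketch: one-step actor-critic (TD(0) critic plus softmax-policy actor)
# for an AQM-style agent whose action is the packet-drop probability.
import numpy as np

n_states = 10
actions = np.array([0.0, 0.05, 0.1, 0.2])   # candidate drop probabilities (assumed)
V = np.zeros(n_states)                      # critic: state values
H = np.zeros((n_states, len(actions)))      # actor: action preferences
alpha_v, alpha_h, gamma = 0.1, 0.1, 0.95

def softmax(h):
    e = np.exp(h - h.max())
    return e / e.sum()

def actor_critic_step(s, s_next, reward, a_idx):
    """TD(0) critic update plus policy-gradient-style actor update."""
    td_error = reward + gamma * V[s_next] - V[s]
    V[s] += alpha_v * td_error
    pi = softmax(H[s])
    grad = -pi
    grad[a_idx] += 1.0                      # d log pi(a|s) / dH[s]
    H[s] += alpha_h * td_error * grad
    return td_error

# Hypothetical transition: queue grew from state 3 to 7, reward penalizes delay/loss
a = np.random.choice(len(actions), p=softmax(H[3]))
print("chosen drop probability:", actions[a])
actor_critic_step(3, 7, reward=-1.0, a_idx=a)
```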
Use of simulation to compare the performance of minimization with stratified blocked randomization.
Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John
2009-01-01
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
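The kind of allocation rule such simulations evaluate can be sketched as a biased-coin minimization in the spirit of Pocock and Simon; in the snippet below the factors, levels and the assignment probability p_assign are illustrative choices, not values from the paper.

```python
# Sketch of biased-coin minimization: assign the arm that best balances the
# marginal totals of the new patient's factor levels with probability p_assign,
# otherwise assign a randomly chosen other arm.
import random

def minimization_assign(new_patient, allocated, arms=("A", "B"), p_assign=0.8):
    """allocated is a list of (patient_dict, assigned_arm) tuples."""
    imbalance = {}
    for arm in arms:
        total = 0
        for factor, level in new_patient.items():
            for patient, assigned_arm in allocated:
                if patient.get(factor) == level and assigned_arm == arm:
                    total += 1
        imbalance[arm] = total
    preferred = min(imbalance, key=imbalance.get)       # arm with least imbalance
    if random.random() < p_assign:
        return preferred
    return random.choice([a for a in arms if a != preferred])

# Hypothetical usage
allocated = [({"sex": "F", "site": 1}, "A"), ({"sex": "M", "site": 2}, "B")]
print(minimization_assign({"sex": "F", "site": 2}, allocated))
```

Lowering p_assign injects more randomness into the allocation, which is exactly the trade-off the simulations described above are intended to quantify.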
A critical look at prospective surveillance using a scan statistic.
Correa, Thais R; Assunção, Renato M; Costa, Marcelo A
2015-03-30
The scan statistic is a very popular surveillance technique for purely spatial, purely temporal, and spatial-temporal disease data. It was extended to the prospective surveillance case, and it has been applied quite extensively in this situation. When the usual signal rules, such as those implemented in SaTScan(TM) (Boston, MA, USA) software, are used, we show that the scan statistic method is not appropriate for the prospective case. The reason is that it does not adjust properly for the sequential and repeated tests carried out during the surveillance. We demonstrate that the nominal significance level α is not meaningful and there is no relationship between α and the recurrence interval or the average run length (ARL). In some cases, the ARL may be equal to ∞, which makes the method ineffective. This lack of control of the type-I error probability and of the ARL leads us to strongly oppose the use of the scan statistic with the usual signal rules in the prospective context. Copyright © 2014 John Wiley & Sons, Ltd.
Reliability of programs specified with equational specifications
NASA Astrophysics Data System (ADS)
Nikolik, Borislav
Ultrareliability is desirable (and sometimes a demand of regulatory authorities) for safety-critical applications, such as commercial flight-control programs, medical applications, nuclear reactor control programs, etc. A method is proposed, called the Term Redundancy Method (TRM), for obtaining ultrareliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of testcases for estimating ultrareliability. They assume availability of an accurate program-usage distribution prior to testing, and they assume the availability of a test oracle. It is shown how to obtain ultrareliable programs (probability of failure near zero) with a practical number of testcases, without accurate usage distribution, and without a test oracle. TRM applies to the class of decision Abstract Data Type (ADT) programs specified with unconditional equational specifications. TRM is restricted to programs that do not exceed certain efficiency constraints in generating testcases. The effectiveness of TRM in failure detection and recovery is demonstrated on formulas from the aircraft collision avoidance system TCAS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Beibei; Zhang, Xiaojia; Lin, Douglas N. C.
2015-01-01
Nearly 15%-20% of solar type stars contain one or more gas giant planets. According to the core-accretion scenario, the acquisition of their gaseous envelope must be preceded by the formation of super-critical cores with masses 10 times or larger than that of the Earth. It is natural to link the formation probability of gas giant planets with the supply of gases and solids in their natal disks. However, a much richer population of super Earths suggests that (1) there is no shortage of planetary building block material, (2) a gas giant's growth barrier is probably associated with whether it can merge into super-critical cores, and (3) super Earths are probably failed cores that did not attain sufficient mass to initiate efficient accretion of gas before it is severely depleted. Here we construct a model based on the hypothesis that protoplanetary embryos migrated extensively before they were assembled into bona fide planets. We construct a Hermite-Embryo code based on a unified viscous-irradiation disk model and a prescription for the embryo-disk tidal interaction. This code is used to simulate the convergent migration of embryos, and their close encounters and coagulation. Around the progenitors of solar-type stars, the progenitor super-critical-mass cores of gas giant planets primarily form in protostellar disks with relatively high (≳ 10⁻⁷ M_☉ yr⁻¹) mass accretion rates, whereas systems of super Earths (failed cores) are more likely to emerge out of natal disks with modest mass accretion rates, due to the mean motion resonance barrier and retention efficiency.
MO-FG-CAMPUS-TeP2-04: Optimizing for a Specified Target Coverage Probability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, A
2016-06-15
Purpose: The purpose of this work is to develop a method for inverse planning of radiation therapy margins. When using this method the user specifies a desired target coverage probability and the system optimizes to meet the demand without any explicit specification of margins to handle setup uncertainty. Methods: The method determines which voxels to include in an optimization function promoting target coverage in order to achieve a specified target coverage probability. Voxels are selected in a way that retains the correlation between them: The target is displaced according to the setup errors and the voxels to include are selected as the union of the displaced target regions under the x% best scenarios according to some quality measure. The quality measure could depend on the dose to the considered structure alone or could depend on the dose to multiple structures in order to take into account correlation between structures. Results: A target coverage function was applied to the CTV of a prostate case with prescription 78 Gy and compared to conventional planning using a DVH function on the PTV. Planning was performed to achieve 90% probability of CTV coverage. The plan optimized using the coverage probability function had P(D98 > 77.95 Gy) = 0.97 for the CTV. The PTV plan using a constraint on minimum DVH 78 Gy at 90% had P(D98 > 77.95) = 0.44 for the CTV. To match the coverage probability optimization, the DVH volume parameter had to be increased to 97%, which resulted in 0.5 Gy higher average dose to the rectum. Conclusion: Optimizing a target coverage probability is an easily used method to find a margin that achieves the desired coverage probability. It can lead to reduced OAR doses at the same coverage probability compared to planning with margins and DVH functions.
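A one-dimensional toy version of the voxel-selection step described above might look as follows; this is a simplification, and the geometry, dose, quality measure and setup-error model are all assumptions. The target is displaced under sampled setup errors, scenarios are ranked by a quality measure, and the union of the displaced targets under the best 90% of scenarios defines the voxels fed to the coverage-promoting objective.

```python
# Conceptual 1-D sketch: select voxels for the coverage objective as the union
# of displaced target regions under the best x% of setup-error scenarios.
import numpy as np

n_vox = 100
target = np.zeros(n_vox, dtype=bool)
target[40:60] = True                                   # nominal CTV voxels

rng = np.random.default_rng(2)
shifts = rng.normal(0.0, 3.0, size=200).round().astype(int)   # setup errors (voxels)

def displaced(mask, s):
    return np.roll(mask, s)                            # rigid shift of the target

def quality(mask, dose):
    return dose[mask].min()                            # e.g. near-minimum target dose

dose = np.ones(n_vox)
dose[45:55] += 0.1                                     # toy dose distribution
scores = np.array([quality(displaced(target, s), dose) for s in shifts])

coverage_prob = 0.90                                   # requested coverage probability
best = np.argsort(scores)[::-1][: int(coverage_prob * len(shifts))]
voxels_to_cover = np.any([displaced(target, s) for s in shifts[best]], axis=0)
print(voxels_to_cover.sum(), "voxels included in the coverage objective")
```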