The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
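The exponential decline in Type II error can be made concrete with a short numerical sketch. The one-sided z-test below is an illustrative assumption, not the general linear model setting of the paper:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def type2_error(n, delta=0.5, sigma=1.0, z_alpha=1.645):
    """Type II error of a one-sided z-test of H0: mu = 0 vs. mu = delta,
    at alpha = 0.05, with n observations of known sigma."""
    return phi(z_alpha - delta * sqrt(n) / sigma)

for n in (10, 20, 40, 80):
    print(n, type2_error(n))
```

In this configuration β falls below 0.01 by n = 80; the decline accelerates with n, which is the paper's point that modest additional sample size can buy dramatic reductions in Type II error.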
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.
2015-01-01
Purpose Despite a robust clinical trial enterprise and encouraging phase II results, the vast minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
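The abstract's point about prior probability and predictive value follows directly from Bayes' rule. A minimal sketch, using the reported median α = 0.05 and β = 0.1 but hypothetical prior approval probabilities:

```python
def phase2_ppv(prior, alpha=0.05, power=0.90):
    """Positive predictive value of a 'positive' phase II trial:
    P(drug truly effective | trial rejects H0), by Bayes' rule.
    power = 1 - beta; alpha is the type I error rate."""
    return prior * power / (prior * power + (1 - prior) * alpha)

# Hypothetical priors, loosely motivated by the range of approval rates above
for prior in (0.01, 0.10, 0.35):
    print(prior, round(phase2_ppv(prior), 3))
```

With identical statistical designs, a 1% prior yields a PPV near 15% while a 35% prior yields a PPV above 90%, which is the qualitative spread the authors report across GI cancer sites.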
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1986-01-01
Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
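As a toy illustration (not the paper's information-theoretic analysis), consider the naive scheme that succeeds only if at least one replica passes through the pore with no indels at all. Even this crude model shows the single-read success probability decaying exponentially in sequence length; unlike the paper's alignment-based reconstruction, whose replica requirement is only polylogarithmic in length, the naive scheme needs rapidly growing numbers of replicas:

```python
from math import log, ceil

def p_error_free(L, q):
    """P(a single read of length L has no indels), per-base indel rate q."""
    return (1 - q) ** L

def replicas_for(L, q, target=0.95):
    """Toy calculation: replicas needed so that at least one read is
    error-free with probability >= target. This is NOT the paper's
    reconstruction bound, just an upper-bounding naive scheme."""
    p = p_error_free(L, q)
    return ceil(log(1 - target) / log(1 - p))

print(p_error_free(1000, 0.001))   # decays exponentially in L
print(replicas_for(100, 0.001), replicas_for(2000, 0.001))
```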
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
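The exact formulas live in the rPowerSampleSize package; a crude Monte Carlo cross-check of r-power for a single-step Bonferroni procedure can be sketched as follows (the effect size, m, and m1 values are hypothetical):

```python
import random
from math import erf, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def r_power_mc(m=4, m1=3, effect=3.0, r=2, alpha=0.05, sims=20000, seed=1):
    """Monte Carlo r-power: P(reject >= r of the m1 false nulls) under a
    single-step Bonferroni procedure on m independent one-sided z-tests,
    where each false null has test statistic z ~ N(effect, 1)."""
    random.seed(seed)
    crit = alpha / m  # Bonferroni per-test significance level
    hits = 0
    for _ in range(sims):
        rejected_false = 0
        for _ in range(m1):
            z = random.gauss(effect, 1.0)
            if 1.0 - norm_cdf(z) < crit:  # one-sided p-value below crit
                rejected_false += 1
        if rejected_false >= r:
            hits += 1
    return hits / sims

print(r_power_mc())
```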
Objective Analysis of Oceanic Data for Coast Guard Trajectory Models Phase II
1997-12-01
as outliers depends on the desired probability of false alarm, Pfa, which is the probability of marking a valid point as an outlier. Table 2-2… The estimator is constructed to minimize the mean-squared prediction error of the grid point estimate under the constraint that the estimate is unbiased. The prediction error is e = Z₁(s) − Σᵢ [α₁ᵢZ₁(sᵢ) + α₂ᵢZ₂(sᵢ)] (2.44), subject to the unbiasedness constraints Σᵢ α₁ᵢ = 1 (2.45) and Σᵢ α₂ᵢ = 0 (2.46).
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and rejection of the null hypothesis ( H(0) ). The measures (for example probability of trial early termination, expected sample size, etc.) of the design properties under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with predictive probability of trial success outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
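The predictive-probability machinery behind such designs reduces, in the simplest case, to a beta-binomial computation. A generic sketch assuming a Beta(a, b) prior on the response rate (not the paper's specific boundaries or error-rate calibration):

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    """log of the Beta function, via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def predictive_success(x, n1, n2, s_total, a=1.0, b=1.0):
    """P(total responses >= s_total at the end of the trial), given x
    responses among n1 stage-I patients, n2 stage-II patients to come,
    and a Beta(a, b) prior: the beta-binomial predictive distribution."""
    a1, b1 = a + x, b + n1 - x  # posterior parameters after stage I
    prob = 0.0
    for y in range(max(0, s_total - x), n2 + 1):
        prob += comb(n2, y) * exp(log_beta(a1 + y, b1 + n2 - y)
                                  - log_beta(a1, b1))
    return prob

print(predictive_success(8, 10, 10, 15))  # promising stage I
print(predictive_success(2, 10, 10, 15))  # discouraging stage I
```

A design would terminate early when this predictive probability crosses a lower or upper boundary; here the boundaries themselves are left unspecified.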
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw, and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, through-wall flaw position, and flaw inspection were among the sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation, and added error in estimating irradiated properties caused by the trend curve correlation error.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, M. P.; Lawler, J. E.; Sneden, C.
2013-10-01
Atomic transition probability measurements for 364 lines of Ti II in the UV through near-IR are reported. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a new echelle spectrometer are combined with published radiative lifetimes to determine these transition probabilities. The new results are in generally good agreement with previously reported FTS measurements. Use of the new echelle spectrometer, independent radiometric calibration methods, and independent data analysis routines enables a reduction of systematic errors and overall improvement in transition probability accuracy over previous measurements. The new Ti II data are applied to high-resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to derive new, more accurate Ti abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. The Ti abundances derived using Ti II for these two stars match those derived using Ti I and support the relative Ti/Fe abundance ratio versus metallicity seen in previous studies.
A robust algorithm for automated target recognition using precomputed radar cross sections
NASA Astrophysics Data System (ADS)
Ehrman, Lisa M.; Lanterman, Aaron D.
2004-09-01
Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
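Stein's lemma says the optimal Type II error decays like exp(−n·D), where D is the relative entropy between the two hypothesized distributions. As a sketch, we use the closed-form Gaussian KL divergence as a stand-in for the Rician case derived in the paper:

```python
from math import log, exp

def kl_gauss(mu0, s0, mu1, s1):
    """Relative entropy D(N(mu0, s0^2) || N(mu1, s1^2)), closed form."""
    return log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2.0 * s1**2) - 0.5

def stein_type2_bound(n, mu0, s0, mu1, s1):
    """Stein's lemma asymptotic: optimal type II error ~ exp(-n * D)
    for fixed type I error, with n i.i.d. observations."""
    return exp(-n * kl_gauss(mu0, s0, mu1, s1))

for n in (10, 20, 40):
    print(n, stein_type2_bound(n, 1.0, 1.0, 0.0, 1.0))
```

The exponential decay in n is exactly why a single relative-entropy evaluation can replace a large batch of Monte Carlo runs when only an error-rate bound is needed.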
Sample-size needs for forestry herbicide trials
S.M. Zedaker; T.G. Gregoire; James H. Miller
1994-01-01
Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, type I and II error probabilities, and the coefficients of...
Multimedia data from two probability-based exposure studies were investigated in terms of how missing data and measurement-error imprecision affected estimation of population parameters and associations. Missing data resulted mainly from individuals' refusing to participate in c...
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
Experimental transition probabilities for Mn II spectral lines
NASA Astrophysics Data System (ADS)
Manrique, J.; Aguilera, J. A.; Aragón, C.
2018-06-01
Transition probabilities for 46 spectral lines of Mn II with wavelengths in the range 2000-3500 Å have been measured by CSigma laser-induced breakdown spectroscopy (Cσ-LIBS). For 28 of the lines, experimental data had not been reported previously. The Cσ-LIBS method, based on the construction of generalized curves of growth called Cσ graphs, avoids the error due to self-absorption. The samples used to generate the laser-induced plasmas are fused glass disks prepared from pure MnO. The Mn concentrations in the samples and the lines included in the study are selected to ensure the validity of the model of homogeneous plasma used. The results are compared to experimental and theoretical values available in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, M. P.; Lawler, J. E.; Den Hartog, E. A.
2014-10-01
New experimental absolute atomic transition probabilities are reported for 203 lines of V II. Branching fractions are measured from spectra recorded using a Fourier transform spectrometer and an echelle spectrometer. The branching fractions are normalized with radiative lifetime measurements to determine the new transition probabilities. Generally good agreement is found between this work and previously reported V II transition probabilities. Two spectrometers, independent radiometric calibration methods, and independent data analysis routines enable a reduction in systematic uncertainties, in particular those due to optical depth errors. In addition, new hyperfine structure constants are measured for selected levels by least squares fitting line profiles in the FTS spectra. The new V II data are applied to high resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to determine new, more accurate V abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. Very good agreement is found between our new solar photospheric V abundance, log ε(V) = 3.95 from 15 V II lines, and the solar-system meteoritic value. In HD 84937, we derive [V/H] = –2.08 from 68 lines, leading to a value of [V/Fe] = 0.24.
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent to those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
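A minimal version of the optimal-α idea can be sketched by grid search for a one-sided z-test with equal weights on the two error types (the test family and the weighting are illustrative assumptions, not the authors' procedure):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_phi(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (sufficient for a sketch)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def optimal_alpha(delta, n, sigma=1.0, w=0.5, grid=2000):
    """Grid-search the alpha minimizing w*alpha + (1-w)*beta for a
    one-sided z-test of H0: mu = 0 vs. mu = delta."""
    best_alpha, best_cost = None, float("inf")
    for i in range(1, grid):
        alpha = i / grid
        beta = phi(inv_phi(1.0 - alpha) - delta * sqrt(n) / sigma)
        cost = w * alpha + (1.0 - w) * beta
        if cost < best_cost:
            best_alpha, best_cost = alpha, cost
    return best_alpha

print(optimal_alpha(0.5, 10), optimal_alpha(0.5, 100))
```

Note how the optimal α is far above 0.05 when power is poor (small n) and far below it when power is abundant, which is the core argument against a fixed 0.05 threshold.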
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the north eastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems, and at abstracting from data sets the information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production, and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art Non Linear Program (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include Bayes risk, the chi-squared value, the probability of Type I error (alpha), the probability of Type II error (beta), and the noncentrality parameter delta^2.
Moreover, the number of years required to detect a climate change resulting in a given bushel per acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.
Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A
2018-01-01
Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
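For the outer RS code, the word error probability under independent symbol errors is a binomial tail. A sketch assuming the classic RS(255, 223) configuration correcting t = 16 symbol errors, commonly paired with a Viterbi-decoded convolutional inner code (the specific parameters are an assumption, not taken from the paper):

```python
from math import comb

def rs_word_error(p_sym, n=255, t=16):
    """Word error probability of an RS code over n symbols that corrects
    up to t symbol errors, assuming independent symbol errors at rate
    p_sym (e.g. the error rate out of the inner Viterbi decoder)."""
    return sum(comb(n, e) * p_sym**e * (1.0 - p_sym)**(n - e)
               for e in range(t + 1, n + 1))

print(rs_word_error(0.01), rs_word_error(0.05))
```

The steep dependence on the input symbol error rate is what makes the concatenated system's performance so sensitive to inner-decoder degradations such as a noisy carrier reference.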
Low power and type II errors in recent ophthalmology research.
Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P
2016-10-01
To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out calculations. Finally, the proportion of articles with type II errors was calculated. β = 0.3 was set as the largest acceptable value for the probability of type II errors. In total, 280 articles were screened. Final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5 for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small difference between means varies across journals. 
It is, therefore, worthwhile for authors to mention the minimum clinically important difference for individual studies. Journals may also consider publishing statistical guidelines for authors. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as robust studies and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time, and researchers and journals are paying closer attention to the statistical methodologies that studies incorporate. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
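The power calculations described can be approximated without special software; below is a normal-approximation sketch for a two-sided unpaired t test at α = 0.05 (a simplification of the noncentral-t computation the authors performed, adequate at the moderate-to-large sample sizes typical of published trials):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power_unpaired(n_per_group, mean_diff, sd):
    """Normal-approximation power of a two-sided unpaired t test at
    alpha = 0.05, equal group sizes, common standard deviation sd."""
    z_crit = 1.959964  # Phi^{-1}(0.975)
    ncp = mean_diff / (sd * sqrt(2.0 / n_per_group))  # noncentrality
    return 1.0 - phi(z_crit - ncp) + phi(-z_crit - ncp)

# e.g. 50 per group, a half-SD difference between means
print(approx_power_unpaired(50, 0.5, 1.0))
```

The Type II error probability is simply 1 minus this power, so a study with power 0.5 for a 20% difference (as many in the survey had) carries β = 0.5 for that effect size.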
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
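The flavor of the problem is easy to reproduce: with only a handful of observed decoding errors, an exact one-sided binomial (Clopper-Pearson-style) bound is the natural replacement for normal-theory intervals. A sketch, not the paper's specific construction:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def upper_bound(k, n, conf=0.95):
    """One-sided upper confidence bound on the true error probability
    after observing k errors in n trials: the p solving
    P(X <= k; n, p) = 1 - conf, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if binom_cdf(k, n, mid) > 1.0 - conf:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(upper_bound(0, 1000), upper_bound(2, 1000))
```

With zero errors in 1000 trials the 95% upper bound is about 3/n (the "rule of three"); just two observed errors roughly doubles the bound, illustrating the outsized significance of a few errors in a large simulation.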
NASA Astrophysics Data System (ADS)
Fantola Lazzarini, Anna L.; Lazzarini, Ennio
The o-Ps quenching reactions promoted in aqueous solutions by the following six cyanocomplexes were investigated: [Fe(CN)₆]⁴⁻, [Co(CN)₆]³⁻, [Zn(CN)₄]²⁻, [Cd(CN)₆]²⁻, [Fe(CN)₆]³⁻, and [Ni(CN)₄]²⁻. The first four reactions probably consist in o-Ps addition across the CN bond; their rate constants at room temperature, Tr, are ⩽ (0.04 ± 0.02) × 10⁹ M⁻¹ s⁻¹, i.e. almost at the limit of experimental error. The fifth reaction is an o-Ps oxidation; its rate constant at Tr is (20.3 ± 0.4) × 10⁹ M⁻¹ s⁻¹. The [Ni(CN)₄]²⁻ k value at Tr is (0.27 ± 0.01) × 10⁹ M⁻¹ s⁻¹, i.e. 100 times less than the rate constant of o-Ps oxidation but 10 times larger than those of o-Ps addition across the CN bond. The [Ni(CN)₄]²⁻ reaction probably results in formation of the positronido complex [Ni(CN)₄Ps]²⁻. However, it is worth noting that the existence of such a complex is only indirectly deduced: it arises from comparison of the [Ni(CN)₄]²⁻ rate constant with those of the Fe(II), Zn(II), Cd(II), and Co(III) cyanocomplexes, which, like the Ni(II) cyanocomplex, do not promote o-Ps oxidation or spin exchange reactions.
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
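The hot-spot Type II error tradeoff can be illustrated with a simple idealization: if a hot spot covers a fraction f of the plot and sample points are placed uniformly at random (an assumption; the study used a variably spaced grid), the chance of missing it entirely is (1 − f)^n:

```python
import random

def miss_probability(n_samples, hotspot_frac=0.05, sims=10000, seed=7):
    """Monte Carlo estimate of P(all n sample points miss a hot spot
    covering hotspot_frac of the plot), under uniform random sampling.
    The closed form is (1 - hotspot_frac) ** n_samples."""
    random.seed(seed)
    misses = 0
    for _ in range(sims):
        if all(random.random() > hotspot_frac for _ in range(n_samples)):
            misses += 1
    return misses / sims

for n in (5, 10, 20, 40):
    print(n, round(miss_probability(n), 3), round((1 - 0.05) ** n, 3))
```

More individual samples drive this Type II error down, whereas compositing (averaging several cores into one analysis) controls overestimation of the mean at the cost of diluting, and thus more easily missing, a localized hot spot.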
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
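The basic regression calibration idea behind these methods can be shown in a one-covariate toy simulation (classical ME, no interactions; all parameter values are illustrative, and this is not the paper's NBRC/LRC interaction estimator):

```python
import random
from statistics import mean

# Classical measurement error W = X + U attenuates the slope of Y on W.
# Regression calibration replaces W by a calibrated predictor, here via
# the slope of X on W estimated as in an internal calibration sub-study.
def slope(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

rng = random.Random(5)
n = 20_000
x = [rng.gauss(0, 1) for _ in range(n)]
w = [xi + rng.gauss(0, 1) for xi in x]           # classical error, variance 1
y = [2.0 * xi + rng.gauss(0, 0.5) for xi in x]   # true slope = 2

naive = slope(w, y)      # attenuated toward 2 * 1/(1+1) = 1
lam = slope(w, x)        # calibration slope, ~0.5 in this setup
corrected = naive / lam  # RC-corrected slope, ~2
```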
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Medical research literature, until recently, exhibited a substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant and non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Built on the same theory, the two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have led to the increasing application of power calculations in medical research, and thereby to reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
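A minimal power calculation of the kind the abstract describes, for a one-sided z-test with known variance (effect size in standard-deviation units; a textbook sketch, not the paper's worked example):

```python
from math import sqrt
from statistics import NormalDist

# Neyman-Pearson style planning: power = 1 - beta (one minus the type II
# error probability) for a one-sided z-test at significance level alpha.
def power_one_sided_z(effect_size, n, alpha=0.05):
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    # Under H1 the test statistic is shifted by effect_size * sqrt(n).
    return 1 - NormalDist().cdf(z_alpha - effect_size * sqrt(n))

# Power rises with sample size for a fixed effect size.
for n in (10, 30, 60):
    print(n, round(power_one_sided_z(0.5, n), 3))
```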
Neyman-Pearson classification algorithms and NP receiver operating characteristics
Tong, Xin; Feng, Yang; Li, Jingyi Jessica
2018-01-01
In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442
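The high-probability type I error control at the heart of the NP umbrella algorithm rests on an order-statistic threshold rule, sketched below. This is a simplified reading of the published rule (δ denotes the tolerated probability that the type I error exceeds α), not a full reimplementation of the nproc package:

```python
from math import comb

# If the classification threshold is the k-th smallest of n held-out
# class-0 scores, the chance that the resulting type I error exceeds
# alpha is bounded by a binomial tail:
#   P(type I error > alpha) <= sum_{j=k}^{n} C(n,j) (1-alpha)^j alpha^(n-j).
def violation_prob(k, n, alpha):
    return sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
               for j in range(k, n + 1))

def choose_order_statistic(n, alpha=0.05, delta=0.05):
    # Smallest k whose violation probability is at most delta.
    for k in range(1, n + 1):
        if violation_prob(k, n, alpha) <= delta:
            return k
    return None  # n is too small to meet the (alpha, delta) target
```

Note that for small n no order statistic qualifies, which mirrors the abstract's warning that naively capping the empirical type I error at α does not control the population type I error.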
Array coding for large data memories
NASA Technical Reports Server (NTRS)
Tranter, W. H.
1982-01-01
It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words with M symbols in each word. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining the probability of undetected error is derived. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
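A hedged sketch of why undetected errors are so rare in array codes. This uses the standard leading-order result for an N × M array with single parity checks on every row and column, which is an assumption for illustration, not necessarily the paper's exact code or formula:

```python
from math import comb

# An error pattern escapes row-and-column parity checks only if every
# row and every column contains an even number of symbol errors.  The
# dominant undetected pattern is a "rectangle" of four errors, giving
#   P_ud ~ C(N,2) * C(M,2) * p^4 * (1-p)^(N*M - 4)
# for symbol error probability p.
def undetected_error_approx(n_rows, m_cols, p):
    patterns = comb(n_rows, 2) * comb(m_cols, 2)
    return patterns * p ** 4 * (1 - p) ** (n_rows * m_cols - 4)

# Even for a large array the probability is tiny when p is small.
print(undetected_error_approx(1024, 32, 1e-4))
```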
Mixed response and time-to-event endpoints for multistage single-arm phase II design.
Lai, Xin; Zee, Benny Chung-Ying
2015-06-04
The objective of phase II cancer clinical trials is to determine whether a treatment has sufficient activity to warrant further study. The efficiency of the conventional phase II trial design has been the object of considerable debate, particularly when the study regimen is characteristically cytostatic. By the time a phase II cancer trial is developed, clinical experience has accumulated regarding the time to progression (TTP) for similar classes of drugs and for standard therapy. By considering a time to event (TTE) in addition to the tumor response endpoint, a mixed-endpoint phase II design may increase the efficiency of, and the ability to select, promising cytotoxic and cytostatic agents for further development. We propose a single-arm phase II trial design that extends the Zee multinomial method to fully use mixed endpoints combining tumor response and the TTE. In this design, the dependence between the probability of response and the TTE outcome is modeled through a Gaussian copula. Given the type I and type II errors and the hypothesis as defined by the response rate (RR) and median TTE, such as median TTP, the decision rules for a two-stage phase II trial design can be generated. We demonstrate through simulation that the proposed design has a smaller expected sample size and higher early stopping probability under the null hypothesis than designs based on a single response endpoint or a single TTE endpoint. The proposed design is more efficient for screening new cytotoxic or cytostatic agents and less likely to miss an effective agent than the alternative single-arm design.
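The Gaussian-copula dependence between a binary response and an exponential time to event can be simulated as below. Parameter values, the exponential TTE marginal, and the sign convention for rho are illustrative assumptions, not the paper's calibrated model:

```python
import random
from math import log, sqrt
from statistics import NormalDist

# Draw one patient's (response, TTE) pair: correlated standard normals
# are pushed through their CDFs to get dependent uniforms, which are
# then mapped to a Bernoulli(rr) response and an exponential TTE with
# the given median.
def sample_patient(rng, rr=0.3, median_tte=6.0, rho=0.5):
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + sqrt(1 - rho ** 2) * rng.gauss(0, 1)
    response = NormalDist().cdf(z1) < rr
    u = NormalDist().cdf(z2)
    tte = -median_tte / log(2) * log(1 - u)  # exponential quantile function
    return response, tte
```

Repeating such draws under null and alternative hypotheses is how two-stage decision rules like those in the abstract can be evaluated by simulation.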
Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A
2016-01-01
Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability; therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
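The difference between hard ML decisions and soft MAP output can be seen on the smallest possible example, bitwise MAP decoding of a 3-bit repetition code over a binary symmetric channel (a toy illustration, not the chapter's general algorithm):

```python
# Posterior probability that the transmitted bit is 1, given the three
# received copies and crossover probability p, with equiprobable bits.
# The posterior itself is the "soft information" MLD throws away.
def map_bit_posterior(received, p=0.1):
    like1 = 1.0
    like0 = 1.0
    for r in received:
        like1 *= (1 - p) if r == 1 else p
        like0 *= (1 - p) if r == 0 else p
    return like1 / (like0 + like1)

post = map_bit_posterior([1, 1, 0], p=0.1)  # two of three copies agree on 1
# post is 0.9: a confident but not certain decision for bit 1.
```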
Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies
Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.
2017-01-01
Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
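The accuracy-versus-precision trade-off in the abstract can be sketched by simulating the ratio estimator of dam survival. Survival values are illustrative, and detection is assumed perfect here for brevity, so this understates the variance inflation the paper attributes to mark-recapture detection models:

```python
import random

# Paired release: one group released above the dam (survives dam and
# background mortality), one control group below (background only).
# Dam survival is estimated as the ratio of the two survival fractions.
def simulate(n, s_dam=0.9, s_bg=0.8, trials=4_000, seed=7):
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        above = sum(rng.random() < s_dam * s_bg for _ in range(n))
        below = sum(rng.random() < s_bg for _ in range(n))
        if below:
            estimates.append((above / n) / (below / n))
    mean_est = sum(estimates) / len(estimates)
    rmse = (sum((e - s_dam) ** 2 for e in estimates) / len(estimates)) ** 0.5
    return mean_est, rmse
```

Because the estimator divides by a noisy denominator, small release groups both inflate the RMSE and bias dam survival upward, consistent with the overestimation the authors report.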
ERIC Educational Resources Information Center
O'Connell, Ann Aileen
The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…
Cruikshank, Benjamin; Jacobs, Kurt
2017-07-21
von Neumann's classic "multiplexing" method is unique in achieving high-threshold fault-tolerant classical computation (FTCC), but has several significant barriers to implementation: (i) the extremely complex circuits required by randomized connections, (ii) the difficulty of calculating its performance in practical regimes of both code size and logical error rate, and (iii) the (perceived) need for large code sizes. Here we present numerical results indicating that the third assertion is false, and introduce a novel scheme that eliminates the two remaining problems while retaining a threshold very close to von Neumann's ideal of 1/6. We present a simple, highly ordered wiring structure that vastly reduces the circuit complexity, demonstrates that randomization is unnecessary, and provides a feasible method to calculate the performance. This in turn allows us to show that the scheme requires only moderate code sizes, vastly outperforms concatenation schemes, and under a standard error model a unitary implementation realizes universal FTCC with an accuracy threshold of p<5.5%, in which p is the error probability for 3-qubit gates. FTCC is a key component in realizing measurement-free protocols for quantum information processing. In view of this, we use our scheme to show that all-unitary quantum circuits can reproduce any measurement-based feedback process in which the asymptotic error probabilities for the measurement and feedback are (32/63)p≈0.51p and 1.51p, respectively.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
A modified varying-stage adaptive phase II/III clinical trial design.
Dong, Gaohong; Vandemeulebroecke, Marc
2016-07-01
Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. Therefore, the number of further investigational stages is determined based upon data accumulated to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched in as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. Therefore, we modify this design to consider one study endpoint only so that it may be more readily applicable in real clinical trial designs. Our simulations show that, as with the original design, this modified design controls the Type I error rate, and the design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio in the two-stage setting versus the three-stage setting have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher when the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing-approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
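The "probable value" (50 percent probability) and "maximum probable value" (99.7 percent probability) quoted above are mutually consistent under a normal error model, since the central 50% half-width is about 0.6745σ and the 99.7% half-width is about 3σ. A quick check, assuming normality of the altimeter errors:

```python
from statistics import NormalDist

# Half-widths of the central 50% and 99.7% intervals of a standard normal.
z50 = NormalDist().inv_cdf(0.75)      # ~0.6745 (the classical "probable error")
z997 = NormalDist().inv_cdf(0.99865)  # ~3.0

# From the landing-approach probable error of +/- 36 feet:
sigma = 36 / z50
print(round(z997 * sigma))  # ~160 ft, close to the reported +/- 159 feet
```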
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study was aimed to identify and assess the common errors in the reception process by applying the approach of "failure modes and effects analysis" (FMEA). In this descriptive cross-sectional study, the admission process of fertility and infertility center of Isfahan was selected for evaluation of its errors based on the team members' decision. At first, the admission process was charted through observations and interviewing employees, holding multiple panels, and using FMEA worksheet, which has been used in many researches all over the world and also in Iran. Its validity was evaluated through content and face validity, and its reliability was evaluated through reviewing and confirmation of the obtained information by the FMEA team, and eventually possible errors, causes, and three indicators of severity of effect, probability of occurrence, and probability of detection were determined and corrective actions were proposed. Data analysis was determined by the number of risk priority (RPN) which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Twenty-five errors with RPN ≥ 125 was detected through the admission process, in which six cases of error had high priority in terms of severity and occurrence probability and were identified as high-risk errors. The team-oriented method of FMEA could be useful for assessment of errors and also to reduce the occurrence probability of errors.
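The risk priority number used in this study follows the standard FMEA product rule. A minimal sketch (the 1 to 10 scoring scales are the usual FMEA convention and are an assumption here; 125 is the study's reported cut-off):

```python
# RPN = severity of effect x probability of occurrence x probability of
# detection, each scored on a 1-10 scale; errors with RPN >= 125 were
# flagged in the study for corrective action.
def risk_priority_number(severity, occurrence, detection):
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection

assert risk_priority_number(5, 5, 5) == 125  # exactly at the study's cut-off
```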
Does a better model yield a better argument? An info-gap analysis
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2017-04-01
Theories, models and computations underlie reasoned argumentation in many areas. The possibility of error in these arguments, though of low probability, may be highly significant when the argument is used in predicting the probability of rare high-consequence events. This implies that the choice of a theory, model or computational method for predicting rare high-consequence events must account for the probability of error in these components. However, error may result from lack of knowledge or surprises of various sorts, and predicting the probability of error is highly uncertain. We show that the putatively best, most innovative and sophisticated argument may not actually have the lowest probability of error. Innovative arguments may entail greater uncertainty than more standard but less sophisticated methods, creating an innovation dilemma in formulating the argument. We employ info-gap decision theory to characterize and support the resolution of this problem and present several examples.
Brezinski, M E
2018-01-01
Optical coherence tomography has become an important imaging technology in cardiology and ophthalmology, with other applications under investigation. Major advances in optical coherence tomography (OCT) imaging are likely to occur through a quantum field approach to the technology. In this paper, the first part of a series on the topic, the quantum basis of OCT first-order correlations is expressed in terms of full field quantization. Specifically, first-order correlations are treated as the linear sum of single-photon interferences along indistinguishable paths. Photons and the electromagnetic (EM) field are described in terms of quantum harmonic oscillators. While the author feels the study of quantum second-order correlations, addressed in part II, will lead to greater paradigm shifts in the field, advances from the study of quantum first-order correlations are given. In particular, ranging errors arising from vacuum fluctuations entering through the detector port, photon-counting errors, and position probability amplitude uncertainty are discussed (with remedies). In addition, the principles of quantum field theory and first-order correlations are needed for studying second-order correlations in part II.
Fast converging minimum probability of error neural network receivers for DS-CDMA communications.
Matyjas, John D; Psaromiligkos, Ioannis N; Batalama, Stella N; Medley, Michael J
2004-03-01
We consider a multilayer perceptron neural network (NN) receiver architecture for the recovery of the information bits of a direct-sequence code-division-multiple-access (DS-CDMA) user. We develop a fast converging adaptive training algorithm that minimizes the bit-error rate (BER) at the output of the receiver. The adaptive algorithm has three key features: i) it incorporates the BER, i.e., the ultimate performance evaluation measure, directly into the learning process, ii) it utilizes constraints that are derived from the properties of the optimum single-user decision boundary for additive white Gaussian noise (AWGN) multiple-access channels, and iii) it embeds importance sampling (IS) principles directly into the receiver optimization process. Simulation studies illustrate the BER performance of the proposed scheme.
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
Malone, Amelia S.; Fuchs, Lynn S.
2016-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of committing errors. Students (n = 227) completed a 9-item ordering test. A high proportion (81%) of problems were completed incorrectly. Most (65% of) errors were due to students misapplying whole number logic to fractions. Fraction-magnitude estimation skill, but not part-whole understanding, significantly predicted the probability of committing this type of error. Implications for practice are discussed. PMID:26966153
1994-03-01
levels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making type II error…critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of the samples passing each test at those specific α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would
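The Monte Carlo procedure in this excerpt, repeatedly drawing null samples, applying a test at level α, and counting the fraction that pass, can be sketched with a generic z-test stand-in (not the report's specific statistics; the replication count is reduced from 50,000 for speed):

```python
import random
from math import sqrt
from statistics import NormalDist, mean

# Empirical size of a two-sided z-test (known sigma = 1) estimated by
# simulation: the rejection fraction under H0 should be close to alpha.
def empirical_size(n=20, alpha=0.05, reps=20_000, seed=3):
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        z = mean(x) / (1 / sqrt(n))
        if abs(z) > crit:
            rejections += 1
    return rejections / reps

print(empirical_size())  # close to the nominal 0.05
```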
Zhang, Yequn; Djordjevic, Ivan B; Gao, Xin
2012-08-01
Inspired by recent demonstrations of orbital angular momentum (OAM)-based single-photon communications, we propose two quantum-channel models: (i) the multidimensional quantum-key distribution model and (ii) the quantum teleportation model. Both models employ the operator-sum representation, with Kraus operators derived from OAM eigenket transition probabilities. These models are highly important for the future development of quantum error-correction schemes to extend the transmission distance and improve data rates of OAM quantum communications. By using these models, we calculate the corresponding quantum-channel capacities in the presence of atmospheric turbulence.
Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty
NASA Astrophysics Data System (ADS)
Zhu, Yi-Chen; Au, Siu-Kui
2018-01-01
A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
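The SPAR-H quantification step can be sketched with the commonly cited adjustment formula for combining a nominal human error probability (NHEP) with a composite of performance shaping factors (PSFs). The renormalization below is the form usually given for strongly negative PSFs, and the parameter values are illustrative, not the study's task-level numbers:

```python
# SPAR-H style adjustment: scale the nominal HEP by the composite PSF
# and renormalize so the result remains a valid probability:
#   HEP = NHEP * PSF / (NHEP * (PSF - 1) + 1).
def spar_h_hep(nhep, psf_composite):
    return nhep * psf_composite / (nhep * (psf_composite - 1) + 1)

# With neutral PSFs (composite = 1) the nominal value is unchanged;
# strongly negative PSFs raise the HEP but never past 1.
print(spar_h_hep(0.01, 1))
print(spar_h_hep(0.01, 20))
```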
Probability of undetected error after decoding for a concatenated coding scheme
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the NASA telecommand system, is analyzed.
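The undetected-error probability that such analyses derive depends on the code's weight distribution: an error pattern goes undetected exactly when it equals a nonzero codeword. A minimal sketch for a binary symmetric channel, assuming the weight distribution is known (the (3,1) repetition code below is a toy stand-in, not the paper's concatenated scheme):

```python
def undetected_error_prob(weights, n, p):
    """P(undetected error) on a BSC with crossover probability p for a
    linear block code of length n, given the nonzero-codeword weight
    distribution {weight: count}: sum of A_w * p^w * (1-p)^(n-w)."""
    return sum(a * p**w * (1 - p)**(n - w) for w, a in weights.items())

# (3,1) repetition code: a single nonzero codeword of weight 3,
# so only a triple bit flip escapes detection.
p_ud = undetected_error_prob({3: 1}, n=3, p=0.01)  # = 1e-6
```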
McClintock, Brett T.; Bailey, Larissa L.; Pollock, Kenneth H.; Simons, Theodore R.
2010-01-01
The recent surge in the development and application of species occurrence models has been associated with an acknowledgment among ecologists that species are detected imperfectly due to observation error. Standard models now allow unbiased estimation of occupancy probability when false negative detections occur, but this is conditional on no false positive detections and sufficient incorporation of explanatory variables for the false negative detection process. These assumptions are likely reasonable in many circumstances, but there is mounting evidence that false positive errors and detection probability heterogeneity may be much more prevalent in studies relying on auditory cues for species detection (e.g., songbird or calling amphibian surveys). We used field survey data from a simulated calling anuran system of known occupancy state to investigate the biases induced by these errors in dynamic models of species occurrence. Despite the participation of expert observers in simplified field conditions, both false positive errors and site detection probability heterogeneity were extensive for most species in the survey. We found that even low levels of false positive errors, constituting as little as 1% of all detections, can cause severe overestimation of site occupancy, colonization, and local extinction probabilities. Further, unmodeled detection probability heterogeneity induced substantial underestimation of occupancy and overestimation of colonization and local extinction probabilities. Completely spurious relationships between species occurrence and explanatory variables were also found. Such misleading inferences would likely have deleterious implications for conservation and management programs. We contend that all forms of observation error, including false positive errors and heterogeneous detection probabilities, must be incorporated into the estimation framework to facilitate reliable inferences about occupancy and its associated vital rate parameters.
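The severity of the false-positive bias reported above can be illustrated with a back-of-the-envelope calculation (the parameter values are hypothetical, not from the study): even a 1% per-survey false positive rate inflates the fraction of sites registering at least one detection well above true occupancy.

```python
def naive_occupancy(psi, p, fp, k):
    """Expected fraction of sites with >= 1 detection over k surveys
    when true detection probability is p and false positives occur at
    rate fp per survey; a naive analysis reads this as occupancy."""
    det_occ = 1 - ((1 - p) * (1 - fp)) ** k      # occupied sites
    det_unocc = 1 - (1 - fp) ** k                # unoccupied sites
    return psi * det_occ + (1 - psi) * det_unocc

# True occupancy 0.5, detection 0.5, 1% false positives, 10 surveys.
est = naive_occupancy(psi=0.5, p=0.5, fp=0.01, k=10)  # ~0.547 vs true 0.5
```

With more surveys the unoccupied-site term keeps growing, which is why the overestimation worsens in dynamic, multi-season designs.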
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of particular importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, that cannot be overcome by the data channel.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
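A minimal numerical sketch of the IPTW-plus-bootstrap idea follows. It uses a simple weighted difference in means rather than the weighted Cox model of the study, and the true propensity score rather than an estimated one (in practice the score is estimated, e.g. by logistic regression, and each bootstrap replicate should refit it):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                    # confounder
ps = 1 / (1 + np.exp(-x))                 # true propensity score
t = rng.binomial(1, ps)                   # treatment assignment
y = 1.0 * t + x + rng.normal(size=n)      # outcome, true effect = 1

def iptw_ate(t, y, ps):
    """Inverse-probability-of-treatment-weighted difference in means."""
    w = t / ps + (1 - t) / (1 - ps)
    return (np.average(y[t == 1], weights=w[t == 1])
            - np.average(y[t == 0], weights=w[t == 0]))

ate = iptw_ate(t, y, ps)

# Bootstrap the whole procedure so the SE reflects the weighting step.
boots = []
for _ in range(200):
    i = rng.integers(0, n, n)
    boots.append(iptw_ate(t[i], y[i], ps[i]))
se = np.std(boots, ddof=1)
```

The naive unweighted difference in means would be biased upward here because the confounder drives both treatment and outcome; the weighted estimate recovers the true effect of 1 within sampling error.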
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
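The calibration logic can be sketched for the simplest case of Poisson counts with a known background (the values are illustrative; real detection procedures are more involved): fix the threshold from the Type I error, then scan for the lowest source intensity whose detection probability reaches the required power.

```python
from math import exp

def pois_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu), by direct summation."""
    if k < 0:
        return 0.0
    term, total = exp(-mu), exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def detection_threshold(b, alpha):
    """Smallest count c with P(count >= c | background b) <= alpha."""
    c = 0
    while 1 - pois_cdf(c - 1, b) > alpha:
        c += 1
    return c

def upper_limit(b, alpha, beta, step=0.01):
    """Minimum source intensity s detected with probability >= 1 - beta
    at the alpha-level threshold: the Type II calibrated upper limit."""
    c = detection_threshold(b, alpha)
    s = 0.0
    while 1 - pois_cdf(c - 1, b + s) < 1 - beta:
        s += step
    return s

ul = upper_limit(b=3.0, alpha=0.05, beta=0.5)  # ~3.7 counts above background
```

Note that the result depends only on the background and the chosen error rates, never on any observed count, which is exactly the sense in which an upper limit characterizes the procedure rather than a particular source.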
Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
2016-12-01
The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
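The distribution-free conservatism of such approximate chance constraints can be illustrated with the one-sided Chebyshev (Cantelli) back-off, a standard convex tightening valid for arbitrary zero-mean forecast errors (the voltage numbers below are hypothetical, and the paper's actual approximations may differ):

```python
from math import sqrt

def robust_margin(sigma, eps):
    """Cantelli (one-sided Chebyshev) back-off: P(v + e <= v_max) >= 1-eps
    holds for *any* zero-mean error e with std sigma whenever
    v <= v_max - sigma * sqrt((1 - eps) / eps)."""
    return sigma * sqrt((1 - eps) / eps)

def gaussian_margin(sigma, eps=0.05, z95=1.6449):
    # Gaussian back-off for comparison (z-value for eps = 0.05 hard-coded).
    return sigma * z95

v_max = 1.05    # per-unit voltage limit
sigma = 0.01    # std of the forecast-error-driven voltage deviation
tightened = v_max - robust_margin(sigma, eps=0.05)  # ~1.006 p.u.
```

The distribution-free margin (about 4.4 sigma at eps = 0.05) is much larger than the Gaussian one (about 1.6 sigma), which is the price of robustness to arbitrary error distributions.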
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-15
....gov/acs/www/ or contact the Census Bureau's Social, Economic, and Housing Statistics Division at (301...) Sampling Error, which consists of the error that arises from the use of probability sampling to create the... direction; and (2) Sampling Error, which consists of the error that arises from the use of probability...
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or large sample normal test tend to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways for controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
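The inflation described above can be reproduced with a toy simulation: a known-variance z-test where a second, equal-sized sample is collected only when the first look is not significant, and the pooled data are retested at the same nominal level (the design rule is illustrative, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(1)
reps, n, z95 = 20000, 50, 1.96
hits = 0
for _ in range(reps):
    x1 = rng.normal(size=n)            # H0 true: mean 0, known sd 1
    z1 = x1.mean() * np.sqrt(n)
    if abs(z1) > z95:
        hits += 1                      # significant on the first look
    else:
        # Data-dependent design: extend the sample and retest at 0.05.
        x = np.concatenate([x1, rng.normal(size=n)])
        if abs(x.mean() * np.sqrt(2 * n)) > z95:
            hits += 1
rate = hits / reps                     # well above the nominal 0.05
```

Because the two test statistics are positively correlated but not identical, the realized type I error lands near 0.08 rather than 0.05, which is why the paper's explicit control of the maximum type I error probability is needed.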
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second is that of erroneously accepting the first estimate for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945), in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the combined probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.
An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity
Szymczak, W. G.; Babuška, I.
1983-03-01
A Quantum Theoretical Explanation for Probability Judgment Errors
ERIC Educational Resources Information Center
Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.
2011-01-01
A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after inner code decoding. The probability of undetected error of the proposed scheme is derived. An efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
Brezinski, ME
2018-01-01
Optical coherence tomography has become an important imaging technology in cardiology and ophthalmology, with other applications under investigation. Major advances in optical coherence tomography (OCT) imaging are likely to occur through a quantum field approach to the technology. In this paper, which is the first part in a series on the topic, the quantum basis of OCT first order correlations is expressed in terms of full field quantization. Specifically, first order correlations are treated as the linear sum of single photon interferences along indistinguishable paths. Photons and the electromagnetic (EM) field are described in terms of quantum harmonic oscillators. While the author feels the study of quantum second order correlations, addressed in part II, will lead to greater paradigm shifts in the field, advances from the study of quantum first order correlations are given. In particular, ranging errors are discussed (with remedies) arising from vacuum fluctuations through the detector port, photon counting errors, and position probability amplitude uncertainty. In addition, the principles of quantum field theory and first order correlations are needed for studying second order correlations in part II. PMID:29863177
Chen, Cong; Beckman, Robert A
2009-01-01
This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
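The link between the perceived prior probability of a truly active treatment and the value of a positive PoC result follows from Bayes' rule. A minimal sketch (the priors and operating characteristics are illustrative, echoing the GI-cancer figures reported elsewhere in this literature rather than this manuscript's efficiency score):

```python
def phase2_ppv(prior, alpha, power):
    """Probability that a positive trial reflects a truly active drug,
    given the prior probability of activity, the type I error rate
    alpha, and the power (Bayes' rule)."""
    tp = prior * power          # truly active and trial positive
    fp = (1 - prior) * alpha    # inactive but trial falsely positive
    return tp / (tp + fp)

# Same design (alpha = 0.05, power = 0.90) under different priors.
low  = phase2_ppv(prior=0.01, alpha=0.05, power=0.90)  # ~0.15
high = phase2_ppv(prior=0.35, alpha=0.05, power=0.90)  # ~0.91
```

With identical statistical designs, the positive predictive value swings from roughly 15% to roughly 91% purely as a function of the prior, which is the core argument for tuning alpha and power to the disease setting.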
The effect of timing errors in optical digital systems.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1972-01-01
The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.
Nematode Damage Functions: The Problems of Experimental and Sampling Error
Ferris, H.
1984-01-01
The development and use of pest damage functions involves measurement and experimental errors associated with cultural, environmental, and distributional factors. Damage predictions are more valuable if considered with associated probability. Collapsing population densities into a geometric series of population classes allows a pseudo-replication removal of experimental and sampling error in damage function development. Recognition of the nature of sampling error for aggregated populations allows assessment of probability associated with the population estimate. The product of the probabilities incorporated in the damage function and in the population estimate provides a basis for risk analysis of the yield loss prediction and the ensuing management decision. PMID:19295865
More on the decoder error probability for Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Cheung, K.-M.
1987-01-01
The decoder error probability for Reed-Solomon codes (more generally, linear maximum distance separable codes) is examined. McEliece and Swanson offered an upper bound on P sub E (u), the decoder error probability given that u symbol errors occur. This upper bound is slightly greater than Q, the probability that a completely random error pattern will cause decoder error. By using a combinatoric technique, the principle of inclusion and exclusion, an exact formula for P sub E (u) is derived. The P sub E (u) for the (255, 223) Reed-Solomon code used by NASA, and for the (31, 15) Reed-Solomon code (JTIDS code), are calculated using the exact formula, and the P sub E (u)'s are observed to approach the Q's of the codes rapidly as u gets larger. An upper bound for the expression is derived and shown to decrease nearly exponentially as u increases. This proves analytically that P sub E (u) indeed approaches Q as u becomes large, and some laws of large numbers come into play.
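The limiting quantity Q has a simple closed form for an MDS code: by the standard sphere-counting argument, the fraction of the q^n received words that fall within the decoding radius t of some codeword is Q = V_t / q^(n-k), where V_t is the Hamming-sphere volume. This is a sketch of that expression, not the paper's exact P sub E (u) formula:

```python
from math import comb

def q_random_error(n, k, q, t):
    """Probability that a uniformly random received word lies within
    Hamming distance t of some codeword of an (n, k) MDS code over
    GF(q): Q = V_t / q^(n-k), with V_t = sum_i C(n,i) (q-1)^i."""
    v_t = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return v_t / q ** (n - k)

# (31, 15) Reed-Solomon code over GF(32): d = 17, so t = 8.
q_val = q_random_error(31, 15, 32, 8)
```

For the JTIDS parameters this evaluates to a few parts in a million, the value that P sub E (u) approaches as u grows.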
Drought Persistence Errors in Global Climate Models
NASA Astrophysics Data System (ADS)
Moon, H.; Gudmundsson, L.; Seneviratne, S. I.
2018-04-01
The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
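The dry-to-dry transition probability used above as the persistence estimate can be computed in a few lines. This sketch uses a white-noise series, which by construction has no persistence, and defines "dry" as below the series mean rather than via a formal precipitation-anomaly index:

```python
import numpy as np

def dry_to_dry_probability(precip):
    """Fraction of dry time steps that are followed by another dry
    step, where 'dry' means below the series mean (a stand-in for a
    negative precipitation anomaly)."""
    dry = precip < precip.mean()
    # Next-step dry status, restricted to steps that are currently dry.
    return np.mean(dry[1:][dry[:-1]])

rng = np.random.default_rng(2)
white = rng.normal(size=10000)        # no persistence: expect ~0.5
p_dd = dry_to_dry_probability(white)
```

A persistent climate would push this probability above 0.5; a GCM whose value sits below the observation-based estimate exhibits exactly the underestimation the study reports.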
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR P(sub b) is approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a generator matrix randomly generated, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
U.S. Maternally Linked Birth Records May Be Biased for Hispanics and Other Population Groups
LEISS, JACK K.; GILES, DENISE; SULLIVAN, KRISTIN M.; MATHEWS, RAHEL; SENTELLE, GLENDA; TOMASHEK, KAY M.
2010-01-01
Purpose To advance understanding of linkage error in U.S. maternally linked datasets, and how the error may affect results of studies based on the linked data. Methods North Carolina birth and fetal death records for 1988-1997 were maternally linked (n=1,030,029). The maternal set probability, defined as the probability that all records assigned to the same maternal set do in fact represent events to the same woman, was used to assess differential maternal linkage error across race/ethnic groups. Results Maternal set probabilities were lower for records specifying Asian or Hispanic race/ethnicity, suggesting greater maternal linkage error. The lower probabilities for Hispanics were concentrated in women of Mexican origin who were not born in the United States. Conclusions Differential maternal linkage error may be a source of bias in studies using U.S. maternally linked datasets to make comparisons between Hispanics and other groups or among Hispanic subgroups. Methods to quantify and adjust for this potential bias are needed. PMID:20006273
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Roeder, William P.
2010-01-01
The expected peak wind speed for the day is an important element in the daily morning forecast for ground and space launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron (45 WS) must issue forecast advisories for KSC/CCAFS when they expect peak gusts for >= 25, >= 35, and >= 50 kt thresholds at any level from the surface to 300 ft. In Phase I of this task, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a cool-season (October - April) tool to help forecast the non-convective peak wind from the surface to 300 ft at KSC/CCAFS. During the warm season, these wind speeds are rarely exceeded except during convective winds or under the influence of tropical cyclones, for which other techniques are already in use. The tool used single and multiple linear regression equations to predict the peak wind from the morning sounding. The forecaster manually entered several observed sounding parameters into a Microsoft Excel graphical user interface (GUI), and then the tool displayed the forecast peak wind speed, average wind speed at the time of the peak wind, the timing of the peak wind and the probability the peak wind will meet or exceed 35, 50 and 60 kt. The 45 WS customers later dropped the requirement for >= 60 kt wind warnings. During Phase II of this task, the AMU expanded the period of record (POR) by six years to increase the number of observations used to create the forecast equations. A large number of possible predictors were evaluated from archived soundings, including inversion depth and strength, low-level wind shear, mixing height, temperature lapse rate and winds from the surface to 3000 ft. Each day in the POR was stratified in a number of ways, such as by low-level wind direction, synoptic weather pattern, precipitation and Bulk Richardson number. The most accurate Phase II equations were then selected for an independent verification. 
The Phase I and II forecast methods were compared using an independent verification data set. The two methods were compared to climatology, wind warnings and advisories issued by the 45 WS, and North American Mesoscale (NAM) model (MesoNAM) forecast winds. The performance of the Phase I and II methods were similar with respect to mean absolute error. Since the Phase I data were not stratified by precipitation, this method's peak wind forecasts had a large negative bias on days with precipitation and a small positive bias on days with no precipitation. Overall, the climatology methods performed the worst while the MesoNAM performed the best. Since the MesoNAM winds were the most accurate in the comparison, the final version of the tool was based on the MesoNAM winds. The probability the peak wind will meet or exceed the warning thresholds was based on the one standard deviation error bars from the linear regression. For example, the linear regression might forecast the most likely peak speed to be 35 kt and the error bars used to calculate that the probability of >= 25 kt = 76%, the probability of >= 35 kt = 50%, and the probability of >= 50 kt = 19%. The authors have not seen this application of linear regression error bars in any other meteorological applications. Although probability forecast tools should usually be developed with logistic regression, this technique could be easily generalized to any linear regression forecast tool to estimate the probability of exceeding any desired threshold. This could be useful for previously developed linear regression forecast tools or new forecast applications where statistical analysis software to perform logistic regression is not available. The tool was delivered in two formats - a Microsoft Excel GUI and a Tool Command Language/Tool Kit (Tcl/Tk) GUI in the Meteorological Interactive Data Display System (MIDDS).
The Microsoft Excel GUI reads a MesoNAM text file containing hourly forecasts from 0 to 84 hours, from one model run (00 or 12 UTC). The GUI then displays the peak wind speed, average wind speed, and the probability the peak wind will meet or exceed the 25-, 35- and 50-kt thresholds. The user can display the Day-1 through Day-3 peak wind forecasts, and separate forecasts are made for precipitation and non-precipitation days. The MIDDS GUI uses data from the NAM and Global Forecast System (GFS), instead of the MesoNAM. It can display Day-1 and Day-2 forecasts using NAM data, and Day-1 through Day-5 forecasts using GFS data. The timing of the peak wind is not displayed, since the independent verification showed that none of the forecast methods performed significantly better than climatology. The forecaster should use the climatological timing of the peak wind (2248 UTC) as a first guess and then adjust it based on the movement of weather features.
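The error-bar-to-probability conversion described above can be sketched by treating the regression residuals as Gaussian around the predicted peak. The 14-kt sigma below is chosen so the numbers roughly echo the example in the abstract; it is not a value from the tool itself (and the abstract's 19% for >= 50 kt suggests its fit was not exactly Gaussian).

```python
from math import erf, sqrt

def exceedance_probability(mu, sigma, threshold):
    """P(peak wind >= threshold) under a Gaussian model centered on the
    regression prediction mu, with sigma taken from the one-standard-
    deviation error bars of the fit."""
    z = (threshold - mu) / sigma
    return 0.5 * (1 - erf(z / sqrt(2)))

# Predicted peak of 35 kt with an assumed residual sigma of 14 kt.
p25 = exceedance_probability(35, 14, 25)  # ~0.76
p35 = exceedance_probability(35, 14, 35)  # 0.50
p50 = exceedance_probability(35, 14, 50)  # ~0.14
```

As the abstract notes, logistic regression is the more principled route to such probabilities; this conversion is the fallback when only a fitted linear regression and its residual spread are available.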
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia Schneider; Fuchs, Lynn S.
2015-01-01
The 3 purposes of this study were to: (a) describe fraction ordering errors among at-risk 4th-grade students; (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors; and (c) examine the effect of students' ability to explain comparing problems on the probability of…
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
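The facility-centric probability described above amounts to integrating a bivariate Gaussian density over a disk centered on the point of interest. A minimal Monte Carlo sketch under the simplifying assumption of an axis-aligned error ellipse (the operational technique integrates the full bivariate Gaussian; the numbers below are illustrative):

```python
import math
import random

def prob_within_radius(dx: float, dy: float, sx: float, sy: float,
                       radius: float, n: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo estimate of the probability that the true stroke location
    lies within `radius` of the facility. (dx, dy) is the reported stroke
    position relative to the facility; (sx, sy) are the one-sigma semi-axes
    of the location error ellipse, assumed aligned with the x/y axes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(dx, sx)
        y = rng.gauss(dy, sy)
        if x * x + y * y <= radius * radius:
            hits += 1
    return hits / n

# Example: stroke reported 3 km east of the facility, 2 km x 1 km ellipse,
# probability it was actually within 5 km of the facility.
print(prob_within_radius(3.0, 0.0, 2.0, 1.0, 5.0))
```

For a circular one-sigma error of size s centered on the facility, the result should approach the closed form 1 - exp(-r^2 / (2 s^2)), which provides a handy sanity check.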
Entanglement-enhanced Neyman-Pearson target detection using quantum illumination
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., financial records, and automated data systems; (ii) The data are free from computational errors and are internally consistent...
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
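The random coding bound discussed above, P_e <= 2^(-n * E_r(R)), is easy to evaluate for a concrete channel. A minimal sketch of Gallager's exponent for a binary symmetric channel with a uniform input distribution (an illustrative special case, not the general derivation of the paper):

```python
import math

def E0(rho: float, p: float) -> float:
    """Gallager's E0 function for a BSC with crossover probability p,
    uniform input distribution."""
    s = p ** (1.0 / (1.0 + rho)) + (1.0 - p) ** (1.0 / (1.0 + rho))
    return rho - (1.0 + rho) * math.log2(s)

def random_coding_exponent(R: float, p: float, steps: int = 1000) -> float:
    """E_r(R) = max over 0 <= rho <= 1 of [E0(rho) - rho * R], by grid search.
    The random coding bound is then P_e <= 2^(-n * E_r(R))."""
    return max(E0(k / steps, p) - (k / steps) * R for k in range(steps + 1))

# Below capacity the exponent is positive; at or above capacity it is zero.
print(random_coding_exponent(0.3, 0.05))
```

The exponent shrinks as the rate R approaches the channel capacity, reflecting the exponential dependence of error probability on block length noted in the abstract.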
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... Household Economic Statistics Division at (301) 763-3243. Under the advice of the Census Bureau, HHS... Sampling Error, which consists of the error that arises from the use of probability sampling to create the sample...
Ennis, Erin J; Foley, Joe P
2016-07-15
A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R., E-mail: koenraad.audenaert@rhul.ac.uk; Department of Physics and Astronomy, University of Ghent, S9, Krijgslaan 281, B-9000 Ghent; Mosonyi, Milán, E-mail: milan.mosonyi@gmail.com
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ₁, …, σ_r. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ₁, …, σ_r), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences min_j…
NASA Astrophysics Data System (ADS)
Miller, Jacob; Sanders, Stephen; Miyake, Akimasa
2017-12-01
While quantum speed-up in solving certain decision problems by a fault-tolerant universal quantum computer has been promised, a timely research interest includes how far one can reduce the resource requirement to demonstrate a provable advantage in quantum devices without demanding quantum error correction, which is crucial for prolonging the coherence time of qubits. We propose a model device made of locally interacting multiple qubits, designed such that simultaneous single-qubit measurements on it can output probability distributions whose average-case sampling is classically intractable, under similar assumptions as the sampling of noninteracting bosons and instantaneous quantum circuits. Notably, in contrast to these previous unitary-based realizations, our measurement-based implementation has two distinctive features. (i) Our implementation involves no adaptation of measurement bases, leading output probability distributions to be generated in constant time, independent of the system size. Thus, it could be implemented in principle without quantum error correction. (ii) Verifying the classical intractability of our sampling is done by changing the Pauli measurement bases only at certain output qubits. Our usage of random commuting quantum circuits in place of computationally universal circuits allows a unique unification of sampling and verification, so they require the same physical resource requirements in contrast to the more demanding verification protocols seen elsewhere in the literature.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
NASA Astrophysics Data System (ADS)
Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai
2016-03-01
The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. 
Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to the finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
Analysis of the impact of error detection on computer performance
NASA Technical Reports Server (NTRS)
Shin, K. C.; Lee, Y. H.
1983-01-01
Conventionally, reliability analyses either assume that a fault/error is detected immediately following its occurrence, or neglect damages caused by latent errors. Though unrealistic, this assumption was imposed in order to avoid the difficulty of determining the respective probabilities that a fault induces an error and the error is then detected in a random amount of time after its occurrence. As a remedy for this problem a model is proposed to analyze the impact of error detection on computer performance under moderate assumptions. Error latency, the time interval between occurrence and the moment of detection, is used to measure the effectiveness of a detection mechanism. This model is used to: (1) predict the probability of producing an unreliable result, and (2) estimate the loss of computation due to fault and/or error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.
1992-03-01
The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and the ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of fracture toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT_NDT values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1973-01-01
The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station plus the weighting factors for the data from the moving station are also determined.
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide, whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is [Formula: see text] versus [Formula: see text], with level [Formula: see text] and with a power [Formula: see text] at [Formula: see text], where [Formula: see text] is chosen to represent the response probability achievable with standard treatment and [Formula: see text] is chosen such that the difference [Formula: see text] represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation mainly among clinicians that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis [Formula: see text] versus [Formula: see text] is tested first. If this null hypothesis is rejected, the hypothesis [Formula: see text] versus [Formula: see text] is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen, so as to minimize the average sample number (ASN) under specific upper bounds for error levels. The optimal values for the design were found using a simulated annealing method.
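As background for the exact-binomial cut-point calculation mentioned above, the classical single-stage design (the setting the proposed sequential procedure refines, not the authors' algorithm itself) can be sketched as follows; the values of p0, p1, and the error levels are illustrative:

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def single_stage_design(p0: float, p1: float,
                        alpha: float = 0.05, beta: float = 0.20,
                        n_max: int = 200):
    """Smallest n, with response cut-point r, such that rejecting H0: p <= p0
    when X >= r responses are observed has type I error <= alpha and
    power >= 1 - beta at p = p1."""
    for n in range(1, n_max + 1):
        for r in range(n + 1):
            if binom_sf(r, n, p0) <= alpha:       # smallest r controlling alpha
                if binom_sf(r, n, p1) >= 1 - beta:
                    return n, r
                break  # larger r only lowers power; move on to the next n
    return None

print(single_stage_design(0.2, 0.4))
```

The returned pair (n, r) satisfies both error constraints by construction; the sequential design in the paper chooses its cut-points on the same exact-binomial basis while minimizing the average sample number.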
Performance of cellular frequency-hopped spread-spectrum radio networks
NASA Astrophysics Data System (ADS)
Gluck, Jeffrey W.; Geraniotis, Evaggelos
1989-10-01
Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.
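A common building block in packet-error approximations of this kind relates bit and packet error probabilities under an independent-error assumption (a simplification; the paper's contribution is precisely to account for varying interference power more accurately):

```python
from math import comb

def packet_error_prob(p_bit: float, n_bits: int, t: int = 0) -> float:
    """Packet error probability for independent bit errors, when the
    forward-error-control code corrects up to t bit errors per packet
    (t = 0 corresponds to the uncoded case)."""
    p_ok = sum(comb(n_bits, k) * p_bit**k * (1 - p_bit)**(n_bits - k)
               for k in range(t + 1))
    return 1.0 - p_ok

# Uncoded vs. a 3-error-correcting code over a 100-bit packet at p_bit = 1%.
print(packet_error_prob(0.01, 100, t=0))
print(packet_error_prob(0.01, 100, t=3))
```

Even a modest error-correcting capability t sharply reduces the packet error probability at a fixed bit error rate, which is why the coded analyses in the paper matter.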
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Partial derivatives of both lens formulae were taken, and Gaussian error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within +/- 1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within +/- 1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within +/- 1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = √[(ΔL × (-4.206))² + (ΔVK × 0.9496)² + (ΔDC × (-1.4950))²] (ΔL: error in measuring axial length; ΔVK: error in measuring anterior chamber depth; ΔDC: error in measuring corneal power), and the propagated error of the SRK-II formula as ΔP = √[(ΔL × (-2.5))² + (ΔDC × (-0.9))²]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
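The two propagated-error expressions can be evaluated directly. A minimal sketch using the sensitivity coefficients reported in the abstract (the measurement-error inputs below are illustrative):

```python
import math

def propagated_error_haigis(dL: float, dVK: float, dDC: float) -> float:
    """Propagated IOL-power error (D) for the Haigis formula, using the
    partial-derivative coefficients quoted in the abstract."""
    return math.sqrt((dL * -4.206) ** 2 + (dVK * 0.9496) ** 2 + (dDC * -1.4950) ** 2)

def propagated_error_srk2(dL: float, dDC: float) -> float:
    """Propagated IOL-power error (D) for the SRK-II formula."""
    return math.sqrt((dL * -2.5) ** 2 + (dDC * -0.9) ** 2)

# Illustrative measurement errors: 0.1 mm axial length, 0.1 mm anterior
# chamber depth, 0.1 D corneal power.
print(propagated_error_haigis(0.1, 0.1, 0.1))
print(propagated_error_srk2(0.1, 0.1))
```

With equal measurement errors in each input, the Haigis result exceeds the SRK-II result, consistent with the abstract's observation that the extra measured parameter increases the dispersion.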
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
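The claim that stricter P-value thresholds inflate Type II error rates can be made concrete with a simple one-sided z-test power calculation. A minimal stdlib sketch (the effect size and sample size are illustrative, not values from the article):

```python
import math
from statistics import NormalDist

def type2_error(alpha: float, effect: float, n: int) -> float:
    """Type II error rate (beta) of a one-sided z-test at level alpha,
    for a true standardized effect size `effect` and sample size n."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha)          # critical value of the test
    return nd.cdf(z_crit - effect * math.sqrt(n))

# Tightening alpha from 0.05 toward 0.001 sharply raises the miss rate:
for alpha in (0.05, 0.005, 0.001):
    print(f"alpha = {alpha:<6} beta = {type2_error(alpha, 0.5, 20):.3f}")
```

The monotone trade-off shown here is the core of the article's argument: every reduction in the acceptable P-value buys Type I protection at the cost of Type II errors, unless sample size or effect size grows to compensate.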
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
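The generic Wald SPRT mechanics behind the proposed test can be sketched as follows (a textbook formulation using Wald's threshold approximations; the conjunction-specific likelihood ratio derived in the paper is not reproduced here):

```python
import math

def wald_sprt(llr_stream, alpha: float = 0.01, beta: float = 0.01):
    """Wald Sequential Probability Ratio Test: accumulate per-observation
    log-likelihood ratios (H1 vs. H0) and stop at Wald's approximate
    thresholds. Returns ('H1' | 'H0' | 'continue', final log-LR)."""
    upper = math.log((1.0 - beta) / alpha)   # crossing here accepts H1
    lower = math.log(beta / (1.0 - alpha))   # crossing here accepts H0
    s = 0.0
    for llr in llr_stream:
        s += llr
        if s >= upper:
            return "H1", s
        if s <= lower:
            return "H0", s
    return "continue", s

# A stream of strongly H1-favoring observations terminates quickly:
print(wald_sprt([1.0] * 10))
```

The appeal in the conjunction-assessment setting is the same as in Wald's original use: the test keeps ingesting new tracking updates and declares a decision only once the accumulated evidence crosses a threshold set by the desired error probabilities.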
Cheng, Jin-Mei; Li, Jian; Tang, Ji-Xin; Hao, Xiao-Xia; Wang, Zhi-Peng; Sun, Tie-Cheng; Wang, Xiu-Xia; Zhang, Yan; Chen, Su-Ren; Liu, Yi-Xun
2017-08-03
Mammalian oocyte chromosomes undergo 2 meiotic divisions to generate haploid gametes. The frequency of chromosome segregation errors during meiosis I increase with age. However, little attention has been paid to the question of how aging affects sister chromatid segregation during oocyte meiosis II. More importantly, how aneuploid metaphase II (MII) oocytes from aged mice evade the spindle assembly checkpoint (SAC) mechanism to complete later meiosis II to form aneuploid embryos remains unknown. Here, we report that MII oocytes from naturally aged mice exhibited substantial errors in chromosome arrangement and configuration compared with young MII oocytes. Interestingly, these errors in aged oocytes had no impact on anaphase II onset and completion as well as 2-cell formation after parthenogenetic activation. Further study found that merotelic kinetochore attachment occurred more frequently and could stabilize the kinetochore-microtubule interaction to ensure SAC inactivation and anaphase II onset in aged MII oocytes. This orientation could persist largely during anaphase II in aged oocytes, leading to severe chromosome lagging and trailing as well as delay of anaphase II completion. Therefore, merotelic kinetochore attachment in oocyte meiosis II exacerbates age-related genetic instability and is a key source of age-dependent embryo aneuploidy and dysplasia.
Predicting forest insect flight activity: A Bayesian network approach
Pawson, Stephen M.; Marcot, Bruce G.; Woodberry, Owen G.
2017-01-01
Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build tree-augmented naïve Bayes networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. Type II model errors for the prediction of no flight activity are reduced by increasing the model's predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways. PMID:28953904
Metacontrast masking and attention do not interact.
Agaoglu, Sevda; Breitmeyer, Bruno; Ogmen, Haluk
2016-07-01
Visual masking and attention have been known to control the transfer of information from sensory memory to visual short-term memory. A natural question is whether these processes operate independently or interact. Recent evidence suggests that studies that reported interactions between masking and attention suffered from ceiling and/or floor effects. The objective of the present study was to investigate whether metacontrast masking and attention interact by using an experimental design in which saturation effects are avoided. We asked observers to report the orientation of a target bar randomly selected from a display containing either two or six bars. The mask was a ring that surrounded the target bar. Attentional load was controlled by set-size and masking strength by the stimulus onset asynchrony between the target bar and the mask ring. We investigated interactions between masking and attention by analyzing two different aspects of performance: (i) the mean absolute response errors and (ii) the distribution of signed response errors. Our results show that attention affects observers' performance without interacting with masking. Statistical modeling of response errors suggests that attention and metacontrast masking exert their effects by independently modulating the probability of "guessing" behavior. Implications of our findings for models of attention are discussed.
NASA Astrophysics Data System (ADS)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, because the distributions of model errors are neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
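The two statistics advocated here are straightforward to compute from a benchmark's unsigned errors. A minimal sketch (the error sample below is synthetic and illustrative, not from any published benchmark):

```python
import random

random.seed(0)
# Synthetic, skewed, non-zero-centered model errors (illustrative only)
errors = [random.gauss(1.0, 0.5) + random.expovariate(2.0) for _ in range(1000)]
abs_err = sorted(abs(e) for e in errors)

def prob_below(threshold):
    """Statistic (1): empirical probability that a new calculation has an
    absolute error below the chosen threshold."""
    return sum(1 for e in abs_err if e < threshold) / len(abs_err)

def error_at_confidence(level):
    """Statistic (2): maximal error amplitude expected at the chosen
    confidence level (an empirical quantile of the unsigned errors)."""
    idx = min(int(level * len(abs_err)), len(abs_err) - 1)
    return abs_err[idx]

p_below_1 = prob_below(1.0)      # chance of erring by less than 1.0 unit
q95 = error_at_confidence(0.95)  # amplitude not exceeded with 95% confidence
```

Because both statistics come from the empirical cumulative distribution function, they remain meaningful for the skewed, biased error distributions the authors describe, where a mean unsigned error would not.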
Modulation/demodulation techniques for satellite communications. Part 1: Background
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1981-01-01
Basic characteristics of digital data transmission systems described include the physical communication links, the notion of bandwidth, FCC regulations, and performance measurements such as bit rates, bit error probabilities, throughputs, and delays. The error probability performance and spectral characteristics of various modulation/demodulation techniques commonly used or proposed for use in radio and satellite communication links are summarized. Forward error correction with block or convolutional codes is also discussed along with the important coding parameter, channel cutoff rate.
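As a concrete instance of the error-probability performance such surveys tabulate, the textbook bit error probability of coherent BPSK over an additive white Gaussian noise channel can be sketched as follows (a standard formula, not a result specific to this report):

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_bit_error_prob(ebn0_db):
    """Bit error probability of coherent BPSK on an AWGN channel:
    Pb = Q(sqrt(2 * Eb/N0)), with Eb/N0 given in decibels."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_function(math.sqrt(2.0 * ebn0))

pb_0db = bpsk_bit_error_prob(0.0)    # about 7.9e-2
pb_10db = bpsk_bit_error_prob(10.0)  # error probability falls steeply with SNR
```

Forward error correction, discussed in the report, trades bandwidth for a further reduction of this error probability at a given signal-to-noise ratio.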
MRI-guided prostate focal laser ablation therapy using a mechatronic needle guidance system
NASA Astrophysics Data System (ADS)
Cepek, Jeremy; Lindner, Uri; Ghai, Sangeet; Davidson, Sean R. H.; Trachtenberg, John; Fenster, Aaron
2014-03-01
Focal therapy of localized prostate cancer is receiving increased attention due to its potential for providing effective cancer control in select patients with minimal treatment-related side effects. Magnetic resonance imaging (MRI)-guided focal laser ablation (FLA) therapy is an attractive modality for such an approach. In FLA therapy, accurate placement of laser fibers is critical to ensuring that the full target volume is ablated. In practice, error in needle placement is invariably present due to pre- to intra-procedure image registration error, needle deflection, prostate motion, and variability in interventionalist skill. In addition, some of these sources of error are difficult to control, since the available workspace and patient positions are restricted within a clinical MRI bore. In an attempt to take full advantage of the utility of intraprocedure MRI, while minimizing error in needle placement, we developed an MRI-compatible mechatronic system for guiding needles to the prostate for FLA therapy. The system has been used to place interstitial catheters for MRI-guided FLA therapy in eight subjects in an ongoing Phase I/II clinical trial. Data from these cases has provided quantification of the level of uncertainty in needle placement error. To relate needle placement error to clinical outcome, we developed a model for predicting the probability of achieving complete focal target ablation for a family of parameterized treatment plans. Results from this work have enabled the specification of evidence-based selection criteria for the maximum target size that can be confidently ablated using this technique, and quantify the benefit that may be gained with improvements in needle placement accuracy.
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system, in the presence of message dependent inhomogeneous non-stationary shot noise, additive Gaussian noise and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared. The Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. 
All rights reserved.
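The sample-size arithmetic behind these conclusions is exact binomial. A sketch of the calculation (the 5% citation threshold comes from the abstract; the accuracy criterion here is a simplified one-sided version of whatever the authors used):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def citation_probability(n_obs, true_rate, threshold=0.05):
    """Probability that the observed med-pass error rate exceeds the 5%
    citation threshold, given the true underlying error rate."""
    cutoff = math.floor(threshold * n_obs)  # counts above this trigger a citation
    return 1.0 - binom_cdf(cutoff, n_obs, true_rate)

p_50 = citation_probability(50, 0.10)    # 50 observations, true rate 10%
p_200 = citation_probability(200, 0.10)  # more observations sharpen the decision
```

With few observations the observed rate is a noisy estimate of the true rate, which is the abstract's core point: near the 5% boundary, very large observation counts are needed before a citation decision reaches acceptable validity.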
Probability shapes perceptual precision: A study in orientation estimation.
Jabar, Syaheed B; Anderson, Britt
2015-12-01
Probability is known to affect perceptual estimations, but an understanding of mechanisms is lacking. Moving beyond binary classification tasks, we had naive participants report the orientation of briefly viewed gratings where we systematically manipulated contingent probability. Participants rapidly developed faster and more precise estimations for high-probability tilts. The shapes of their error distributions, as indexed by a kurtosis measure, also showed a distortion from Gaussian. This kurtosis metric was robust, capturing probability effects that were graded, contextual, and varying as a function of stimulus orientation. Our data can be understood as a probability-induced reduction in the variability or "shape" of estimation errors, as would be expected if probability affects the perceptual representations. As probability manipulations are an implicit component of many endogenous cuing paradigms, changes at the perceptual level could account for changes in performance that might have traditionally been ascribed to "attention." (c) 2015 APA, all rights reserved.
Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James
2010-10-01
The current practice for seeking genomically favorable patients in randomized controlled clinical trials relies on genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of the convenience samples, particularly when they are small samples. We articulate statistical considerations for a reasonable sample size to minimize the chance of imbalance and highlight the importance of replicating the subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. Probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples with sample size ranging from 100 to 5000 patients per arm. The baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make treatment groups incomparable at baseline and result in a false positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore if a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest.
When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, if there is one genomic biomarker prognostic for clinical response, as a general rule of thumb, a sample size of at least 100 patients may be needed to be considered for the lower prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more difference in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10% and 5% difference is of concern.
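The imbalance probabilities underlying this rule of thumb can be checked by simulation. A minimal Monte Carlo sketch (prevalence and sample sizes below are illustrative choices, not the paper's exact scenarios):

```python
import random

random.seed(1)

def prob_imbalance(n_per_arm, prevalence, delta, sims=2000):
    """Monte Carlo probability that the observed biomarker prevalence differs
    between two randomized arms by at least `delta`, when each patient
    carries the marker independently with the given prevalence."""
    hits = 0
    for _ in range(sims):
        a = sum(random.random() < prevalence for _ in range(n_per_arm))
        b = sum(random.random() < prevalence for _ in range(n_per_arm))
        if abs(a - b) / n_per_arm >= delta:
            hits += 1
    return hits / sims

p_small = prob_imbalance(20, 0.30, 0.20)   # small arms: >= 20% imbalance is common
p_large = prob_imbalance(100, 0.30, 0.20)  # around 100 per arm makes it rare
```

The simulation mirrors the rule of thumb: the probability of a given prevalence imbalance shrinks rapidly as the per-arm sample size grows, so larger samples are needed to guard against smaller imbalances.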
Probability of success for phase III after exploratory biomarker analysis in phase II.
Götte, Heiko; Kirchner, Marietta; Sailer, Martin Oliver
2017-05-01
The probability of success or average power describes the potential of a future trial by weighting the power with a probability distribution of the treatment effect. The treatment effect estimate from a previous trial can be used to define such a distribution. During the development of targeted therapies, it is common practice to look for predictive biomarkers. The consequence is that the trial population for phase III is often selected on the basis of the most extreme result from phase II biomarker subgroup analyses. In such a case, there is a tendency to overestimate the treatment effect. We investigate whether the overestimation of the treatment effect estimate from phase II is transformed into a positive bias for the probability of success for phase III. We simulate a phase II/III development program for targeted therapies. This simulation allows us to investigate selection probabilities and to compare the estimated with the true probability of success. We consider the estimated probability of success with and without subgroup selection. Depending on the true treatment effects, there is a negative bias without selection because of the weighting by the phase II distribution. In comparison, selection increases the estimated probability of success. Thus, selection does not lead to a bias in probability of success if underestimation due to the phase II distribution and overestimation due to selection cancel each other out. We recommend performing similar simulations in practice to get the necessary information about the risk and chances associated with such subgroup selection designs. Copyright © 2017 John Wiley & Sons, Ltd.
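The core quantity can be sketched in a few lines for a one-sided z-test: weight the phase III power by a normal distribution centered at the phase II estimate. The numbers below are illustrative placeholders, and the sketch ignores the subgroup-selection bias the paper is about:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phase3_power(theta, se3, z_alpha=1.96):
    """Power of a one-sided level-0.025 z-test when the true effect is theta."""
    return 1.0 - norm_cdf(z_alpha - theta / se3)

def probability_of_success(theta_hat, se2, se3, steps=2000):
    """Average power: integrate phase III power against a normal weight
    centered at the phase II estimate theta_hat with standard error se2."""
    lo, hi = theta_hat - 6.0 * se2, theta_hat + 6.0 * se2
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        w = math.exp(-0.5 * ((t - theta_hat) / se2) ** 2) / (se2 * math.sqrt(2.0 * math.pi))
        total += w * phase3_power(t, se3) * h
    return total

pos = probability_of_success(theta_hat=0.3, se2=0.15, se3=0.1)
```

Because the weight distribution puts mass on small effects, the probability of success is typically lower than the power evaluated at the point estimate, which is the negative bias without selection that the abstract describes.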
Effects of preparation time and trial type probability on performance of anti- and pro-saccades.
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. Copyright © 2016 Elsevier B.V. All rights reserved.
P value and the theory of hypothesis testing: an explanation for new researchers.
Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël
2010-03-01
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
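The two error types are easy to demonstrate by simulation with a one-sided z-test of known unit variance (illustrative parameters; a sketch, not taken from the article):

```python
import math
import random

random.seed(2)

def one_sided_p(sample, mu0=0.0):
    """P value of a one-sided z-test for the mean, with known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def rejection_rate(true_effect, n=30, alpha=0.05, sims=2000):
    """Fraction of simulated samples whose p value falls below alpha."""
    rejections = 0
    for _ in range(sims):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        if one_sided_p(sample) < alpha:
            rejections += 1
    return rejections / sims

type1_rate = rejection_rate(0.0)  # null true: rejections are Type I errors, rate near alpha
power = rejection_rate(0.5)       # alternative true: 1 - power is the Type II error rate
```

The first line illustrates why, under the null, the Type I error rate equals the chosen threshold; the second shows that the Type II error rate depends on the true effect and sample size, which is exactly what the Neyman-Pearson framework asks researchers to plan for.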
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
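Given estimates of PL at several code distances, the decay rate α(p) can be extracted by a least-squares fit of -log PL against d. A sketch on synthetic data obeying the stated exponential law (not the paper's numerical results):

```python
import math

# Synthetic logical error rates obeying PL = exp(-alpha * d) exactly (illustrative)
alpha_true = 0.7
data = [(d, math.exp(-alpha_true * d)) for d in range(4, 21, 2)]

def fit_decay_rate(points):
    """Least-squares slope of -log(PL) versus code distance d."""
    xs = [d for d, _ in points]
    ys = [-math.log(pl) for _, pl in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

alpha_hat = fit_decay_rate(data)  # recovers alpha_true on noise-free data
```

On real Monte Carlo estimates the fit would weight points by their statistical uncertainty, which is where rare-event methods such as splitting become essential at large d.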
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
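The building block of such models is the single-shot damage probability for a cookie-cutter damage function under circular-normal aiming error. A sketch of that textbook relation and its extension to an independent salvo (a simplification: the report's barrage model with pattern offsets and distance damage sigma is richer):

```python
import math

def single_shot_damage_prob(lethal_radius, cep):
    """Cookie-cutter damage function with circular-normal delivery error:
    P = 1 - 2 ** (-(R / CEP)^2), where CEP is the circular error probable
    (the radius containing half of the impacts)."""
    return 1.0 - 2.0 ** (-(lethal_radius / cep) ** 2)

def salvo_damage_prob(lethal_radius, cep, n_weapons, reliability=1.0):
    """Probability that at least one of n independently aimed weapons
    damages the target, each functioning with the given reliability."""
    p = reliability * single_shot_damage_prob(lethal_radius, cep)
    return 1.0 - (1.0 - p) ** n_weapons

p1 = single_shot_damage_prob(1.0, 1.0)    # lethal radius equal to CEP: exactly 0.5
p4 = salvo_damage_prob(1.0, 1.0, 4, 0.8)  # four weapons at 80% reliability
```

The report's contribution is the analytic treatment of the harder case, where aimpoints form a barrage pattern and the damage function is softened by a distance damage sigma.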
IUE observations of the quasar 3C 273. [International Ultraviolet Explorer
NASA Technical Reports Server (NTRS)
Boggess, A.; Daltabuit, E.; Torres-Peimbert, S.; Estabrook, F. B.; Wahlquist, H. D.; Lane, A. L.; Green, R.; Oke, J. B.; Schmidt, M.; Zimmerman, B.
1979-01-01
IUE observations indicate that the spectrum of 3C 273 is similar to that of other large-redshift quasars. There is a large excess of flux in the range 2400 A to 5300 A, which encompasses the Balmer jump region but which does not appear to be explainable by Balmer emission. The intensity ratio of Lyman-alpha to H-beta is 5.5, in agreement with other measures and a factor 6 smaller than the recombination value. The only absorption lines in the spectrum are due to our Galaxy. There is marginal evidence for a depression of the continuum shortward of the Lyman-alpha emission line, but the errors are too large to warrant any conclusion that 3C 273 has a rich absorption-line spectrum such as that seen in large-redshift quasars. The absence of emission and absorption lines of Fe II leads to the conclusion that resonance fluorescence probably produces the visual Fe II emission lines.
Estimating parameters for probabilistic linkage of privacy-preserved datasets.
Brown, Adrian P; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Boyd, James H
2017-07-10
Probabilistic record linkage is a process used to bring together person-based records from within the same dataset (de-duplication) or from disparate datasets using pairwise comparisons and matching probabilities. The linkage strategy and associated match probabilities are often estimated through investigations into data quality and manual inspection. However, as privacy-preserved datasets comprise encrypted data, such methods are not possible. In this paper, we present a method for estimating the probabilities and threshold values for probabilistic privacy-preserved record linkage using Bloom filters. Our method was tested through a simulation study using synthetic data, followed by an application using real-world administrative data. Synthetic datasets were generated with error rates from zero to 20% error. Our method was used to estimate parameters (probabilities and thresholds) for de-duplication linkages. Linkage quality was determined by F-measure. Each dataset was privacy-preserved using separate Bloom filters for each field. Match probabilities were estimated using the expectation-maximisation (EM) algorithm on the privacy-preserved data. Threshold cut-off values were determined by an extension to the EM algorithm allowing linkage quality to be estimated for each possible threshold. De-duplication linkages of each privacy-preserved dataset were performed using both estimated and calculated probabilities. Linkage quality using the F-measure at the estimated threshold values was also compared to the highest F-measure. Three large administrative datasets were used to demonstrate the applicability of the probability and threshold estimation technique on real-world data. Linkage of the synthetic datasets using the estimated probabilities produced an F-measure that was comparable to the F-measure using calculated probabilities, even with up to 20% error. 
Linkage of the administrative datasets using estimated probabilities produced an F-measure that was higher than the F-measure using calculated probabilities. Further, the threshold estimation yielded results for F-measure that were only slightly below the highest possible for those probabilities. The method appears highly accurate across a spectrum of datasets with varying degrees of error. As there are few alternatives for parameter estimation, the approach is a major step towards providing a complete operational approach for probabilistic linkage of privacy-preserved datasets.
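The expectation-maximisation step at the heart of this method fits the Fellegi-Sunter mixture model to agreement patterns. A compact sketch on synthetic binary agreement vectors (field count, probabilities, and match proportion below are invented for illustration; real Bloom-filter similarities would need thresholding first):

```python
import random

random.seed(3)

# Synthetic agreement patterns over 3 fields: 10% true matches (high agreement
# probabilities m), 90% non-matches (low agreement probabilities u).
TRUE_M, TRUE_U, TRUE_LAMBDA = [0.95, 0.90, 0.85], [0.20, 0.10, 0.05], 0.10

def draw(probs):
    return tuple(int(random.random() < p) for p in probs)

pairs = [draw(TRUE_M if random.random() < TRUE_LAMBDA else TRUE_U)
         for _ in range(2000)]

def em_match_probs(pairs, k=3, iters=30):
    """EM for the Fellegi-Sunter model: estimate the match proportion and the
    per-field agreement probabilities for matches (m) and non-matches (u)."""
    m, u, lam = [0.8] * k, [0.3] * k, 0.5
    for _ in range(iters):
        # E-step: posterior match probability for each pair's pattern
        g = []
        for gamma in pairs:
            pm, pu = lam, 1.0 - lam
            for j in range(k):
                pm *= m[j] if gamma[j] else 1.0 - m[j]
                pu *= u[j] if gamma[j] else 1.0 - u[j]
            g.append(pm / (pm + pu))
        # M-step: re-estimate proportion and agreement probabilities
        sg = sum(g)
        lam = sg / len(g)
        for j in range(k):
            m[j] = sum(gi * gamma[j] for gi, gamma in zip(g, pairs)) / sg
            u[j] = sum((1 - gi) * gamma[j] for gi, gamma in zip(g, pairs)) / (len(g) - sg)
    return lam, m, u

lam_hat, m_hat, u_hat = em_match_probs(pairs)
```

The fitted m and u probabilities yield per-field match weights, and scanning candidate thresholds over the resulting pair scores is the kind of extension the abstract describes for estimating linkage quality at each possible cut-off.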
Butler, Troy; Wildey, Timothy
2018-01-01
In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
Relation between minimum-error discrimination and optimum unambiguous discrimination
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun
2010-09-15
In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may not hold again, but the infimum of Q_U/Q_E is 1, and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas of the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous discrimination and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship that Q_U ≥ (m/(m-1))Q_E.
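For two pure states, the two quantities compared here have closed forms, which makes the inequality Q_U ≥ 2Q_E easy to check numerically. A sketch for equal priors, using standard state-discrimination formulas with overlap s = |&lt;psi1|psi2&gt;|:

```python
import math

def min_error_prob(s, p1=0.5):
    """Helstrom minimum-error probability for two pure states with overlap s
    and prior probabilities p1 and 1 - p1."""
    p2 = 1.0 - p1
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p1 * p2 * s ** 2))

def unambiguous_failure_prob(s):
    """Optimal inconclusive probability for two equiprobable pure states: Q_U = s."""
    return s

s = 0.6
q_e = min_error_prob(s)            # 0.5 * (1 - sqrt(1 - 0.36)) = 0.1
q_u = unambiguous_failure_prob(s)  # 0.6, comfortably >= 2 * Q_E
```

The paper's contribution is what happens beyond this two-state case: for more than two states the factor-of-two gap can shrink toward 1 or grow without bound.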
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon energy modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented into the various components of the dose calculation algorithm including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans including low and intermediate risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes to the dose planes where film measurements occur and change in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG 119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance to agreement criteria were less effective at detecting the specific commissioning errors implemented here.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean which is de-facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses corresponding to a lower effective Signal-to-Noise Ratio (SNR), exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit per sample accuracy which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities leading to an Unequal Error Protection (UEP) coding paradigm.
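The magnitude dependence the authors exploit can be sketched with the elementary additive-Gaussian-noise relation: the probability that noise flips the sign of a sample of magnitude |x| is the Gaussian tail Q(|x|/σ). This is a simplified channel model for illustration, not the paper's full analysis:

```python
import math

def sign_error_prob(magnitude, noise_sigma):
    """Probability that additive Gaussian noise of standard deviation
    noise_sigma flips the sign of a sample of the given magnitude:
    Q(|x| / sigma) = 0.5 * erfc(|x| / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc(abs(magnitude) / (noise_sigma * math.sqrt(2.0)))

p_small = sign_error_prob(0.1, 1.0)  # small-magnitude samples: nearly a coin flip
p_large = sign_error_prob(2.0, 1.0)  # large-magnitude samples: sign is reliable
```

Grouping samples by magnitude, and hence by sign error probability, is what enables the unequal error protection coding the abstract describes.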
Lee, Chanseok; Lee, Jae Young; Kim, Do-Nyun
2018-02-07
The originally published version of this Article contained an error in Figure 5. In panel f, the right y-axis 'Strain energy (kbT)' was labelled 'Probability' and the left y-axis 'Probability' was labelled 'Strain energy (kbT)'. This error has now been corrected in both the PDF and HTML versions of the Article.
NASA Technical Reports Server (NTRS)
Elyasberg, P. Y.
1979-01-01
The shortcomings of the classical approach are set forth, and the newer methods resulting from these shortcomings are explained. The problem is approached both under the assumption that the error probabilities are known and without knowledge of their distribution. The advantages of the newer approach are discussed.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, Pc, have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum Pc. If error covariance information for only one of the two objects was available, either some default shape had to be used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but potentially useful Pc upper bound.
Metrics for Business Process Models
NASA Astrophysics Data System (ADS)
Mendling, Jan
Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument implies that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not make in a small model. A recent study by Mendling et al. [275] explores to what extent certain complexity metrics of business process models have the potential to serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.
Judgment under Uncertainty: Heuristics and Biases.
Tversky, A; Kahneman, D
1974-09-27
This article described three heuristics that are employed in making judgements under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Quantum state discrimination bounds for finite sample size
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audenaert, Koenraad M. R.; Mosonyi, Milan
2012-12-15
In the problem of quantum state discrimination, one has to determine by measurements the state of a quantum system, based on the a priori side information that the true state is one of the two given and completely known states, {rho} or {sigma}. In general, it is not possible to decide the identity of the true state with certainty, and the optimal measurement strategy depends on whether the two possible errors (mistaking {rho} for {sigma}, or the other way around) are treated as of equal importance or not. Results on the quantum Chernoff and Hoeffding bounds and the quantum Stein's lemma show that, if several copies of the system are available then the optimal error probabilities decay exponentially in the number of copies, and the decay rate is given by a certain statistical distance between {rho} and {sigma} (the Chernoff distance, the Hoeffding distances, and the relative entropy, respectively). While these results provide a complete solution to the asymptotic problem, they are not completely satisfying from a practical point of view. Indeed, in realistic scenarios one has access only to finitely many copies of a system, and therefore it is desirable to have bounds on the error probabilities for finite sample size. In this paper we provide finite-size bounds on the so-called Stein errors, the Chernoff errors, the Hoeffding errors, and the mixed error probabilities related to the Chernoff and the Hoeffding errors.
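The finite-size bounds themselves are not reproduced here, but the central quantity they control is easy to state: for a single copy and priors (p, 1-p), the optimal (Helstrom) error probability is (1 - ||p*rho - (1-p)*sigma||_1)/2. A small numerical sketch of that standard formula (an illustration, not the paper's finite-n bounds):

```python
import numpy as np

def helstrom_error(rho, sigma, p=0.5):
    """Optimal single-copy discrimination error for density matrices rho, sigma
    with prior probabilities p and 1-p: (1 - ||p*rho - (1-p)*sigma||_1) / 2."""
    m = p * rho - (1 - p) * sigma
    trace_norm = np.abs(np.linalg.eigvalsh(m)).sum()  # Hermitian: sum of |eigenvalues|
    return 0.5 * (1.0 - trace_norm)

# Orthogonal pure states are perfectly distinguishable; identical states are not.
rho = np.diag([1.0, 0.0])
sigma = np.diag([0.0, 1.0])
```

With several copies the same formula applied to tensor powers reproduces the exponential decay described above.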
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with a Bayesian joint probability approach under three alternative error models to investigate the seasonal dependency of the prediction error structure. A seasonal invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonal variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonal variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonal invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonal variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable model parameters.
Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, with the exception of the error bias. The seasonal variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
The NASA F-15 Intelligent Flight Control Systems: Generation II
NASA Technical Reports Server (NTRS)
Buschbacher, Mark; Bosworth, John
2006-01-01
The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares them to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
Wilkinson, Michael
2014-03-01
Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets," due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and an associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
"Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks", which is to analyze and develop ... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters. ... In this thesis, we studied the performance of Discrete Memoryless Channels (DMC) arising in the context of cooperative underwater wireless sensor networks.
Probability of misclassifying biological elements in surface waters.
Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna
2017-11-24
Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
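The Monte-Carlo idea described above can be sketched in a few lines: perturb an error-free index value with simulated measurement error and count how often the perturbed value crosses a class boundary. The boundary values and the Gaussian error model below are illustrative assumptions, not the WFD indices themselves:

```python
import bisect
import random

def misclassification_prob(true_value, sd, boundaries, n=100_000, seed=7):
    """Monte-Carlo estimate of the probability that a normally perturbed
    index value falls outside the class of the error-free value.
    `boundaries` are ascending class limits (hypothetical values)."""
    rng = random.Random(seed)
    true_class = bisect.bisect(boundaries, true_value)
    wrong = sum(
        bisect.bisect(boundaries, true_value + rng.gauss(0.0, sd)) != true_class
        for _ in range(n)
    )
    return wrong / n
```

A value in the middle of its class with a small measurement error is almost never misclassified, while a value sitting on a class boundary is misclassified about half the time, which is why the probability is so sensitive to where the index falls relative to the class limits.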
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors, with parameters conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency Reff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions are less reliable than Model 3's. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects of nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ as figures of merit performance in the mean-square-error and probability-of-error senses. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer alone. Significant probability-of-error performance improvement is found for multilevel modulation schemes. Also, it is found that the probability-of-error improvement is more significant for modulation schemes, constant amplitude and multilevel, which require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability-of-error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver had not previously been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity.
Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Peppas, Kostas P; Lazarakis, Fotis; Alexandridis, Antonis; Dangakis, Kostas
2012-08-01
In this Letter we investigate the error performance of multiple-input multiple-output free-space optical communication systems employing intensity modulation/direct detection and operating over strong atmospheric turbulence channels. Atmospheric-induced strong turbulence fading is modeled using the negative exponential distribution. For the considered system, an approximate yet accurate analytical expression for the average bit error probability is derived and an efficient method for its numerical evaluation is proposed. Numerically evaluated and computer simulation results are further provided to demonstrate the validity of the proposed mathematical analysis.
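The closed-form expression from the Letter is not reproduced here, but the averaging it performs can be sketched: for IM/DD on-off keying, the conditional bit error probability Q(gamma*I) is averaged over a unit-mean negative exponential irradiance I. The SNR parameterization below is an assumption for illustration, not the authors' exact model:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def avg_bep_neg_exp(snr_amp, n=20_000, i_max=20.0):
    """E_I[Q(snr_amp * I)] for I ~ Exp(1) (negative exponential, unit mean),
    computed by trapezoidal quadrature on [0, i_max]."""
    h = i_max / n
    total = 0.0
    for k in range(n + 1):
        i = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * q_func(snr_amp * i) * math.exp(-i)
    return total * h
```

By Jensen's inequality (Q is convex on the positive axis), fading always degrades the average error probability relative to an unfaded channel with the same mean irradiance.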
Transition probabilities of Br II
NASA Technical Reports Server (NTRS)
Bengtson, R. D.; Miller, M. H.
1976-01-01
Absolute transition probabilities of the three most prominent visible Br II lines are measured in emission. Results compare well with Coulomb approximations and with line strengths extrapolated from trends in homologous atoms.
Quantifying seining detection probability for fishes of Great Plains sand‐bed rivers
Mollenhauer, Robert; Logue, Daniel R.; Brewer, Shannon K.
2018-01-01
Species detection error (i.e., imperfect and variable detection probability) is an essential consideration when investigators map distributions and interpret habitat associations. When fish detection error that is due to highly variable instream environments needs to be addressed, sand‐bed streams of the Great Plains represent a unique challenge. We quantified seining detection probability for diminutive Great Plains fishes across a range of sampling conditions in two sand‐bed rivers in Oklahoma. Imperfect detection resulted in underestimates of species occurrence using naïve estimates, particularly for less common fishes. Seining detection probability also varied among fishes and across sampling conditions. We observed a quadratic relationship between water depth and detection probability, in which the exact nature of the relationship was species‐specific and dependent on water clarity. Similarly, the direction of the relationship between water clarity and detection probability was species‐specific and dependent on differences in water depth. The relationship between water temperature and detection probability was also species dependent, where both the magnitude and direction of the relationship varied among fishes. We showed how ignoring detection error confounded an underlying relationship between species occurrence and water depth. Despite imperfect and heterogeneous detection, our results support that determining species absence can be accomplished with two to six spatially replicated seine hauls per 200‐m reach under average sampling conditions; however, required effort would be higher under certain conditions. Detection probability was low for the Arkansas River Shiner Notropis girardi, which is federally listed as threatened, and more than 10 seine hauls per 200‐m reach would be required to assess presence across sampling conditions. 
Our model allows scientists to estimate sampling effort to confidently assess species occurrence, which maximizes the use of available resources. Increased implementation of approaches that consider detection error promote ecological advancements and conservation and management decisions that are better informed.
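Under the common simplifying assumption of a constant, independent per-haul detection probability p, the number of hauls needed to reach a target confidence of detecting a present species is ceil(log(1 - c) / log(1 - p)). A sketch (the p values in the usage note are hypothetical, not the study's estimates):

```python
import math

def hauls_needed(p_detect, confidence=0.95):
    """Seine hauls required so that P(at least one detection | present)
    reaches `confidence`, assuming independent hauls with constant p_detect."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))
```

With p = 0.4 per haul, six hauls suffice at 95% confidence, consistent with the two-to-six range quoted above; a low p = 0.25 pushes the requirement past ten hauls, as reported for the Arkansas River Shiner.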
A method to compute SEU fault probabilities in memory arrays with error correction
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction, and that every time a memory location is read, single errors are corrected. Memory locations are read at random times whose distribution is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
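A simplified version of the mishap computation can be sketched by assuming Poisson upsets and a fixed scrub (read) interval, rather than the paper's general read-time distribution: the per-word mishap probability per interval is P(>= 2 upsets) = 1 - e^(-lambda)(1 + lambda), and a union bound aggregates over words and intervals. All parameter values below are illustrative:

```python
import math

def word_mishap_prob(seu_rate, scrub_interval):
    """P(>= 2 upsets in one word within one scrub interval), Poisson model.
    seu_rate: upsets per word per hour; scrub_interval: hours between reads."""
    lam = seu_rate * scrub_interval
    return 1.0 - math.exp(-lam) * (1.0 + lam)

def memory_mishap_prob(n_words, seu_rate, scrub_interval, mission_hours):
    """Union bound on the mission mishap probability over all words and
    all scrub intervals (accurate when the per-interval probability is small)."""
    intervals = mission_hours / scrub_interval
    return min(1.0, n_words * intervals * word_mishap_prob(seu_rate, scrub_interval))
```

For small lambda the per-word term behaves like lambda^2 / 2, so halving the scrub interval roughly quarters the per-interval mishap probability while doubling the number of intervals, for a net halving of risk.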
Multiple statistical tests: Lessons from a d20.
Madan, Christopher R
2016-01-01
Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (a Type I error) at least once across repeated, independent statistical tests.
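The familywise inflation behind this is 1 - (1 - α)^k for k independent tests. A one-line sketch:

```python
def p_at_least_one(k, alpha=0.05):
    """P(at least one Type I error across k independent tests at level alpha).
    For the d20 analogy, alpha = 1/20 and k = 2 is the 'roll twice' case."""
    return 1.0 - (1.0 - alpha) ** k
```

Two d20 rolls hit a given face at least once with probability 1 - (19/20)^2 = 0.0975, just under double the single-roll 0.05.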
The price of complexity in financial networks
NASA Astrophysics Data System (ADS)
Battiston, Stefano; Caldarelli, Guido; May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.
2016-09-01
Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rates). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
Performance analysis of a concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.; Kasami, T.
1983-01-01
A concatenated coding scheme for error control in data communications is analyzed. In this scheme, the inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after inner code decoding. The probability of undetected error is derived and bounded. A particular example, proposed for the planetary program, is analyzed.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Murray, Thomas A; Yuan, Ying; Thall, Peter F; Elizondo, Joan H; Hofstetter, Wayne L
2018-01-22
A design is proposed for randomized comparative trials with ordinal outcomes and prognostic subgroups. The design accounts for patient heterogeneity by allowing possibly different comparative conclusions within subgroups. The comparative testing criterion is based on utilities for the levels of the ordinal outcome and a Bayesian probability model. Designs based on two alternative models that include treatment-subgroup interactions are considered, the proportional odds model and a non-proportional odds model with a hierarchical prior that shrinks toward the proportional odds model. A third design that assumes homogeneity and ignores possible treatment-subgroup interactions also is considered. The three approaches are applied to construct group sequential designs for a trial of nutritional prehabilitation versus standard of care for esophageal cancer patients undergoing chemoradiation and surgery, including both untreated patients and salvage patients whose disease has recurred following previous therapy. A simulation study is presented that compares the three designs, including evaluation of within-subgroup type I and II error probabilities under a variety of scenarios including different combinations of treatment-subgroup interactions. © 2018, The International Biometric Society.
Conroy, M.J.; Samuel, M.D.; White, Joanne C.
1995-01-01
Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address the hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject H0 should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of H0 is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important, and biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
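As a concrete companion to the a priori sample size point, here is the standard normal-approximation formula for a two-sided, two-sample comparison of means, n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2; the effect sizes in the usage note are hypothetical:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample z-test of means with
    standardized effect size d (normal approximation to the t-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)
```

A medium effect (d = 0.5) needs about 63 subjects per group for 80% power at alpha = .05; halving the effect size roughly quadruples the requirement, which is why power must be considered before, not after, the data are collected.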
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy.
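The model's central quantity, the probability that methodological variation pushes an observed zone diameter across a breakpoint, can be sketched for a single isolate. The normal error model follows the abstract; the breakpoint and σ values below are illustrative, not EUCAST values:

```python
from statistics import NormalDist

def misclassification_prob(true_diam_mm: float, cbp_mm: float, sigma_mm: float,
                           truly_susceptible: bool) -> float:
    """Probability that methodological variation moves the observed inhibition
    zone diameter to the wrong side of the susceptible clinical breakpoint,
    assuming observed diameter ~ Normal(true_diam_mm, sigma_mm) and a
    'susceptible' call for observed diameters at or above cbp_mm."""
    observed = NormalDist(mu=true_diam_mm, sigma=sigma_mm)
    if truly_susceptible:
        return observed.cdf(cbp_mm)          # error: observed below the breakpoint
    return 1.0 - observed.cdf(cbp_mm)        # error: observed at/above the breakpoint
```

An isolate whose true diameter sits 3 mm above the breakpoint with σ = 1.5 mm is miscategorized with probability Φ(-2) ≈ 2.3%, which illustrates why isolates near the breakpoint dominate the error rate and motivate a zone of methodological uncertainty.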
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy but effective method is proposed to detect and quantify Pb(II) in the presence of Cd(II), based on a Bi/glassy carbon electrode (Bi/GCE) combined with a back-propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effect of different concentrations of Cd(II) on the stripping response of Pb(II) was studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built to optimize the stripping voltammetric sensor; the network accounts for the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was evaluated in terms of mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results show that the BP-ANN model achieves higher prediction accuracy than the direct calibration model. Finally, an analysis of real samples was performed to determine trace Pb(II) in soil specimens, with satisfactory results.
The Performance of Noncoherent Orthogonal M-FSK in the Presence of Timing and Frequency Errors
NASA Technical Reports Server (NTRS)
Hinedi, Sami; Simon, Marvin K.; Raphaeli, Dan
1993-01-01
Practical M-FSK systems experience a combination of time and frequency offsets (errors). This paper assesses the deleterious effect of these offsets, first individually and then combined, on the average bit error probability performance of the system.
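The zero-offset baseline against which such degradation is measured is the classical error probability of noncoherent orthogonal M-FSK in AWGN; the offset analysis itself is not reproduced here, only the baseline formula:

```python
import math

def mfsk_noncoherent_ser(M: int, es_over_n0: float) -> float:
    """Symbol error probability of noncoherent orthogonal M-FSK in AWGN with
    perfect time and frequency alignment (the zero-offset baseline):
    Ps = sum_{k=1}^{M-1} (-1)^(k+1) C(M-1,k)/(k+1) exp(-k/(k+1) * Es/N0)."""
    return sum((-1) ** (k + 1) * math.comb(M - 1, k) / (k + 1)
               * math.exp(-k * es_over_n0 / (k + 1))
               for k in range(1, M))

def mfsk_noncoherent_ber(M: int, es_over_n0: float) -> float:
    """Average bit error probability for orthogonal signaling: Pb = (M/2)/(M-1) * Ps."""
    return (M / 2) / (M - 1) * mfsk_noncoherent_ser(M, es_over_n0)
```

For M = 2 this reduces to the familiar Ps = (1/2) exp(-Es/2N0) of binary noncoherent FSK.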
Performance analysis of a cascaded coding scheme with interleaved outer code
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A cascaded coding scheme for a random error channel with a given bit-error rate is analyzed. In this scheme, the inner code C1 is an (n1, m1l) binary linear block code designed for simultaneous error correction and detection. The outer code C2 is a linear block code with symbols from the Galois field GF(2^l), designed to correct both symbol errors and erasures, and is interleaved with degree m1. A procedure for computing the probability of correct decoding is presented and an upper bound on the probability of a decoding error is derived. The bound provides much better results than the previous bound for a cascaded coding scheme with an interleaved outer code. Example schemes with inner codes ranging from high rates to very low rates are evaluated. Several schemes provide extremely high reliability even at very high bit-error rates, say 10^-1 to 10^-2.
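For the inner bounded-distance decoder, the standard binomial tail gives the kind of block-error bound used in such analyses; a sketch with illustrative code parameters (not the schemes evaluated in the report):

```python
import math

def block_error_upper_bound(n: int, t: int, p: float) -> float:
    """P(more than t channel errors in an n-bit block) for a memoryless binary
    channel with bit-error rate p; an upper bound on the block error
    probability of a t-error-correcting bounded-distance decoder."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(t + 1, n + 1))
```

For example, a single-error-correcting (7,4) code at p = 0.01 fails on fewer than about 0.2% of blocks.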
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES. Thesis presented to the Faculty of the School of Engineering of the Air Force ... The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6].
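The CEP approximation techniques compared in the thesis are not reproduced here, but a Monte Carlo reference estimate of CEP (the median radial miss distance) is easy to sketch; the σ values are illustrative:

```python
import math
import random
import statistics

def cep_monte_carlo(sigma_x: float, sigma_y: float,
                    n: int = 200_000, seed: int = 1) -> float:
    """Estimate circular error probable, the median radial miss distance,
    for zero-mean, uncorrelated bivariate normal impact errors."""
    rng = random.Random(seed)
    radii = [math.hypot(rng.gauss(0.0, sigma_x), rng.gauss(0.0, sigma_y))
             for _ in range(n)]
    return statistics.median(radii)
```

In the circular case σx = σy = σ there is a closed form, CEP = σ·sqrt(2 ln 2) ≈ 1.1774σ, which the sampler reproduces; the approximation techniques the thesis studies target the harder elliptical case.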
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Steiner, Markus J.; Lopez, Laureen M.; Grimes, David A.; Cheng, Linan; Shelton, Jim; Trussell, James; Farley, Timothy M.M.; Dorflinger, Laneta
2013-01-01
Background Sino-implant (II) is a subdermal contraceptive implant manufactured in China. This two-rod levonorgestrel-releasing implant has the same amount of active ingredient (150 mg levonorgestrel) and mechanism of action as the widely available contraceptive implant Jadelle. We examined randomized controlled trials of Sino-implant (II) for effectiveness and side effects. Study design We searched electronic databases for studies of Sino-implant (II), and then restricted our review to randomized controlled trials. The primary outcome of this review was pregnancy. Results Four randomized trials with a total of 15,943 women assigned to Sino-implant (II) had first-year probabilities of pregnancy ranging from 0.0% to 0.1%. Cumulative probabilities of pregnancy during the four years of the product's approved duration of use were 0.9% and 1.06% in the two trials that presented data for four-year use. Five-year cumulative probabilities of pregnancy ranged from 0.7% to 2.1%. In one trial, the cumulative probability of pregnancy more than doubled during the fifth year (from 0.9% to 2.1%), which may be why the implant is approved for four years of use in China. Five-year cumulative probabilities of discontinuation due to menstrual problems ranged from 12.5% to 15.5% for Sino-implant (II). Conclusions Sino-implant (II) is one of the most effective contraceptives available today. These clinical data, combined with independent laboratory testing and the knowledge that 7 million women have used this method since 1994, support the safety and effectiveness of Sino-implant (II). The lower cost of Sino-implant (II) compared with other subdermal implants could improve access to implants in resource-constrained settings. PMID:20159174
Meijer, Erik; Rohwedder, Susann; Wansbeek, Tom
2012-01-01
Survey data on earnings tend to contain measurement error. Administrative data are superior in principle, but they are worthless in case of a mismatch. We develop methods for prediction in mixture factor analysis models that combine both data sources to arrive at a single earnings figure. We apply the methods to a Swedish data set. Our results show that register earnings data perform poorly if there is a (small) probability of a mismatch. Survey earnings data are more reliable, despite their measurement error. Predictors that combine both and take conditional class probabilities into account outperform all other predictors.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability of maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush
2008-01-01
We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, which minimize both the total codeword error probability and the undetected error probability for any fixed error-detection capability of the decoder. We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.
Cost of remembering a bit of information
NASA Astrophysics Data System (ADS)
Chiuchiù, D.; López-Suárez, M.; Neri, I.; Diamantini, M. C.; Gammaitoni, L.
2018-05-01
In 1961, Landauer [R. Landauer, IBM J. Res. Develop. 5, 183 (1961), 10.1147/rd.53.0183] pointed out that resetting a binary memory requires a minimum energy of kB T ln(2). However, once written, any memory is doomed to lose its content if no action is taken. To avoid memory losses, a refresh procedure is periodically performed. We present a theoretical model and an experiment on a microelectromechanical system to evaluate the minimum energy required to preserve one bit of information over time. Two main conclusions are drawn: (i) in principle, the energetic cost to preserve information for a fixed time duration with a given error probability can be arbitrarily reduced if the refresh procedure is performed often enough, and (ii) the Heisenberg uncertainty principle sets an upper bound on the memory lifetime.
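The Landauer reset bound quoted above is a one-line computation:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)

def landauer_limit_joules(temperature_k: float) -> float:
    """Minimum energy to erase (reset) one bit at temperature T: kB * T * ln 2."""
    return K_B * temperature_k * math.log(2.0)
```

At room temperature (300 K) this is about 2.87e-21 J per bit, the reference scale against which the refresh-energy results above are measured.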
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2010 CFR
2010-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2011 CFR
2011-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
The price of complexity in financial networks
May, Robert M.; Roukny, Tarik; Stiglitz, Joseph E.
2016-01-01
Financial institutions form multilayer networks by engaging in contracts with each other and by holding exposures to common assets. As a result, the default probability of one institution depends on the default probability of all of the other institutions in the network. Here, we show how small errors on the knowledge of the network of contracts can lead to large errors in the probability of systemic defaults. From the point of view of financial regulators, our findings show that the complexity of financial networks may decrease the ability to mitigate systemic risk, and thus it may increase the social cost of financial crises. PMID:27555583
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
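The LHS-through-logistic-regression propagation can be sketched as follows; the single explanatory variable and the coefficient means and standard deviations are illustrative stand-ins for the fitted model and its standard errors, not values from the study:

```python
import math
import random
from statistics import NormalDist

def latin_hypercube(n_samples, n_dims, rng):
    """One LHS design on [0,1)^d: each dimension is split into n_samples equal
    strata, each stratum is sampled exactly once, and strata are paired
    across dimensions by random permutation."""
    cols = []
    for _ in range(n_dims):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

def vulnerability_distribution(x, b0_mean, b0_sd, b1_mean, b1_sd,
                               n=1000, seed=7):
    """Propagate coefficient (model-error) uncertainty through a logistic
    prediction: LHS points are mapped to normal coefficient draws via the
    inverse CDF, yielding a distribution of predicted probabilities."""
    rng = random.Random(seed)
    inv = NormalDist().inv_cdf
    probs = []
    for u0, u1 in latin_hypercube(n, 2, rng):
        b0 = b0_mean + b0_sd * inv(min(max(u0, 1e-9), 1.0 - 1e-9))
        b1 = b1_mean + b1_sd * inv(min(max(u1, 1e-9), 1.0 - 1e-9))
        probs.append(1.0 / (1.0 + math.exp(-(b0 + b1 * x))))
    return probs
```

Percentiles of the returned sample play the role of the prediction intervals described in the abstract; data-error inputs would add further dimensions to the hypercube.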
NASA Technical Reports Server (NTRS)
Gejji, Raghvendra R.
1992-01-01
Network transmission errors such as collisions, CRC errors, misalignment, etc. are statistical in nature. Although errors can vary randomly, a high level of errors does indicate specific network problems, e.g. equipment failure. In this project, we have studied the random nature of collisions theoretically as well as by gathering statistics, and established a numerical threshold above which a network problem is indicated with high probability.
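One way to formalize such a numerical threshold is a Poisson tail test; treating healthy-network collision counts per interval as roughly Poisson is an assumption of this sketch, not necessarily the report's model:

```python
import math

def poisson_sf(k: int, lam: float) -> float:
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(k))

def alarm_threshold(lam: float, alpha: float = 1e-3) -> int:
    """Smallest per-interval collision count that a healthy network with mean
    rate lam would reach or exceed with probability below alpha; counts at or
    above it indicate a problem with high probability."""
    k = 0
    while poisson_sf(k, lam) >= alpha:
        k += 1
    return k
```

For example, with a healthy mean of 5 collisions per interval and alpha = 0.001, the alarm fires at 14 or more collisions.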
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
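The continued-fraction method itself is not reproduced here, but for a truncated state space the finite-time transition probabilities it targets can be checked against plain uniformization; a pure-Python sketch (truncation level and rates are illustrative):

```python
import math

def bd_transition_matrix(births, deaths, t, tol=1e-12):
    """Transition probabilities P_ij(t) for a birth-death chain truncated to
    states 0..N (the birth rate out of the top state is dropped by the
    truncation), via uniformization: P(t) = sum_k Pois(k; L*t) * A^k,
    where A = I + Q/L and L bounds the total exit rates."""
    n = len(births)
    L = max(births[i] + deaths[i] for i in range(n)) or 1.0
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            A[i][i + 1] = births[i] / L
        if i > 0:
            A[i][i - 1] = deaths[i] / L
        A[i][i] = 1.0 - sum(A[i])          # A is a stochastic matrix
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    weight = math.exp(-L * t)                                      # Pois(0; L*t)
    result = [[weight * power[i][j] for j in range(n)] for i in range(n)]
    total, k = weight, 1
    while total < 1.0 - tol and k < 10_000:
        power = [[sum(power[i][m] * A[m][j] for m in range(n))
                  for j in range(n)] for i in range(n)]
        weight *= L * t / k                # Pois(k; L*t) from Pois(k-1; L*t)
        total += weight
        for i in range(n):
            for j in range(n):
                result[i][j] += weight * power[i][j]
        k += 1
    return result
```

For the two-state chain with unit birth and death rates this reproduces the closed form P00(t) = 1/2 + (1/2)e^(-2t), a convenient sanity check for any transition-probability solver.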
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
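The likelihood ratio machinery the test relies on is standard for nested models; a sketch in which the chi-square survival function is written in closed form for even degrees of freedom, and the log-likelihood values are purely illustrative:

```python
import math

def chi2_sf_even_df(x: float, df: int) -> float:
    """Survival function of a chi-square variable with even df (closed form):
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!."""
    assert df > 0 and df % 2 == 0
    return math.exp(-x / 2.0) * sum((x / 2.0) ** k / math.factorial(k)
                                    for k in range(df // 2))

def likelihood_ratio_test(ll_restricted: float, ll_general: float,
                          extra_params: int):
    """LR statistic and p-value for a restricted model (e.g., a Gumbel-based
    MNL) nested in a more general one (e.g., its semi-nonparametric
    extension with extra_params additional coefficients)."""
    stat = 2.0 * (ll_general - ll_restricted)
    return stat, chi2_sf_even_df(stat, extra_params)
```

A small p-value rejects the restricted (standard Gumbel) specification, which is the pattern the authors report for most utility functions.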
Chance-Constrained Missile-Procurement and Deployment Models for Naval Surface Warfare
2005-03-01
Missions in period II are assigned so that period-II scenarios are satisfied with a user-specified probability IIsp, which may depend on... feasible solution of RFFAM will successfully cover the period-II demands with probability IIsp if the MAP is followed (see Corollary 1 in Chapter...). ...given set of scenarios D, and in particular any IIsp-feasible subset. Therefore, assume that the remainder vectors ŝjr are listed in non-increasing...
Thematic accuracy of the NLCD 2001 land cover for the conterminous United States
Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.
2010-01-01
The land-cover thematic accuracy of NLCD 2001 was assessed from a probability sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
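The accuracy measures reported above (overall and user's accuracy, plus the complementary producer's accuracy) come straight from the error matrix; a sketch with an illustrative 2x2 matrix, not NLCD counts:

```python
def accuracy_summary(confusion):
    """Overall, user's (per mapped class), and producer's (per reference class)
    accuracies from an error matrix where confusion[i][j] counts pixels
    mapped as class i whose reference label is class j."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    overall = sum(confusion[i][i] for i in range(n)) / total
    users = [confusion[i][i] / sum(confusion[i]) for i in range(n)]
    producers = [confusion[i][i] / sum(confusion[j][i] for j in range(n))
                 for i in range(n)]
    return overall, users, producers
```

With a probability sample, these sample proportions estimate the corresponding map accuracies directly.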
BOP2: Bayesian optimal design for phase II clinical trials with simple and complex endpoints.
Zhou, Heng; Lee, J Jack; Yuan, Ying
2017-09-20
We propose a flexible Bayesian optimal phase II (BOP2) design that is capable of handling simple (e.g., binary) and complicated (e.g., ordinal, nested, and co-primary) endpoints under a unified framework. We use a Dirichlet-multinomial model to accommodate different types of endpoints. At each interim, the go/no-go decision is made by evaluating a set of posterior probabilities of the events of interest, which is optimized to maximize power or minimize the number of patients under the null hypothesis. Unlike other existing Bayesian designs, the BOP2 design explicitly controls the type I error rate, thereby bridging the gap between Bayesian designs and frequentist designs. In addition, the stopping boundary of the BOP2 design can be enumerated prior to the onset of the trial. These features make the BOP2 design accessible to a wide range of users and regulatory agencies and particularly easy to implement in practice. Simulation studies show that the BOP2 design has favorable operating characteristics with higher power and lower risk of incorrectly terminating the trial than some existing Bayesian phase II designs. The software to implement the BOP2 design is freely available at www.trialdesign.org. Copyright © 2017 John Wiley & Sons, Ltd.
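For a single binary endpoint the interim go/no-go quantity reduces to a Beta posterior tail probability. A Monte Carlo sketch follows; the prior, null rate, and cutoff are illustrative, and the actual BOP2 design optimizes its cutoffs rather than fixing them:

```python
import random

def posterior_prob_exceeds(successes, n, p0, a=1.0, b=1.0,
                           draws=100_000, seed=3):
    """Monte Carlo estimate of P(response rate > p0 | data) under a
    Beta(a, b) prior; the kind of posterior quantity a BOP2-style
    interim rule compares against a stopping boundary."""
    rng = random.Random(seed)
    post_a, post_b = a + successes, b + n - successes
    hits = sum(rng.betavariate(post_a, post_b) > p0 for _ in range(draws))
    return hits / draws

def go_decision(successes, n, p0, cutoff):
    """'Go' if the posterior probability of beating the null rate is large."""
    return posterior_prob_exceeds(successes, n, p0) > cutoff
```

With 10 responses in 20 patients against a 20% null rate, the posterior probability of exceeding p0 is essentially 1 and the trial continues; with 2 responses in 20 it drops well below any plausible cutoff.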
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
NASA Astrophysics Data System (ADS)
Chodera, John D.; Noé, Frank
2010-09-01
Discrete-state Markov (or master equation) models provide a useful simplified representation for characterizing the long-time statistical evolution of biomolecules in a manner that allows direct comparison with experiments as well as the elucidation of mechanistic pathways for an inherently stochastic process. A vital part of meaningful comparison with experiment is the characterization of the statistical uncertainty in the predicted experimental measurement, which may take the form of an equilibrium measurement of some spectroscopic signal, the time-evolution of this signal following a perturbation, or the observation of some statistic (such as the correlation function) of the equilibrium dynamics of a single molecule. Without meaningful error bars (which arise from both approximation and statistical error), there is no way to determine whether the deviations between model and experiment are statistically meaningful. Previous work has demonstrated that a Bayesian method that enforces microscopic reversibility can be used to characterize the statistical component of correlated uncertainties in state-to-state transition probabilities (and functions thereof) for a model inferred from molecular simulation data. Here, we extend this approach to include the uncertainty in observables that are functions of molecular conformation (such as surrogate spectroscopic signals) characterizing each state, permitting the full statistical uncertainty in computed spectroscopic experiments to be assessed. We test the approach in a simple model system to demonstrate that the computed uncertainties provide a useful indicator of statistical variation, and then apply it to the computation of the fluorescence autocorrelation function measured for a dye-labeled peptide previously studied by both experiment and simulation.
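The statistical component of transition-probability uncertainty can be sketched with independent row-wise Dirichlet posteriors over observed transition counts. Note that the cited approach additionally enforces microscopic reversibility, which this sketch omits, and the counts and observable are illustrative:

```python
import random

def dirichlet(rng, alphas):
    """One Dirichlet draw via normalized Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def transition_uncertainty(counts, observable, draws=2000, seed=11):
    """Sample transition matrices from independent row-wise Dirichlet
    posteriors (uniform prior) over observed transition counts and return
    samples of a user-supplied observable(P), e.g., a predicted
    spectroscopic quantity computed from the model."""
    rng = random.Random(seed)
    samples = []
    for _ in range(draws):
        P = [dirichlet(rng, [c + 1.0 for c in row]) for row in counts]
        samples.append(observable(P))
    return samples
```

The spread of the returned samples supplies the error bars on the predicted observable; in the full method the observable would also carry its own per-state uncertainty.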
Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke
2011-08-01
TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. 
(2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
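The reported Type I and Type II error rates follow directly from the confusion counts of each classifier; a sketch treating GCD as the positive class, with illustrative counts rather than the study's:

```python
def error_rates(tp: int, fp: int, tn: int, fn: int):
    """Type I (false alarm) and Type II (miss) rates plus overall accuracy
    from classification counts, with 'GCD' as the positive class."""
    type_i = fp / (fp + tn)    # healthy (NGCD) firm flagged as GCD
    type_ii = fn / (fn + tp)   # true GCD firm missed
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return type_i, type_ii, accuracy
```

Comparing models on both error rates matters here because the class sizes are unbalanced (48 GCD vs. 124 NGCD), so overall accuracy alone can hide a high miss rate.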
Hybrid computer technique yields random signal probability distributions
NASA Technical Reports Server (NTRS)
Cameron, W. D.
1965-01-01
Hybrid computer determines the probability distributions of instantaneous and peak amplitudes of random signals. This combined digital and analog computer system reduces the errors and delays of manual data analysis.
Quantum-state comparison and discrimination
NASA Astrophysics Data System (ADS)
Hayashi, A.; Hashimoto, T.; Horibe, M.
2018-05-01
We investigate the performance of the discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is to infer that the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates between minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except in the minimum-error case.
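For the two-pure-state, equal-prior case mentioned above, the minimum-error discrimination probability is given by the well-known Helstrom bound, which is easy to evaluate directly (the example states are arbitrary):

```python
import numpy as np

def helstrom_error(psi, phi, p=0.5):
    """Minimum-error probability for discriminating two known pure states
    with prior probabilities p and 1 - p (Helstrom bound)."""
    overlap = abs(np.vdot(psi, phi)) ** 2
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap))

psi = np.array([1.0, 0.0])
phi = np.array([1.0, 1.0]) / np.sqrt(2)        # |<psi|phi>|^2 = 1/2
print(helstrom_error(psi, phi))                # ~0.146

orthogonal = helstrom_error(psi, np.array([0.0, 1.0]))
print(orthogonal)  # 0.0: orthogonal states are perfectly distinguishable
```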
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of carrier tracking loop signal to noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.
Yau, Joanna Oi-Yue; McNally, Gavan P
2015-01-07
Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning are instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received the AAV-hSYN-eYFP vector and was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq and were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq and were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions.
Probability Theory, Not the Very Guide of Life
ERIC Educational Resources Information Center
Juslin, Peter; Nilsson, Hakan; Winman, Anders
2009-01-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive…
Development of a Methodology to Optimally Allocate Visual Inspection Time
1989-06-01
...Model and then takes into account the costs of the errors. The purpose of the Alternative Model is to not make costly mistakes while meeting the... ...worker error, the probability of inspector error, and the cost of system error. Paired comparisons of error phenomena from operational personnel are...
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
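The effect can be illustrated numerically: scrambled Sobol' sequences form exactly such an ensemble of quasi-random point sets, and their integration error on a smooth test integrand is typically far smaller than classical Monte Carlo at the same number of points (the test function and seeds below are arbitrary).

```python
import numpy as np
from scipy.stats import qmc

f = lambda pts: pts[:, 0] * pts[:, 1]   # integrand on [0,1]^2; exact integral = 1/4
n = 1024

rng = np.random.default_rng(0)
mc_est = f(rng.random((n, 2))).mean()               # classical Monte Carlo

sobol = qmc.Sobol(d=2, scramble=True, seed=0)       # one member of the QMC ensemble
qmc_est = f(sobol.random(n)).mean()

print(abs(mc_est - 0.25), abs(qmc_est - 0.25))      # absolute integration errors
```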
On the Discriminant Analysis in the 2-Populations Case
NASA Astrophysics Data System (ADS)
Rublík, František
2008-01-01
The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability of error. From this point of view, the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. These methods are also compared with the K nearest neighbours classification rule.
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
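The first estimator, with a data-driven choice of the kernel scaling factor, can be sketched with SciPy's `gaussian_kde` (whose default bandwidth rule is a later development along these lines; the sample and grid here are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)              # random sample from the unknown density

kde = gaussian_kde(sample, bw_method="scott")   # automatic kernel scaling factor
grid = np.linspace(-4.0, 4.0, 201)
density = kde(grid)

mass = kde.integrate_box_1d(-4.0, 4.0)          # should be close to 1
mode = grid[np.argmax(density)]                 # should be near the true mode, 0
print(mass, mode)
```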
Conflict Probability Estimation for Free Flight
NASA Technical Reports Server (NTRS)
Paielli, Russell A.; Erzberger, Heinz
1996-01-01
The safety and efficiency of free flight will benefit from automated conflict prediction and resolution advisories. Conflict prediction is based on trajectory prediction; however, the farther in advance the prediction is made, the less certain it becomes. An estimate is therefore needed of the probability that a conflict will occur, given a pair of predicted trajectories and their levels of uncertainty. A method is developed in this paper to estimate that conflict probability. The trajectory prediction errors are modeled as normally distributed, and the two error covariances for an aircraft pair are combined into a single equivalent covariance of the relative position. A coordinate transformation is then used to derive an analytical solution. Numerical examples and Monte Carlo validation are presented.
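The covariance-combination step can be sketched numerically. Here the conflict probability is estimated by Monte Carlo over the relative-position distribution (the paper instead derives an analytical solution after a coordinate transformation), and the geometry and covariances are invented:

```python
import numpy as np

# Predicted 2-D position-error covariances for each aircraft (illustrative).
cov_a = np.array([[1.0, 0.2], [0.2, 0.5]])
cov_b = np.array([[0.8, -0.1], [-0.1, 0.6]])
cov_rel = cov_a + cov_b            # single equivalent covariance of relative position

rel_mean = np.array([6.0, 2.0])    # predicted relative position at closest approach (nmi)
sep = 5.0                          # required horizontal separation (nmi)

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(rel_mean, cov_rel, size=200_000)
p_conflict = float(np.mean(np.hypot(samples[:, 0], samples[:, 1]) < sep))
print(p_conflict)                  # probability the pair violates separation
```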
Memorabeatlia: a naturalistic study of long-term memory.
Hyman, I E; Rubin, D C
1990-03-01
Seventy-six undergraduates were given the titles and first lines of Beatles' songs and asked to recall the songs. Seven hundred and four different undergraduates were cued with one line from each of 25 Beatles' songs and asked to recall the title. The probability of recalling a line was best predicted by the number of times a line was repeated in the song and how early the line first appeared in the song. The probability of cuing to the title was best predicted by whether the line shared words with the title. Although the subjects recalled only 21% of the lines, there were very few errors in recall, and the errors rarely violated the rhythmic, poetic, or thematic constraints of the songs. Acting together, these constraints can account for the near verbatim recall observed. Fourteen subjects, who transcribed one song, made fewer and different errors than the subjects who had recalled the song, indicating that the errors in recall were not primarily the result of errors in encoding.
The ranking probability approach and its usage in design and analysis of large-scale studies.
Kuo, Chia-Ling; Zaykin, Dmitri
2013-01-01
In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is for it to be the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting hypotheses with [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to a very good approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced for convenience by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with the respective abundances are used instead of a single typical effect size value.
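The r = 1 case (the probability that the single smallest P-value is a true signal) is easy to explore by simulation; the number of tests, signal count, and effect size below are arbitrary, and the Bonferroni-level power is printed alongside only for comparison:

```python
import numpy as np
from scipy.stats import norm

m, n_signals, effect = 1000, 50, 3.0
alpha = 0.05 / m                          # Bonferroni-adjusted per-test level
rng = np.random.default_rng(0)

reps, hits = 2000, 0
for _ in range(reps):
    z = rng.normal(0.0, 1.0, m)
    z[:n_signals] += effect               # true signals have shifted means
    pvals = 2.0 * norm.sf(np.abs(z))
    hits += pvals.argmin() < n_signals    # is the top-ranked test a true signal?

p_rank1 = hits / reps                     # ranking probability for r = 1
power_bonf = norm.sf(norm.isf(alpha / 2) - effect)  # two-sided per-test power at alpha/m
print(p_rank1, power_bonf)
```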
Optimizer convergence and local minima errors and their clinical importance
NASA Astrophysics Data System (ADS)
Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.
2003-09-01
Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
Fusion of Scores in a Detection Context Based on Alpha Integration.
Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson
2015-09-01
We present a new method for fusing scores corresponding to different detectors (two-hypotheses case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for the three methods. Different experiments with simulated and real data are included. Simulated data consider the two-detector case to illustrate the factors influencing alpha integration and demonstrate the improvements obtained by score fusion with respect to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data have been processed. This case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert detections of arousal during sleeping. This case is representative of scenarios in which probability of error is to be minimized. The general superior performance of alpha integration verifies the interest of optimizing the fusing parameters.
Learn-as-you-go acceleration of cosmological parameter estimates
NASA Astrophysics Data System (ADS)
Aslanyan, Grigor; Easther, Richard; Price, Layne C.
2015-09-01
Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes
NASA Astrophysics Data System (ADS)
Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team
We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECC). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all digital phase locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements and their functions and the phase lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities for the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error states, taken modulo 2π, form a finite-state Markov chain, which enables the calculation of steady state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
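The modulo-2π Markov chain analysis can be sketched with a small quantized phase-error chain; the eight states and the correction/drift probabilities below are invented rather than derived from the loop equations:

```python
import numpy as np

n = 8
phase = np.linspace(-np.pi, np.pi, n, endpoint=False) + np.pi / n  # state centers
center = int(np.argmin(np.abs(phase)))                             # state nearest zero error

# Toy transition matrix on the ring: step toward zero phase error with
# probability 0.7 (loop correction), away with probability 0.3 (noise).
P = np.zeros((n, n))
for i in range(n):
    if i == center:
        P[i, i] = 0.7
        P[i, (i + 1) % n] = 0.15
        P[i, (i - 1) % n] = 0.15
    else:
        step = 1 if i < center else -1
        P[i, (i + step) % n] = 0.7
        P[i, (i - step) % n] = 0.3

# Steady-state probabilities: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi_stat = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi_stat /= pi_stat.sum()

rms_phase_error = float(np.sqrt(np.sum(pi_stat * phase**2)))
print(pi_stat.round(4), rms_phase_error)
```

The same recipe (stationary distribution, then moments of the phase-error states) yields the RMS phase error the abstract refers to.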
Inference of emission rates from multiple sources using Bayesian probability theory.
Yee, Eugene; Flesch, Thomas K
2010-03-01
The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
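For a linear dispersion model with Gaussian measurement errors and a Gaussian prior, the posterior over emission rates is available in closed form, which captures the inference (rather than inversion) viewpoint; the dispersion matrix, rates, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 8, 4
A = rng.uniform(0.1, 1.0, (n_sensors, n_sources))  # dispersion model: c = A @ q + noise
q_true = np.array([2.0, 0.5, 1.0, 3.0])            # "unknown" emission rates
sigma = 0.1                                        # concentration measurement noise std
c = A @ q_true + rng.normal(0.0, sigma, n_sensors) # synthetic concentration data

# Conjugate Gaussian prior q ~ N(0, tau^2 I) gives a Gaussian posterior.
tau = 10.0
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n_sources) / tau**2)
post_mean = post_cov @ (A.T @ c) / sigma**2
print(post_mean)   # posterior mean emission rates, close to q_true
```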
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the Cross-Entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that implements a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates to during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates to after being trained to minimize the Cross-Entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection, maintaining the probability of false alarm below or equal to a predefined value. Some experiments about signal detection using neural networks are also presented to test the validity of the study.
A multistate dynamic site occupancy model for spatially aggregated sessile communities
Fukaya, Keiichi; Royle, J. Andrew; Okuda, Takehiro; Nakaoka, Masahiro; Noda, Takashi
2017-01-01
Estimation of transition probabilities of sessile communities seems easy in principle but may still be difficult in practice because resampling error (i.e. a failure to resample exactly the same location at fixed points) may cause significant estimation bias. Previous studies have developed novel analytical methods to correct for this estimation bias. However, they did not consider the local structure of community composition induced by the aggregated distribution of organisms that is typically observed in sessile assemblages and is very likely to affect observations. We developed a multistate dynamic site occupancy model to estimate transition probabilities that accounts for resampling errors associated with local community structure. The model applies a nonparametric multivariate kernel smoothing methodology to the latent occupancy component to estimate the local state composition near each observation point, which is assumed to determine the probability distribution of data conditional on the occurrence of resampling error. By using computer simulations, we confirmed that an observation process that depends on local community structure may bias inferences about transition probabilities. By applying the proposed model to a real data set of intertidal sessile communities, we also showed that estimates of transition probabilities and of the properties of community dynamics may differ considerably when spatial dependence is taken into account. Results suggest the importance of accounting for resampling error and local community structure for developing management plans that are based on Markovian models. Our approach provides a solution to this problem that is applicable to broad sessile communities. It can even accommodate an anisotropic spatial correlation of species composition, and may also serve as a basis for inferring complex nonlinear ecological dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Albin, E-mail: albin.fredriksson@raysearchlabs.com; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
On the Determinants of the Conjunction Fallacy: Probability versus Inductive Confirmation
ERIC Educational Resources Information Center
Tentori, Katya; Crupi, Vincenzo; Russo, Selena
2013-01-01
Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such…
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation does give the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
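The error-propagation step (mapping the covariance of the orbital elements to the covariance of a derived quantity such as predicted sky position) is a single matrix congruence; the Jacobian and element covariance below are illustrative stand-ins:

```python
import numpy as np

# Covariance of two orbital elements at a given epoch (illustrative units).
sigma_elements = np.array([[1.0e-6, 2.0e-7],
                           [2.0e-7, 5.0e-7]])

# Jacobian of the predicted position with respect to those elements,
# e.g. obtained by numerically differentiating the ephemeris.
J = np.array([[3.0, 0.5],
              [0.2, 4.0]])

# Law of error propagation: Sigma_pos = J @ Sigma_elem @ J^T.
sigma_position = J @ sigma_elements @ J.T
print(sigma_position)   # covariance defining the positional uncertainty ellipse
```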
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
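The integration of a bivariate Gaussian over an off-center disk, as described above, can be approximated by Monte Carlo sampling. All numbers below (ellipse location, covariance, radius) are illustrative, not operational values from the technique:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([100.0, -50.0])      # most likely stroke location, m (assumed)
cov = np.array([[200.0**2, 0.0],
                [0.0, 120.0**2]])    # location error ellipse as a covariance
poi = np.array([0.0, 0.0])           # point of interest
radius = 500.0                       # m

# Sample stroke locations from the error ellipse and count those that fall
# within the specified radius of the point of interest.
samples = rng.multivariate_normal(mean, cov, size=200_000)
p_within = np.mean(np.hypot(*(samples - poi).T) <= radius)
```

A closed-form or quadrature evaluation would replace the sampling step in an operational setting; the Monte Carlo version just makes the geometry explicit.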
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep learning has been used extensively for image classification in recent years, and its use for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method yields a total error of 5.22%, a type I error of 4.10%, and a type II error of 15.07%. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, yielding a total error of 4.02%, a type I error of 2.15%, and a type II error of 6.14%.
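The error definitions used in ground-filter evaluations of this kind can be made concrete from a confusion matrix: type I error is the fraction of true ground points wrongly rejected, type II error the fraction of true non-ground points wrongly accepted. The counts below are invented for illustration, not the paper's results:

```python
# Hypothetical confusion-matrix counts for a ground filter.
ground_kept, ground_rejected = 9_000, 400        # true ground points
nonground_rejected, nonground_kept = 5_000, 800  # true non-ground points

type_i = ground_rejected / (ground_kept + ground_rejected)
type_ii = nonground_kept / (nonground_rejected + nonground_kept)
total_error = (ground_rejected + nonground_kept) / (
    ground_kept + ground_rejected + nonground_rejected + nonground_kept)
```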
ERIC Educational Resources Information Center
Wang, Lin
The literature is reviewed regarding the differences between planned contrasts, omnibus ANOVA tests, and unplanned contrasts. The relationship between the statistical power of a test method and its Type I and Type II error rates is first explored to provide a framework for the discussion. The concepts and formulation of contrasts, orthogonal and non-orthogonal contrasts are…
Error Patterns in Ordering Fractions among At-Risk Fourth-Grade Students
ERIC Educational Resources Information Center
Malone, Amelia S.; Fuchs, Lynn S.
2017-01-01
The three purposes of this study were to (a) describe fraction ordering errors among at-risk fourth grade students, (b) assess the effect of part-whole understanding and accuracy of fraction magnitude estimation on the probability of committing errors, and (c) examine the effect of students' ability to explain comparing problems on the probability…
Bayesian Redshift Classification of Emission-line Galaxies with Photometric Equivalent Widths
NASA Astrophysics Data System (ADS)
Leung, Andrew S.; Acquaviva, Viviana; Gawiser, Eric; Ciardullo, Robin; Komatsu, Eiichiro; Malz, A. I.; Zeimann, Gregory R.; Bridge, Joanna S.; Drory, Niv; Feldmeier, John J.; Finkelstein, Steven L.; Gebhardt, Karl; Gronwall, Caryl; Hagen, Alex; Hill, Gary J.; Schneider, Donald P.
2017-07-01
We present a Bayesian approach to the redshift classification of emission-line galaxies when only a single emission line is detected spectroscopically. We consider the case of surveys for high-redshift Lyα-emitting galaxies (LAEs), which have traditionally been classified via an inferred rest-frame equivalent width W_Lyα greater than 20 Å. Our Bayesian method relies on known prior probabilities in measured emission-line luminosity functions and EW distributions for the galaxy populations, and returns the probability that an object in question is an LAE given the characteristics observed. This approach will be directly relevant for the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX), which seeks to classify ~10^6 emission-line galaxies into LAEs and low-redshift [O II] emitters. For a simulated HETDEX catalog with realistic measurement noise, our Bayesian method recovers 86% of LAEs missed by the traditional W_Lyα > 20 Å cutoff over 2 < z < 3, outperforming the EW cut in both contamination and incompleteness. This is due to the method's ability to trade off between the two types of binary classification error by adjusting the stringency of the probability requirement for classifying an observed object as an LAE. In our simulations of HETDEX, this method reduces the uncertainty in cosmological distance measurements by 14% with respect to the EW cut, equivalent to recovering 29% more cosmological information. Rather than using binary object labels, this method enables the use of classification probabilities in large-scale structure analyses. It can be applied to narrowband emission-line surveys as well as upcoming large spectroscopic surveys including Euclid and WFIRST.
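The core of such a two-class Bayesian classifier is a single application of Bayes' rule. The sketch below uses made-up priors and likelihoods; the real method derives them from measured luminosity functions and EW distributions:

```python
# Toy two-class posterior: is the detected line from an LAE or an [O II] emitter?
# Priors and likelihoods are illustrative placeholders.
prior_lae, prior_oii = 0.6, 0.4
like_lae, like_oii = 0.02, 0.005   # p(observed flux, EW | class), assumed

evidence = prior_lae * like_lae + prior_oii * like_oii
p_lae = prior_lae * like_lae / evidence   # posterior probability of LAE

# The classification threshold on p_lae sets the trade-off between
# contamination and incompleteness described in the abstract.
is_lae = p_lae > 0.5
```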
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, allowing us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction; no adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
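Gaussian-process regression over past error-rate estimates, as the abstract describes, can be sketched with a squared-exponential kernel. The observation times, rates, and kernel hyperparameters below are all assumptions for illustration:

```python
import numpy as np

def rbf(a, b, ell=2.0, var=1.0):
    # Squared-exponential kernel; hyperparameters are assumed, not fitted.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical error-rate estimates from ten past error correction rounds.
t_obs = np.arange(10.0)
r_obs = 0.01 + 0.001 * t_obs            # slowly drifting error rate (made up)
K = rbf(t_obs, t_obs) + 1e-6 * np.eye(10)   # small jitter for stability

# GP posterior mean at an intermediate time, from the standard formula
# m(t*) = k(t*, T) K^{-1} r.
t_new = np.array([4.5])
r_pred = float(rbf(t_new, t_obs) @ np.linalg.solve(K, r_obs))
```

A full implementation would also track the posterior variance and refit hyperparameters as new correction data arrive.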
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.; Marino, J. T., Jr.
1974-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise-linear model for an adaptive threshold detection system is presented. Bit error probabilities for non-optimum threshold detection systems were also investigated.
Threshold detection in an on-off binary communications channel with atmospheric scintillation
NASA Technical Reports Server (NTRS)
Webb, W. E.
1975-01-01
The optimum detection threshold in an on-off binary optical communications system operating in the presence of atmospheric turbulence was investigated, assuming a Poisson detection process and log-normal scintillation. The dependence of the probability of bit error on log-amplitude variance and received signal strength was analyzed, and semi-empirical relationships to predict the optimum detection threshold were derived. On the basis of this analysis, a piecewise-linear model for an adaptive threshold detection system is presented. The bit error probabilities for nonoptimum threshold detection systems were also investigated.
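The bit-error calculation underlying both abstracts above can be illustrated for a fixed threshold and Poisson photocounts, before averaging over scintillation. The mean count levels and threshold below are made-up values, not the papers' numbers:

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    # P(X <= k) for a Poisson random variable with mean lam.
    return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

# Assumed mean photocounts for the "off" and "on" symbols (illustrative).
lam_off, lam_on = 2.0, 20.0
threshold = 8   # decide "on" if the count exceeds the threshold

p_false_on = 1.0 - poisson_cdf(threshold, lam_off)  # error when bit is 0
p_miss = poisson_cdf(threshold, lam_on)             # error when bit is 1
ber = 0.5 * (p_false_on + p_miss)                   # equiprobable bits
```

With log-normal scintillation, `lam_on` fluctuates and the BER is obtained by averaging this expression over the irradiance distribution; the optimum threshold minimizes that average.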
NASA Astrophysics Data System (ADS)
Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.
Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification, to aid in our ability to correlate data to specific RSOs. In this work, we focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produce as small an information gain as possible, with the information change bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of one's risk tolerance: a small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or may even be semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred.
In this work, we will explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information theoretic content.
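Wald's Sequential Probability Ratio Test, referenced above, accumulates log-likelihood ratios until one of two boundaries fixed by α and β is crossed. The per-observation likelihood ratios below are invented for illustration:

```python
from math import log

# Wald's SPRT boundaries for chosen Type I / Type II error targets.
alpha, beta = 0.05, 0.10
upper = log((1 - beta) / alpha)   # cross above: accept H1 (a change occurred)
lower = log(beta / (1 - alpha))   # cross below: accept H0 (no change)

# Accumulate log-likelihood ratios of incoming evidence; the individual
# likelihood ratios here are made-up values favoring H1.
llr, decision = 0.0, None
for lr in [2.0, 2.5, 2.2, 2.4]:
    llr += log(lr)
    if llr >= upper:
        decision = "change"
        break
    if llr <= lower:
        decision = "no change"
        break
```

Between the two boundaries the test remains undecided and simply waits for more evidence, which is exactly the behavior the authors want when characterization evidence is ambiguous.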
Effects of structural error on the estimates of parameters of dynamical systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
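One simple check of the Gaussian assumption described above: if position errors truly follow a zero-mean Gaussian with covariance P, their squared Mahalanobis distances are chi-square distributed, with mean equal to the dimension. The covariance and sample data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
# Assumed 2-D position error covariance (placeholder values).
P = np.array([[4.0, 1.0],
              [1.0, 2.0]])
errors = rng.multivariate_normal([0.0, 0.0], P, size=20_000)

# Squared Mahalanobis distances d^2 = e^T P^{-1} e for each error sample.
P_inv = np.linalg.inv(P)
d2 = np.einsum("ni,ij,nj->n", errors, P_inv, errors)
mean_d2 = d2.mean()   # should be close to the dimension (2) if P is adequate
```

A systematic departure of `mean_d2` from 2 would suggest the predicted covariance needs correction (e.g., a scale factor), which is the spirit of the validation discussed.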
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
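The 2x2 contingency-table metrics named above follow standard forecast-verification definitions. The sketch below computes several of them from scratch (not via PyForecastTools itself) for a hypothetical set of counts:

```python
# Hypothetical 2x2 contingency-table counts for a binary forecast.
hits, misses, false_alarms, correct_negs = 42, 8, 14, 136

pod = hits / (hits + misses)                       # probability of detection
pofd = false_alarms / (false_alarms + correct_negs)  # prob. of false detection
far = false_alarms / (hits + false_alarms)         # false alarm ratio
threat = hits / (hits + misses + false_alarms)     # threat score (CSI)
bias = (hits + false_alarms) / (hits + misses)     # frequency bias
```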
Zimmerman, Dale L; Fang, Xiangming; Mazumdar, Soumya; Rushton, Gerard
2007-01-10
The assignment of a point-level geocode to subjects' residences is an important data assimilation component of many geographic public health studies. Often, these assignments are made by a method known as automated geocoding, which attempts to match each subject's address to an address-ranged street segment georeferenced within a streetline database and then interpolate the position of the address along that segment. Unfortunately, this process results in positional errors. Our study sought to model the probability distribution of positional errors associated with automated geocoding and E911 geocoding. Positional errors were determined for 1423 rural addresses in Carroll County, Iowa as the vector difference between each 100%-matched automated geocode and its true location as determined by orthophoto and parcel information. Errors were also determined for 1449 60%-matched geocodes and 2354 E911 geocodes. Huge (> 15 km) outliers occurred among the 60%-matched geocoding errors; outliers occurred for the other two types of geocoding errors also but were much smaller. E911 geocoding was more accurate (median error length = 44 m) than 100%-matched automated geocoding (median error length = 168 m). The empirical distributions of positional errors associated with 100%-matched automated geocoding and E911 geocoding exhibited a distinctive Greek-cross shape and had many other interesting features that were not capable of being fitted adequately by a single bivariate normal or t distribution. However, mixtures of t distributions with two or three components fit the errors very well. Mixtures of bivariate t distributions with few components appear to be flexible enough to fit many positional error datasets associated with geocoding, yet parsimonious enough to be feasible for nascent applications of measurement-error methodology to spatial epidemiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
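The Parzen-window density estimate that underpins the error-entropy criterion can be sketched as follows; the error samples and bandwidth are synthetic assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.5, size=400)   # synthetic tracking errors
h = 0.2                                   # Gaussian kernel bandwidth (assumed)

def parzen_pdf(x, data, h):
    # Gaussian-kernel Parzen window estimate of the density at points x.
    z = (x[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * z**2), axis=1) / (h * np.sqrt(2 * np.pi))

# Resubstitution estimate of the (Shannon) error entropy:
p = parzen_pdf(errors, errors, h)
entropy = -np.mean(np.log(p))
```

In the control setting, the controller parameters are adjusted recursively to drive this entropy down, concentrating the error density rather than merely shrinking its variance.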
Wang, Ping; Liu, Xiaoxia; Cao, Tian; Fu, Huihua; Wang, Ranran; Guo, Lixin
2016-09-20
The impact of nonzero boresight pointing errors on the system performance of decode-and-forward protocol-based multihop parallel optical wireless communication systems is studied. For the aggregated fading channel, the atmospheric turbulence is simulated by an exponentiated Weibull model, and pointing errors are described by one recently proposed statistical model including both boresight and jitter. The binary phase-shift keying subcarrier intensity modulation-based analytical average bit error rate (ABER) and outage probability expressions are achieved for a nonidentically and independently distributed system. The ABER and outage probability are then analyzed with different turbulence strengths, receiving aperture sizes, structure parameters (P and Q), jitter variances, and boresight displacements. The results show that aperture averaging offers almost the same system performance improvement with boresight included or not, despite the values of P and Q. The performance enhancement owing to the increase of cooperative path (P) is more evident with nonzero boresight than that with zero boresight (jitter only), whereas the performance deterioration because of the increasing hops (Q) with nonzero boresight is almost the same as that with zero boresight. Monte Carlo simulation is offered to verify the validity of ABER and outage probability expressions.
Genetic Algorithm-Based Motion Estimation Method using Orientations and EMGs for Robot Controls
Chae, Jeongsook; Jin, Yong; Sung, Yunsick
2018-01-01
Demand for interactive wearable devices is rapidly increasing with the development of smart devices. To accurately utilize wearable devices for remote robot controls, limited data should be analyzed and utilized efficiently. For example, the motions of a wearable device, called the Myo device, can be estimated by measuring its orientation and calculating a Bayesian probability based on these orientation data. Given that the Myo device can measure various types of data, the accuracy of its motion estimation can be increased by utilizing these additional data types. This paper proposes a motion estimation method based on weighted Bayesian probability and concurrently measured data: orientations and electromyograms (EMG). The most probable motion among those estimated is treated as the final estimated motion. Thus, recognition accuracy can be improved compared with traditional methods that employ only a single type of data. In our experiments, seven subjects performed five predefined motions. When only orientation is used, as in the traditional methods, the sum of the motion estimation errors is 37.3%; likewise, when only EMG data are used, the sum of the errors is also 37.3%. The proposed combined method has an error of 25%. Therefore, the proposed method reduces motion estimation errors by 12%. PMID:29324641
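A weighted combination of per-sensor posteriors, in the spirit of the method above, can be sketched as follows. The motion labels, per-sensor probabilities, and weights are all made up for illustration:

```python
# Hypothetical per-sensor posterior probabilities for three candidate motions.
motions = ["wave", "fist", "spread"]
p_orientation = {"wave": 0.5, "fist": 0.3, "spread": 0.2}
p_emg = {"wave": 0.2, "fist": 0.6, "spread": 0.2}
w_ori, w_emg = 0.55, 0.45   # assumed sensor weights (sum to 1)

# Weighted fusion; the most probable motion is the final estimate.
scores = {m: w_ori * p_orientation[m] + w_emg * p_emg[m] for m in motions}
estimated = max(scores, key=scores.get)
```

Here the orientation sensor alone would pick "wave", but the fused score favors "fist", illustrating how a second data type can override a single-sensor estimate.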
Matheny, Michael E; Normand, Sharon-Lise T; Gross, Thomas P; Marinac-Dabic, Danica; Loyo-Berrios, Nilsa; Vidi, Venkatesan D; Donnelly, Sharon; Resnic, Frederic S
2011-12-14
Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery. A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying 30 days after surgery were at least twice those expected. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year. We examined a total of 83 hospital-year observations. The RA-SPRT method alerted 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and specificity of 98.8% (79/80). The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false-positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality improvement interventions.
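The agreement statistics reported in the abstract follow directly from its own counts (5 true outlier events, all detected; 80 non-event hospital-years, 1 false alert):

```python
# Recomputing the reported sensitivity and specificity from the abstract's counts.
true_events, detected = 5, 5
non_events, false_alerts = 80, 1

sensitivity = detected / true_events                      # 5/5 = 100%
specificity = (non_events - false_alerts) / non_events    # 79/80 = 98.8%
```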
Modeling habitat dynamics accounting for possible misclassification
Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.
2012-01-01
Land cover data are widely used in ecology because land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging, since error in classification can be confused with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel), and when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics, and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.
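The turnover inflation described above is easy to reproduce in a toy simulation: with two habitat states, a 5% true transition rate, and a 10% misclassification rate at each time step (all numbers invented), the apparent turnover is several times the true turnover:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p_stay, p_misclass = 100_000, 0.95, 0.10

# True states at two times: each pixel keeps its state with prob. p_stay.
true_t1 = rng.integers(0, 2, size=n)
true_t2 = np.where(rng.random(n) < p_stay, true_t1, 1 - true_t1)

# Observed states: each observation is flipped with prob. p_misclass.
obs_t1 = np.where(rng.random(n) < p_misclass, 1 - true_t1, true_t1)
obs_t2 = np.where(rng.random(n) < p_misclass, 1 - true_t2, true_t2)

true_turnover = np.mean(true_t1 != true_t2)      # ~0.05 by construction
apparent_turnover = np.mean(obs_t1 != obs_t2)    # inflated by misclassification
```

Analytically the apparent rate is 0.95(2·0.1·0.9) + 0.05(0.9² + 0.1²) ≈ 0.21, roughly four times the true rate, which is the kind of distortion the state-uncertainty model is designed to remove.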
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models considering the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. To test the rationality and validity of the models reported in the literature, the two boundary values separating the three damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between mean square error and the two boundary values from which the minimum mean square error was obtained; compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
Estimating alarm thresholds and the number of components in mixture distributions
NASA Astrophysics Data System (ADS)
Burr, Tom; Hamada, Michael S.
2012-09-01
Mixtures of probability distributions arise in many nuclear assay and forensic applications, including nuclear weapon detection, neutron multiplicity counting, and in solution monitoring (SM) for nuclear safeguards. SM data is increasingly used to enhance nuclear safeguards in aqueous reprocessing facilities having plutonium in solution form in many tanks. This paper provides background for mixture probability distributions and then focuses on mixtures arising in SM data. SM data can be analyzed by evaluating transfer-mode residuals defined as tank-to-tank transfer differences, and wait-mode residuals defined as changes during non-transfer modes. A previous paper investigated impacts on transfer-mode and wait-mode residuals of event marking errors which arise when the estimated start and/or stop times of tank events such as transfers are somewhat different from the true start and/or stop times. Event marking errors contribute to non-Gaussian behavior and larger variation than predicted on the basis of individual tank calibration studies. This paper illustrates evidence for mixture probability distributions arising from such event marking errors and from effects such as condensation or evaporation during non-transfer modes, and pump carryover during transfer modes. A quantitative assessment of the sample size required to adequately characterize a mixture probability distribution arising in any context is included.
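A two-component mixture of the kind described, an in-control residual component plus an occasional event-marking-error component, produces the non-Gaussian, heavy-tailed behavior noted in the abstract. All weights, means, and scales below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n, w_outlier = 50_000, 0.05   # assumed 5% event-marking-error component

# Draw residuals from a two-component Gaussian mixture.
component = rng.random(n) < w_outlier
residuals = np.where(component,
                     rng.normal(0.0, 5.0, n),   # marking-error component
                     rng.normal(0.0, 1.0, n))   # in-control component

# Heavier tails than any single Gaussian: excess kurtosis is strongly positive.
excess_kurtosis = np.mean(residuals**4) / np.var(residuals)**2 - 3.0
```

For a single Gaussian the excess kurtosis is zero; a large positive value like this is exactly the signature one would use when assessing how many samples are needed to characterize the mixture.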
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
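The canonical form described above, a distribution proportional to exp(−βE) over parameter values, can be sketched on a toy one-dimensional grid. The error surface and sensitivity factor below are illustrative stand-ins, not quantities from the measurements:

```python
import numpy as np

# Toy grid of a single parameter (e.g., a sound-speed ratio) and a made-up
# error function E over that grid.
params = np.linspace(0.9, 1.1, 101)
E = (params - 1.02) ** 2          # illustrative error surface
beta = 2000.0                     # sensitivity factor to the error function

# Canonical (maximum-entropy) conditional distribution exp(-beta*E)/Z.
weights = np.exp(-beta * E)
posterior = weights / weights.sum()
best = params[np.argmax(posterior)]
```

Marginals over several parameters would follow by summing this joint form over the other grid dimensions, as the abstract describes.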
Average BER and outage probability of the ground-to-train OWC link in turbulence with rain
NASA Astrophysics Data System (ADS)
Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da
2017-09-01
The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for ground-to-train communication along a curved track in turbulence with rain are evaluated. Considering the re-modulation effects of rain fluctuation on an optical signal modulated by turbulence, we set up models of the average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and by the pointing errors induced by beam wander and train vibration. The effect of the rain rate on the performance of the link is more severe than that of the atmospheric turbulence, but the fluctuation owing to the atmospheric turbulence affects the laser beam propagation more strongly than the skewness of the rain distribution. Besides, the turbulence-induced beam wander has a more significant impact on the system in heavier rain. The size of the transmitting and receiving apertures can be chosen, and the shockproof performance of the tracks improved, to optimize the communication performance of the system.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude. I.
NASA Astrophysics Data System (ADS)
Conn, A. R.; Lewis, G. F.; Ibata, R. A.; Parker, Q. A.; Zucker, D. B.; McConnachie, A. W.; Martin, N. F.; Irwin, M. J.; Tanvir, N.; Fardal, M. A.; Ferguson, A. M. N.
2011-10-01
We present a new approach for identifying the tip of the red giant branch (TRGB) which, as we show, works robustly even on sparsely populated targets. Moreover, the approach is highly adaptable to the available data for the stellar population under study, with prior information readily incorporable into the algorithm. The uncertainty in the derived distances is also made tangible and easily calculable from posterior probability distributions. We provide an outline of the development of the algorithm and present the results of tests designed to characterize its capabilities and limitations. We then apply the new algorithm to three M31 satellites: Andromeda I, Andromeda II, and the fainter Andromeda XXIII, using data from the Pan-Andromeda Archaeological Survey (PAndAS), and derive their distances as 731 (+5/-4) +18/-17 kpc, 634 (+2/-2) +15/-14 kpc, and 733 (+13/-11) +23/-22 kpc, respectively, where the errors appearing in parentheses are the components intrinsic to the method, while the larger values give the errors after accounting for additional sources of error. These results agree well with the best distance determinations in the literature and provide the smallest uncertainties to date. This paper is an introduction to the workings and capabilities of our new approach in its basic form, while a follow-up paper shall make full use of the method's ability to incorporate priors and use the resulting algorithm to systematically obtain distances to all of M31's satellites identifiable in the PAndAS survey area.
NASA Technical Reports Server (NTRS)
De Muer, D.; De Backer, H.; Zawodny, J. M.; Veiga, R. E.
1990-01-01
The ozone profiles obtained from 24 balloon soundings at Uccle (50 deg 48 min N, 4 deg 21 min E) made with electrochemical ozonesondes were used as correlative data for SAGE II ozone profiles retrieved within a distance of at most 600 km from Uccle. The agreement between the two data sets is in general quite good, especially for profiles nearly coincident in time and space, and during periods of little dynamic activity over the area considered. The percent difference between the ozone column densities of the mean balloon and SAGE profiles is 4.4 percent (-3.3 percent) in the altitude region between 10 and 26 km. From a statistical analysis it appears that there is a small but meaningful difference between the mean profiles at the level of the ozone maximum and around the 30-km level. An error analysis of both data sets gives similar results, leading to the conclusion that these differences are instrumentally induced. However, differences between the mean profiles in the lower stratosphere are probably real and due to the high ozone variability in time and space in that altitude region.
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static-system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
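The output-error method named above can be sketched in its simplest setting, a scalar linear system with one unknown state-transition parameter. The system, noise level, and grid search below are illustrative assumptions, not the book's formulation:

```python
import numpy as np

def simulate(a, x0, u):
    # Propagate the scalar linear system x[k+1] = a*x[k] + u[k].
    x = [x0]
    for uk in u:
        x.append(a * x[-1] + uk)
    return np.array(x)

rng = np.random.default_rng(0)
a_true = 0.8                              # assumed "unknown" parameter
u = rng.normal(size=200)                  # known input sequence
z = simulate(a_true, 0.0, u) + 0.01 * rng.normal(size=201)  # noisy output

def cost(a):
    # Output-error cost: squared mismatch between the measured output
    # and the model output simulated with a candidate parameter value.
    return np.sum((z - simulate(a, 0.0, u)) ** 2)

# Crude grid search over the candidate parameter; a real estimator
# would use a Gauss-Newton or similar iterative optimizer.
grid = np.linspace(0.5, 1.0, 501)
a_hat = grid[np.argmin([cost(a) for a in grid])]
print(a_hat)
```

With low measurement noise, the minimizer of the output-error cost recovers the true parameter closely.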
Sustained attention deficits among HIV-positive individuals with comorbid bipolar disorder.
Posada, Carolina; Moore, David J; Deutsch, Reena; Rooney, Alexandra; Gouaux, Ben; Letendre, Scott; Grant, Igor; Atkinson, J Hampton
2012-01-01
Difficulties with sustained attention have been found among both persons with HIV infection (HIV+) and bipolar disorder (BD). The authors examined sustained attention among 39 HIV+ individuals with BD (HIV+/BD+) and 33 HIV-infected individuals without BD (HIV+/BD-), using the Conners' Continuous Performance Test-II (CPT-II). A Global Assessment of Functioning (GAF) score was also assigned to each participant as an overall indicator of daily functioning abilities. HIV+/BD+ participants had significantly worse performance on CPT-II omission errors, hit reaction time SE (Hit RT SE), variability of SE, and perseverations than HIV+/BD- participants. When examining CPT-II performance over the six study blocks, both HIV+/BD+ and HIV+/BD- participants evidenced worse performance on scores of commission errors and reaction times as the test progressed. The authors also examined the effect of current mood state (i.e., manic, depressive, euthymic) on CPT-II performance, but no significant differences were observed across the various mood states. HIV+/BD+ participants had significantly worse GAF scores than HIV+/BD- participants, which indicates poorer overall functioning in the dually-affected group; among HIV+/BD+ persons, significant negative correlations were found between GAF scores and CPT-II omission and commission errors, detectability, and perseverations, indicating a possible relationship between decrements in sustained attention and worse daily-functioning outcomes.
Quantum illumination for enhanced detection of Rayleigh-fading targets
NASA Astrophysics Data System (ADS)
Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.
2017-08-01
Quantum illumination (QI) is an entanglement-enhanced sensing system whose performance advantage over a comparable classical system survives its usage in an entanglement-breaking scenario plagued by loss and noise. In particular, QI's error-probability exponent for discriminating between equally likely hypotheses of target absence or presence is 6 dB higher than that of the optimum classical system using the same transmitted power. This performance advantage, however, presumes that the target return, when present, has known amplitude and phase, a situation that seldom occurs in light detection and ranging (lidar) applications. At lidar wavelengths, most target surfaces are sufficiently rough that their returns are speckled, i.e., they have Rayleigh-distributed amplitudes and uniformly distributed phases. QI's optical parametric amplifier receiver—which affords a 3 dB better-than-classical error-probability exponent for a return with known amplitude and phase—fails to offer any performance gain for Rayleigh-fading targets. We show that the sum-frequency generation receiver [Zhuang et al., Phys. Rev. Lett. 118, 040801 (2017), 10.1103/PhysRevLett.118.040801]—whose error-probability exponent for a nonfading target achieves QI's full 6 dB advantage over optimum classical operation—outperforms the classical system for Rayleigh-fading targets. In this case, QI's advantage is subexponential: its error probability is lower than the classical system's by a factor of 1/ln(M κ̄ N_S/N_B) when M κ̄ N_S/N_B ≫ 1, with M ≫ 1 being the QI transmitter's time-bandwidth product, N_S ≪ 1 its brightness, κ̄ the target return's average intensity, and N_B the background light's brightness.
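The subexponential advantage factor can be made concrete with a quick numeric sketch. The parameter values below are illustrative assumptions, chosen only to satisfy the stated regimes (M ≫ 1, N_S ≪ 1, M κ̄ N_S/N_B ≫ 1), not values from the paper:

```python
import math

# Illustrative parameters (assumed, not from the paper):
M = 1e6            # QI transmitter's time-bandwidth product, M >> 1
NS = 0.01          # transmitter brightness, NS << 1
kappa_bar = 0.01   # target return's average intensity
NB = 1.0           # background light's brightness

x = M * kappa_bar * NS / NB      # the regime requires x >> 1
advantage = 1.0 / math.log(x)    # ratio of QI to classical error probability
print(x, advantage)
```

The factor shrinks only logarithmically in M κ̄ N_S/N_B, which is why the advantage is subexponential rather than a fixed exponent gain.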
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
Cashaback, Joshua G. A.; McGregor, Heather R.; Mohatarem, Ayman; Gribble, Paul L.
2017-01-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
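The dissociation hinges on a shift distribution whose mean and mode differ. A minimal sketch, assuming a right-skewed lognormal stand-in for the paper's actual lateral-shift distribution, shows the two candidate compensation targets coming apart:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed right-skewed shift distribution (a lognormal stand-in for the
# paper's skewed lateral-shift distribution): its mean exceeds its mode.
shifts = rng.lognormal(mean=0.0, sigma=0.75, size=200_000)

mean_shift = shifts.mean()          # target of an error-based loss function

# Mode estimated as the midpoint of the tallest histogram bin,
# the target of a reinforcement-based loss function.
hist, edges = np.histogram(shifts, bins=200)
i = np.argmax(hist)
mode_shift = 0.5 * (edges[i] + edges[i + 1])

print(mean_shift, mode_shift)
```

Because the two statistics disagree for any skewed distribution, where a participant aims reveals which loss function the sensorimotor system is minimizing.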
ERIC Educational Resources Information Center
Breaux, Kristina C.; Avitia, Maria; Koriakin, Taylor; Bray, Melissa A.; DeBiase, Emily; Courville, Troy; Pan, Xingyu; Witholt, Thomas; Grossman, Sandy
2017-01-01
This study investigated the relationship between specific cognitive patterns of strengths and weaknesses and the errors children make on oral language, reading, writing, spelling, and math subtests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants with scores from the KTEA-3 and either the Wechsler Intelligence…
The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs
Bridwell, David A.; Steele, Vaughn R.; Maurer, J. Michael; Kiehl, Kent A.; Calhoun, Vince D.
2014-01-01
Background The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck’s Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Methods Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. Results We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). Limitations These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. Conclusions These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. PMID:25451400
Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D
2009-07-01
The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series excluding central incisor views was taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimators were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III respectively. Type I is currently used for teaching and patient care at our institution and was considered as the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical difference (P
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
To address two problems of quasi-cyclic low-density parity-check (QC-LDPC) codes, namely high encoding complexity and a minimum distance that is not large enough, which degrades error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which improves the error-correction performance of the codes. The Tanner graphs of these codes contain no 4-cycles, so they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, a fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
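The girth property can be illustrated with the classic perfect (7, 3, 1) cyclic difference set {1, 2, 4} mod 7, an assumed toy example far smaller than practical codes: a circulant matrix built from a perfect difference set with λ = 1 has any two rows overlapping in exactly one position, so its Tanner graph contains no 4-cycles:

```python
import numpy as np
from itertools import combinations

# Perfect (7, 3, 1) cyclic difference set: every nonzero residue mod 7
# occurs exactly once as a difference of two distinct elements.
D = (1, 2, 4)
v = 7

# Circulant 0/1 matrix: row i has ones in columns (d + i) mod v, d in D.
H = np.zeros((v, v), dtype=int)
for i in range(v):
    for d in D:
        H[i, (d + i) % v] = 1

# With lambda = 1, any two rows share exactly one common '1', so the
# Tanner graph of H has no 4-cycles (no two checks share two variables).
overlaps = {int(H[i] @ H[j]) for i, j in combinations(range(v), 2)}
print(overlaps)
```

The printed set collapses to a single overlap value, confirming that no pair of check nodes shares two variable nodes, the structural condition for a 4-cycle.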
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model.
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEGs), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady-state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an overall error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we also introduce an a priori method to estimate the upper bound of its truncation error. This a priori estimate can be rapidly computed from reaction rates of the network, without the need of costly trial solutions of the dCME. As examples, we show results of applying our methods to the four stochastic networks of 1) the birth and death model, 2) the single gene expression model, 3) the genetic toggle switch model, and 4) the phage lambda bistable epigenetic switch model. We demonstrate how truncation errors and steady-state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories.
Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large number of stochastic networks. PMID:27105653
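The boundary-bound idea can be sketched on the first example network, a birth-death process. The rates and truncation size below are assumed for illustration, and a detailed-balance recursion stands in for a general dCME solver:

```python
import numpy as np

# Birth-death example (the simplest of the four networks above), with
# assumed rates: constant birth rate k, death rate g*n. The untruncated
# steady state is Poisson(k/g); we truncate the state space at N copies
# with a reflecting boundary and inspect the boundary probability.
k, g, N = 5.0, 1.0, 25

# Birth-death chains satisfy detailed balance, so the truncated
# steady state follows the recursion p[n+1] * g * (n+1) = p[n] * k.
p = np.ones(N + 1)
for n in range(N):
    p[n + 1] = p[n] * k / (g * (n + 1))
p /= p.sum()

# Probability mass on the reflecting boundary bounds the truncation error.
boundary_prob = p[N]
print(boundary_prob)
```

Here the boundary state carries negligible probability, signaling that truncating at N = 25 copies perturbs the steady-state landscape only slightly; a larger boundary probability would call for a larger truncation.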
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
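The underlying computation, integrating the error-ellipse density over a disk that need not be centered on the ellipse, can be sketched by Monte Carlo. The covariance, center, and radius below are illustrative assumptions, and the cited technique uses a deterministic integration of the bivariate Gaussian rather than sampling:

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_within_radius(mean, cov, center, radius, n=1_000_000):
    # P(stroke lands within `radius` of `center`) for a stroke location
    # distributed as bivariate Gaussian N(mean, cov), estimated by
    # sampling rather than the technique's deterministic integration.
    pts = rng.multivariate_normal(mean, cov, size=n)
    d2 = np.sum((pts - center) ** 2, axis=1)
    return float(np.mean(d2 <= radius ** 2))

# Assumed error ellipse (km): anisotropic and correlated, with the
# point of interest offset from the most likely stroke location.
mean = np.array([0.0, 0.0])
cov = np.array([[0.25, 0.10],
                [0.10, 0.50]])
p = prob_within_radius(mean, cov, center=np.array([0.5, 0.5]), radius=1.0)
print(p)
```

The isotropic, centered case has the closed form 1 − exp(−r²/2σ²), which provides a handy sanity check on the estimator.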
On the determinants of the conjunction fallacy: probability versus inductive confirmation.
Tentori, Katya; Crupi, Vincenzo; Russo, Selena
2013-02-01
Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such accounts with a different reading of the phenomenon based on the notion of inductive confirmation as defined by contemporary Bayesian theorists. Averaging rule hypotheses along with the random error model and many other existing proposals are shown to all imply that conjunction fallacy rates would rise as the perceived probability of the added conjunct does. By contrast, our account predicts that the conjunction fallacy depends on the added conjunct being perceived as inductively confirmed. Four studies are reported in which the judged probability versus confirmation of the added conjunct have been systematically manipulated and dissociated. The results consistently favor a confirmation-theoretic account of the conjunction fallacy against competing views. Our proposal is also discussed in connection with related issues in the study of human inductive reasoning.
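The probability/confirmation dissociation can be stated numerically. A toy sketch with assumed numbers (not the authors' stimuli), using the simple difference measure of confirmation:

```python
# Toy illustration with assumed numbers (not the authors' stimuli).
# "Confirmation" here is the difference measure c(h, e) = P(h|e) - P(h);
# the account above predicts fallacy rates track confirmation, not P(h).
def confirmation(p_h, p_h_given_e):
    return p_h_given_e - p_h

# Conjunct A: improbable on its own, yet strongly confirmed by the evidence.
c_a = confirmation(p_h=0.05, p_h_given_e=0.30)

# Conjunct B: probable on its own, yet disconfirmed by the evidence.
c_b = confirmation(p_h=0.70, p_h_given_e=0.50)

print(c_a, c_b)
```

Conjunct A is less probable than B, yet only A is inductively confirmed; the two notions can therefore be manipulated independently, which is exactly what the reported studies exploit.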
Legal consequences of the moral duty to report errors.
Hall, Jacqulyn Kay
2003-09-01
Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by such errors. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a minimum in law are noted. One predicted consequence is an accelerated trend toward government payment of compensation for errors. The effect of truth-telling on individuals is discussed.
An extended Reed Solomon decoder design
NASA Technical Reports Server (NTRS)
Chen, J.; Owsley, P.; Purviance, J.
1991-01-01
It has previously been shown that Reed-Solomon (RS) codes can correct errors beyond the Singleton and Rieger bounds with an arbitrarily small probability of miscorrection. That is, an (n,k) RS code can correct more than (n-k)/2 errors. An implementation of such an RS decoder is presented in this paper. An existing RS decoder, the AHA4010, is utilized in this work. This decoder is especially useful for error patterns consisting of a long burst plus some random errors.
NASA Astrophysics Data System (ADS)
Straus, D. M.
2007-12-01
The probability density function (pdf) of errors is followed in identical-twin studies using the COLA T63 AGCM, integrated with observed SST for 15 recent winters. Thirty integrations per winter (for 15 winters) are available with initial errors that are extremely small. The evolution of the pdf is tested for multi-modality, and the results are interpreted in terms of clusters/regimes found in: (a) the set of 15x30 integrations mentioned, and (b) a larger ensemble of 55x15 integrations made with the same GCM using the same SSTs. The mapping of pdf evolution and clusters is also carried out for each winter separately, using the clusters found in the 55-member ensemble for the same winter alone. This technique yields information on the change in regimes caused by different boundary forcing (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). Analysis of the growing errors in terms of baroclinic and barotropic components allows for interpretation of the corresponding instabilities.
Performance Analysis of an Inter-Relay Co-operation in FSO Communication System
NASA Astrophysics Data System (ADS)
Khanna, Himanshu; Aggarwal, Mona; Ahuja, Swaran
2018-04-01
In this work, we analyze the outage and error performance of a one-way inter-relay-assisted free-space optical link. We assume the absence of a direct link between the source and destination nodes and study the feasibility of such a system configuration. We consider the influence of path loss, atmospheric turbulence, and pointing-error impairments, and investigate the effect of these parameters on system performance. The turbulence-induced fading is modeled by independent but not necessarily identically distributed gamma-gamma fading statistics. Closed-form expressions for the outage probability and probability of error are derived and illustrated by numerical plots. It is concluded that the absence of a line-of-sight path between the source and destination nodes does not lead to significant performance degradation. Moreover, for the system model under consideration, interconnected relaying provides better error performance than non-interconnected relaying and dual-hop serial relaying techniques.
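The gamma-gamma fading model named above can be sampled as the product of two independent unit-mean gamma variates, one per turbulence scale. The turbulence parameters and threshold below are assumed for illustration, and a Monte Carlo estimate stands in for the paper's closed-form outage expressions:

```python
import numpy as np

rng = np.random.default_rng(3)

def outage_prob(alpha, beta, threshold, n=500_000):
    # Gamma-gamma irradiance: product of two independent unit-mean gamma
    # variates modeling large- and small-scale turbulence eddies.
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return float(np.mean(x * y < threshold))

# Assumed turbulence parameters (moderate turbulence) and an assumed
# normalized irradiance threshold.
p_out = outage_prob(alpha=4.0, beta=2.0, threshold=0.1)
print(p_out)
```

Outage is simply the probability that the received irradiance falls below the threshold; tightening the threshold (a better receiver sensitivity) lowers the estimated outage, as the sampler confirms.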
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for the d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
Outage probability of a relay strategy allowing intra-link errors utilizing Slepian-Wolf theorem
NASA Astrophysics Data System (ADS)
Cheng, Meng; Anwar, Khoirul; Matsumoto, Tad
2013-12-01
In conventional decode-and-forward (DF) one-way relay systems, a data block received at the relay node is discarded, if the information part is found to have errors after decoding. Such errors are referred to as intra-link errors in this article. However, in a setup where the relay forwards data blocks despite possible intra-link errors, the two data blocks, one from the source node and the other from the relay node, are highly correlated because they were transmitted from the same source. In this article, we focus on the outage probability analysis of such a relay transmission system, where source-destination and relay-destination links, Link 1 and Link 2, respectively, are assumed to suffer from the correlated fading variation due to block Rayleigh fading. The intra-link is assumed to be represented by a simple bit-flipping model, where some of the information bits recovered at the relay node are the flipped version of their corresponding original information bits at the source. The correlated bit streams are encoded separately by the source and relay nodes, and transmitted block-by-block to a common destination using different time slots, where the information sequence transmitted over Link 2 may be a noise-corrupted interleaved version of the original sequence. The joint decoding takes place at the destination by exploiting the correlation knowledge of the intra-link (source-relay link). It is shown that the outage probability of the proposed transmission technique can be expressed by a set of double integrals over the admissible rate range, given by the Slepian-Wolf theorem, with respect to the probability density function (pdf) of the instantaneous signal-to-noise power ratios (SNR) of Link 1 and Link 2. It is found that, with the Slepian-Wolf relay technique, as long as the correlation ρ of the complex fading variation satisfies |ρ| < 1, second-order diversity can be achieved only if the two bit streams are fully correlated.
This indicates that the diversity order exhibited in the outage curve converges to 1 when the bit streams are not fully correlated. Moreover, the Slepian-Wolf outage probability is proved to be smaller than that of the 2nd order maximum ratio combining (MRC) diversity, if the average SNRs of the two independent links are the same. Exact as well as asymptotic expressions of the outage probability are theoretically derived in the article. In addition, the theoretical outage results are compared with the frame-error-rate (FER) curves, obtained by a series of simulations for the Slepian-Wolf relay system based on bit-interleaved coded modulation with iterative detection (BICM-ID). It is shown that the FER curves exhibit the same tendency as the theoretical results.
The advanced receiver 2: Telemetry test results in CTA 21
NASA Technical Reports Server (NTRS)
Hinedi, S.; Bevan, R.; Marina, M.
1991-01-01
Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block-III Receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III Receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal to noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX-II telemetry performance is comparable and sometimes superior to the BLK-III/BPA and BLK-III/SDA/SSA combinations.
Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-06-01
The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose-volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients.
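The abstract above relies on the generalized equivalent uniform dose. As an illustrative sketch only (not the study's in-house software), the standard Niemierko form is gEUD = (sum_i v_i * D_i^a)^(1/a), where v_i are fractional volumes from the dose-volume histogram and a is a tissue-specific parameter:

```python
import numpy as np

# gEUD = (sum_i v_i * D_i**a)**(1/a); a = 1 gives the mean dose, while large a
# approaches the maximum dose, appropriate for serial organs at risk.
def geud(doses_gy, volumes, a):
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()  # normalize to fractional volumes
    d = np.asarray(doses_gy, dtype=float)
    return float(np.sum(v * d ** a) ** (1.0 / a))
```

A quick sanity check on any implementation: a uniform dose distribution must return that dose for every value of a.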
Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L
2015-09-15
The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments were undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP in real data reveal themselves using these methods. Specifically, the goals of this evaluation were to (1) assess if the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which disease registry specific operations influence the prevalence of AAEP estimates for specific attribute associations. 
The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the assessment of uncertainty in patient geocodes, relative to existing geocoding related uncertainty metrics.
Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J
2014-06-01
We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59% probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, H; Chen, Z; Nath, R
Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point's corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold.
Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes Law, and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. And when the second event is smaller than the first event, the maximum dependence is less than 1, as defined by Bayes Law. As such, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probability of two events.
Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
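The bounds the abstract describes follow directly from the laws of probability. A minimal sketch (function and variable names are mine, not THERP's): for two events with marginals pA and pB, the joint probability P(A and B) is confined between the Fréchet limits, and "negative dependence" is any joint probability below the independence value pA * pB.

```python
# Maximum positive dependence is min(pA, pB) - so "complete dependence" in the
# THERP sense (joint = pA) is only attainable when pB >= pA. Maximum negative
# dependence is max(0, pA + pB - 1), which is 0 for rare events.
def dependence_bounds(p_a, p_b):
    max_joint = min(p_a, p_b)              # maximum positive dependence
    min_joint = max(0.0, p_a + p_b - 1.0)  # maximum negative dependence
    independent = p_a * p_b
    return min_joint, independent, max_joint

# Rare human failure events: the minimum overlap is typically 0, and when
# pB < pA the largest possible conditional P(B|A) = min(pA, pB)/pA is below 1.
lo, ind, hi = dependence_bounds(0.01, 0.005)
```

This makes the abstract's point concrete: with pA = 0.01 and pB = 0.005, the maximum conditional probability P(B|A) is 0.005/0.01 = 0.5, well short of the "complete dependence" value of 1 that THERP's discrete levels assume.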
USDA-ARS?s Scientific Manuscript database
The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are highly expressed in the testis, suggesting an important role in testis biology. Gene coding errors prevent the production of GnRH-II and GnRHR-II in many species, but both genes are functional in swine. We have demo...
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm.
Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic dose errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
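The random-setup-error simulation described above amounts to blurring each beam's fluence with the setup-error probability density. A minimal 1D sketch of the mechanics (the study works on 2D per-beam fluence maps; the profile and numbers here are toy values, not the study's data):

```python
import numpy as np

# Convolve a fluence profile with a normalized Gaussian kernel whose sigma is
# the random setup error in mm.
def blur_fluence(fluence, sigma_mm, pixel_mm=1.0):
    half = int(np.ceil(4 * sigma_mm / pixel_mm))
    x = np.arange(-half, half + 1) * pixel_mm
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    kernel /= kernel.sum()  # discrete PDF: preserves total fluence
    return np.convolve(fluence, kernel, mode="same")

profile = np.zeros(101)
profile[40:61] = 1.0  # idealized open field, 21 mm wide
blurred = blur_fluence(profile, sigma_mm=3.0)
```

The blurred field keeps its total fluence but its edges wash out, which is why random errors mainly degrade dose gradients rather than shifting the whole distribution the way systematic errors do.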
Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain
ERIC Educational Resources Information Center
Nelson, Jonathan D.
2005-01-01
Several norms for how people should assess a question's usefulness have been proposed, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment,…
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
Circular Probable Error for Circular and Noncircular Gaussian Impacts
2012-09-01
[MATLAB fragment from the report; 1M simulated impacts] ph(k) = mean(imp(:,1).^2 + imp(:,2).^2 <= CEP^2); % hit frequency on CEP … end … phit(j) = mean(ph…); % avg 100 hit frequencies to "incr n" … end … % GRAPHICS: plot(i, phit, 'r-'); % error exponent versus Ph estimate
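The fragment above is extraction residue of the report's MATLAB. A self-contained Python sketch of the same Monte Carlo idea (names, sigmas, and sample size are placeholders, not the report's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the probability that a Gaussian impact lands inside a circle of
# radius CEP, mirroring mean(imp(:,1).^2 + imp(:,2).^2 <= CEP^2) above.
# Unequal sigma_x, sigma_y gives the noncircular (elliptical) case in the title.
def hit_probability(cep, sigma_x=1.0, sigma_y=1.0, n=1_000_000):
    x = rng.normal(0.0, sigma_x, n)
    y = rng.normal(0.0, sigma_y, n)
    return float(np.mean(x**2 + y**2 <= cep**2))

# For a circular Gaussian, the 50%-containment radius is sigma * sqrt(2 ln 2).
p50 = hit_probability(np.sqrt(2.0 * np.log(2.0)))
```

With a million samples the standard error of the hit-frequency estimate is about 0.0005, so p50 should land very close to 0.5 in the circular case.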
Theoretical Analysis of Rain Attenuation Probability
NASA Astrophysics Data System (ADS)
Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan
2007-07-01
Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. As for the system design of the Hokkaido integrated telecommunications (HIT) network, it must first overcome outages of satellite links due to rain attenuation in Ka frequency bands. In this paper a theoretical analysis of rain attenuation probability on a slant path has been made. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain rate and rain height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model exhibits a behaviour similar to the ITU-R model at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The set of presented models has the advantage of low implementation complexity and is considered useful for educational and back-of-the-envelope computations.
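As an illustrative sketch only: a Weibull-based attenuation model of the class described above has the generic exceedance form P(A > a) = exp(-(a/scale)^shape). The scale and shape values below are arbitrary placeholders, not the paper's fitted parameters.

```python
import math

# Complementary CDF of a Weibull attenuation model: probability that the rain
# attenuation exceeds a_db decibels.
def exceedance_probability(a_db, scale_db=2.0, shape=0.7):
    return math.exp(-((a_db / scale_db) ** shape))

# Inverting the CCDF gives the attenuation exceeded at a given time percentage,
# e.g. the 0.02% (p = 0.0002) probability level mentioned in the abstract.
def attenuation_at_probability(p, scale_db=2.0, shape=0.7):
    return scale_db * (-math.log(p)) ** (1.0 / shape)
```

The two functions are exact inverses, which is convenient for link-budget work: one converts a fade margin to an outage probability, the other converts an availability target to the required margin.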
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
…ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the… using cumulative sum (CUSUM) and Girshick-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio… the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability…
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
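The modified FMEA described above scores each potential error on occurrence, severity (consequence if it occurs and goes undetected), and detectability, and prioritizes by their product, the conventional risk priority number (RPN). A minimal sketch on a 1-10 scale (the two example entries are hypothetical illustrations, not errors identified by the study):

```python
# Risk priority number: higher means intervene first.
def rpn(occurrence, severity, detection):
    return occurrence * severity * detection

candidate_errors = [
    ("Veress needle angle too steep", 4, 8, 6),     # RPN 192
    ("insufficient abdominal wall lift", 3, 7, 5),  # RPN 105
]
ranked = sorted(candidate_errors, key=lambda e: rpn(*e[1:]), reverse=True)
```

Sorting by RPN yields the prioritized list of "likely and consequential" vulnerabilities the abstract refers to.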
Wang, Jian; Shete, Sanjay
2011-11-01
We recently proposed a bias correction approach to evaluate accurate estimation of the odds ratio (OR) of genetic variants associated with a secondary phenotype, in which the secondary phenotype is associated with the primary disease, based on the original case-control data collected for the purpose of studying the primary disease. As reported in this communication, we further investigated the type I error probabilities and powers of the proposed approach, and compared the results to those obtained from logistic regression analysis (with or without adjustment for the primary disease status). We performed a simulation study based on a frequency-matching case-control study with respect to the secondary phenotype of interest. We examined the empirical distribution of the natural logarithm of the corrected OR obtained from the bias correction approach and found it to be normally distributed under the null hypothesis. On the basis of the simulation study results, we found that the logistic regression approaches that adjust or do not adjust for the primary disease status had low power for detecting secondary phenotype associated variants and highly inflated type I error probabilities, whereas our approach was more powerful for identifying the SNP-secondary phenotype associations and had better-controlled type I error probabilities. © 2011 Wiley Periodicals, Inc.
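A generic sketch of the simulation logic behind an empirical type I error estimate like the one the study reports: generate data under the null hypothesis many times and record how often a nominal alpha = 0.05 test rejects. A plain one-sample t-test stands in here for the study's logistic-regression tests; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_type1_error(n_sims=20_000, n=50):
    t_crit = 2.0096  # two-sided 5% critical value for t with 49 df
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, n)  # null is true: population mean is 0
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        rejections += abs(t) > t_crit
    return rejections / n_sims

alpha_hat = empirical_type1_error()
```

An estimate far above the nominal 0.05 is the "highly inflated type I error" the abstract attributes to the unadjusted logistic regression approaches; a well-calibrated test lands near 0.05.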
Harvey, Raymond A; Hayden, Jennifer D; Kamble, Pravin S; Bouchard, Jonathan R; Huang, Joanna C
2017-04-01
We compared methods to control bias and confounding in observational studies including inverse probability weighting (IPW) and stabilized IPW (sIPW). These methods often require iteration and post-calibration to achieve covariate balance. In comparison, entropy balance (EB) optimizes covariate balance a priori by calibrating weights using the target's moments as constraints. We measured covariate balance empirically and by simulation by using absolute standardized mean difference (ASMD), absolute bias (AB), and root mean square error (RMSE), investigating two scenarios: the size of the observed (exposed) cohort exceeds the target (unexposed) cohort and vice versa. The empirical application weighted a commercial health plan cohort to a nationally representative National Health and Nutrition Examination Survey target on the same covariates and compared average total health care cost estimates across methods. Entropy balance alone achieved balance (ASMD ≤ 0.10) on all covariates in simulation and empirically. In simulation scenario I, EB achieved the lowest AB and RMSE (13.64, 31.19) compared with IPW (263.05, 263.99) and sIPW (319.91, 320.71). In scenario II, EB outperformed IPW and sIPW with smaller AB and RMSE. In scenarios I and II, EB achieved the lowest mean estimate difference from the simulated population outcome ($490.05, $487.62) compared with IPW and sIPW, respectively. Empirically, only EB differed from the unweighted mean cost indicating IPW, and sIPW weighting was ineffective. Entropy balance demonstrated the bias-variance tradeoff achieving higher estimate accuracy, yet lower estimate precision, compared with IPW methods. EB weighting required no post-processing and effectively mitigated observed bias and confounding. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
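The balance criterion above, the absolute standardized mean difference (ASMD ≤ 0.10 counted as balanced), can be sketched as follows. Pooling the unweighted variances is one common convention among several; the example data are made up.

```python
import numpy as np

# ASMD between a (possibly weighted) exposed cohort and an unweighted target.
def asmd(x_exposed, x_target, w_exposed=None):
    m1 = np.average(x_exposed, weights=w_exposed)
    m0 = np.mean(x_target)
    pooled_sd = np.sqrt((np.var(x_exposed) + np.var(x_target)) / 2.0)
    return abs(m1 - m0) / pooled_sd

exposed = np.array([1.0, 2.0, 3.0, 4.0])
target = np.array([2.0, 3.0, 4.0, 5.0])
unweighted = asmd(exposed, target)
# Weights that shift the exposed mean toward the target improve balance;
# entropy balancing and IPW differ in how they choose such weights.
balanced = asmd(exposed, target, w_exposed=np.array([0.1, 0.2, 0.3, 0.4]))
```

This is the metric on which the abstract says entropy balance achieved ≤ 0.10 on all covariates while the IPW weights did not.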
Koskas, M; Chereau, E; Ballester, M; Dubernard, G; Lécuru, F; Heitz, D; Mathevet, P; Marret, H; Querleu, D; Golfier, F; Leblanc, E; Luton, D; Rouzier, R; Daraï, E
2013-01-01
Background: We developed a nomogram based on five clinical and pathological characteristics to predict lymph-node (LN) metastasis with a high concordance probability in endometrial cancer. Sentinel LN (SLN) biopsy has been suggested as a compromise between systematic lymphadenectomy and no dissection in patients with low-risk endometrial cancer. Methods: Patients with stage I–II endometrial cancer had pelvic SLN and systematic pelvic-node dissection. All LNs were histopathologically examined, and the SLNs were examined by immunohistochemistry. We compared the accuracy of the nomogram at predicting LN detected with conventional histopathology (macrometastasis) and ultrastaging procedure using SLN (micrometastasis). Results: Thirty-eight of the 187 patients (20%) had pelvic LN metastases, 20 had macrometastases and 18 had micrometastases. For the prediction of macrometastases, the nomogram showed good discrimination, with an area under the receiver operating characteristic curve (AUC) of 0.76, and was well calibrated (average error = 2.1%). For the prediction of micro- and macrometastases, the nomogram showed poorer discrimination, with an AUC of 0.67, and was less well calibrated (average error = 10.9%). Conclusion: Our nomogram is accurate at predicting LN macrometastases but less accurate at predicting micrometastases. Our results suggest that micrometastases are an 'intermediate state' between disease-free LN and macrometastasis. PMID:23481184
Wang, Dawei; Ren, Pinyi; Du, Qinghe; Sun, Li; Wang, Yichen
2016-01-01
The rapid proliferation of independently designed and deployed wireless sensor networks severely crowds the wireless spectrum and has promoted the emergence of cognitive radio sensor networks (CRSN). In a CRSN, the sensor node (SN) can make full use of unutilized licensed spectrum, greatly improving spectrum efficiency. However, inevitable spectrum sensing errors will adversely interfere with the primary transmission, which may result in primary transmission outage. To compensate for the adverse effects of spectrum sensing errors, we propose a reciprocally benefited secure transmission strategy, in which the SN's interference to the eavesdropper is employed to protect the primary confidential messages while the CRSN is rewarded with a looser spectrum sensing error probability constraint. Specifically, according to the spectrum sensing results and primary users' activities, there are four system states in this strategy. For each state, we analyze the primary secrecy rate and the SN's transmission rate by taking into account the spectrum sensing errors. Then, the SN's transmit power is optimally allocated for each state so that the average transmission rate of the CRSN is maximized under the constraint of the primary maximum permitted secrecy outage probability. In addition, the performance tradeoff between the transmission rate of the CRSN and the primary secrecy outage probability is investigated. Moreover, we analyze the primary secrecy rate for the asymptotic scenarios and derive the closed-form expression of the SN's transmission outage probability. Simulation results show that: (1) the SN's average throughput in the proposed strategy outperforms that of the conventional overlay strategy; (2) both the primary network and the CRSN benefit from the proposed strategy. PMID:27897988
Investigation of an Optimum Detection Scheme for a Star-Field Mapping System
NASA Technical Reports Server (NTRS)
Aldridge, M. D.; Credeur, L.
1970-01-01
An investigation was made to determine the optimum detection scheme for a star-field mapping system that uses coded detection resulting from starlight shining through specially arranged multiple slits of a reticle. The computer solution of equations derived from a theoretical model showed that the greatest probability of detection for a given star and background intensity occurred with the use of a single transparent slit. However, use of multiple slits improved the system's ability to reject the detection of undesirable lower intensity stars, but only by decreasing the probability of detection for lower intensity stars to be mapped. Also, it was found that the coding arrangement affected the root-mean-square star-position error and that detection is possible with error in the system's detected spin rate, though at a reduced probability.
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro
2015-11-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.
Multiplicity Control in Structural Equation Modeling
ERIC Educational Resources Information Center
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles, B.N.
1955-05-12
Charts of the geographical distribution of the annual and seasonal D-values and their standard deviations at altitudes of 4500, 6000, and 7000 feet over Eurasia are derived and used to estimate the frequency of baro-system errors.
Rewriting evolution--"been there, done that".
Penny, David
2013-01-01
A recent paper by a science journalist in Nature shows major errors in understanding phylogenies, in this case of placental mammals. The underlying unrooted tree is probably correct, but the placement of the root just reflects a well-known error from the acceleration in the rate of evolution among some myomorph rodents.
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
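The estimator under study has the standard Horvitz-Thompson inverse-probability-weighted form. A minimal sketch with made-up data (the smoking cessation application is not reproduced here):

```python
def ipw_ate(y, a, e):
    """Horvitz-Thompson IPW estimate of the average treatment effect:
    mean(A*Y/e) - mean((1-A)*Y/(1-e)), where a_i is the binary
    treatment indicator and e_i the (known or estimated) propensity."""
    n = len(y)
    treated = sum(ai * yi / ei for yi, ai, ei in zip(y, a, e)) / n
    control = sum((1 - ai) * yi / (1 - ei) for yi, ai, ei in zip(y, a, e)) / n
    return treated - control

# toy example: four subjects, constant propensity 0.5
ate = ipw_ate(y=[3.0, 1.0, 5.0, 2.0], a=[1, 0, 1, 0], e=[0.5] * 4)
```

Consistent with the abstract's continuous-outcome result, adding mean-zero additive noise to y leaves this estimator's expectation unchanged, because the weights do not involve y; the binary-outcome case is where the naive analysis incurs the closed-form bias the authors derive.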
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
..., Medicare--Hospital Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program.... SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal... typographical error that is identified and corrected in the Correction of Errors section below. II. Summary of...
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
NASA Technical Reports Server (NTRS)
Westphal, Douglas L.; Russell, Philip (Technical Monitor)
1994-01-01
A set of 2,600 6-second, National Weather Service soundings from NASA's FIRE-II Cirrus field experiment are used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before conversion to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.
NASA Astrophysics Data System (ADS)
Den Hartog, E. A.; Lawler, J. E.; Sobeck, J. S.; Sneden, C.; Cowan, J. J.
2011-06-01
The goal of the present work is to produce transition probabilities with very low uncertainties for a selected set of multiplets of Mn I and Mn II. Multiplets are chosen based upon their suitability for stellar abundance analysis. We report on new radiative lifetime measurements for 22 levels of Mn I from the e 8 D, z 6 P, z 6 D, z 4 F, e 8 S, and e 6 S terms and six levels of Mn II from the z 5 P and z 7 P terms using time-resolved laser-induced fluorescence on a slow atom/ion beam. New branching fractions for transitions from these levels, measured using a Fourier-transform spectrometer, are reported. When combined, these measurements yield transition probabilities for 47 transitions of Mn I and 15 transitions of Mn II. Comparisons are made to data from the literature and to Russell-Saunders (LS) theory. In keeping with the goal of producing a set of transition probabilities with the highest possible accuracy and precision, we recommend a weighted mean result incorporating our measurements on Mn I and II as well as independent measurements or calculations that we view as reliable and of a quality similar to ours. In a forthcoming paper, these Mn I/II transition probability data will be utilized to derive the Mn abundance in stars with spectra from both space-based and ground-based facilities over a 4000 Å wavelength range. With the employment of a local thermodynamic equilibrium line transfer code, the Mn I/II ionization balance will be determined for stars of different evolutionary states.
Douglas, Julie A.; Skol, Andrew D.; Boehnke, Michael
2002-01-01
Gene-mapping studies routinely rely on checking for Mendelian transmission of marker alleles in a pedigree, as a means of screening for genotyping errors and mutations, with the implicit assumption that, if a pedigree is consistent with Mendel’s laws of inheritance, then there are no genotyping errors. However, the occurrence of inheritance inconsistencies alone is an inadequate measure of the number of genotyping errors, since the rate of occurrence depends on the number and relationships of genotyped pedigree members, the type of errors, and the distribution of marker-allele frequencies. In this article, we calculate the expected probability of detection of a genotyping error or mutation as an inheritance inconsistency in nuclear-family data, as a function of both the number of genotyped parents and offspring and the marker-allele frequency distribution. Through computer simulation, we explore the sensitivity of our analytic calculations to the underlying error model. Under a random-allele–error model, we find that detection rates are 51%–77% for multiallelic markers and 13%–75% for biallelic markers; detection rates are generally lower when the error occurs in a parent than in an offspring, unless a large number of offspring are genotyped. Errors are especially difficult to detect for biallelic markers with equally frequent alleles, even when both parents are genotyped; in this case, the maximum detection rate is 34% for four-person nuclear families. Error detection in families in which parents are not genotyped is limited, even with multiallelic markers. Given these results, we recommend that additional error checking (e.g., on the basis of multipoint analysis) be performed, beyond routine checking for Mendelian consistency. Furthermore, our results permit assessment of the plausibility of an observed number of inheritance inconsistencies for a family, allowing the detection of likely pedigree—rather than genotyping—errors in the early stages of a genome scan. 
Such early assessments are valuable in either the targeting of families for resampling or discontinued genotyping. PMID:11791214
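The detection probability the authors compute analytically can also be approximated by simulation. Below is a hedged Monte-Carlo sketch for the simplest configuration (biallelic marker, both parents and one offspring genotyped, random-allele error model); the setup and names are illustrative, not the authors' code.

```python
import random

# alleles a parent with genotype g (count of allele '1') can transmit
TRANSMISSIBLE = {0: (0,), 1: (0, 1), 2: (1,)}

def mendel_consistent(father, mother, child):
    """True if the child's genotype can arise from one allele
    transmitted by each parent."""
    return any(a + b == child
               for a in TRANSMISSIBLE[father]
               for b in TRANSMISSIBLE[mother])

def detection_rate(p, trials=20000, seed=1):
    """Fraction of single genotyping errors in the offspring that
    surface as a Mendelian inconsistency, for allele frequency p.
    Random-allele error model: one offspring allele is re-drawn
    from the population, so the error may leave the genotype intact."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        f = (rng.random() < p) + (rng.random() < p)  # father's genotype
        m = (rng.random() < p) + (rng.random() < p)  # mother's genotype
        alleles = [rng.choice(TRANSMISSIBLE[f]), rng.choice(TRANSMISSIBLE[m])]
        alleles[rng.randrange(2)] = int(rng.random() < p)  # the error
        if not mendel_consistent(f, m, sum(alleles)):
            detected += 1
    return detected / trials
```

Runs of this kind reproduce the qualitative finding above: for biallelic markers with equally frequent alleles, most single errors pass the Mendelian check undetected.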
Error Discounting in Probabilistic Category Learning
Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.
2011-01-01
Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less responsive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666
Modeling of a bubble-memory organization with self-checking translators to achieve high reliability.
NASA Technical Reports Server (NTRS)
Bouricius, W. G.; Carter, W. C.; Hsieh, E. P.; Wadia, A. B.; Jessep, D. C., Jr.
1973-01-01
Study of the design and modeling of a highly reliable bubble-memory system that has the capabilities of: (1) correcting a single 16-adjacent bit-group error resulting from failures in a single basic storage module (BSM), and (2) detecting with a probability greater than 0.99 any double errors resulting from failures in BSM's. The results of the study justify the design philosophy adopted of employing memory data encoding and a translator to correct single group errors and detect double group errors to enhance the overall system reliability.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in abilities from novice to expert, that recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors and on average 8.1% of recorded detections in the experiment were false positive errors. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls with broad confidence interval overlap of 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and in false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. 
Our results corroborate other work that demonstrates false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to only report detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor for among-observer variation in observation error rates.
NASA Astrophysics Data System (ADS)
Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin
2017-03-01
We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol with both source errors and statistical fluctuations with a certain failure probability. Our results rely only on the range of a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
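For a bounded-distance decoder, the block error probability on a binary symmetric channel reduces to a binomial tail, the basic quantity evaluated in analyses of this kind (this is the textbook formula, not the paper's exact bounds):

```python
from math import comb

def block_error_prob(n, t, eps):
    """Probability that a t-error-correcting code of length n sees
    more than t bit errors on a binary symmetric channel with
    crossover probability eps (i.e., decoding fails)."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))
```

In a cascade, the inner decoder's residual error rate plays the role of eps for the outer code, which is how a properly chosen inner/outer pair drives the overall block error probability down even when the raw channel is poor.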
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
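The type I/II error and power concepts reviewed above can be made concrete with the textbook two-sided one-sample z-test; a sketch (the review itself covers t tests and more, which this does not replace):

```python
from statistics import NormalDist

def z_test_power(delta, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample z-test to detect a true mean
    shift delta with known sd sigma and sample size n. The type II
    error rate is 1 minus this value."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)       # rejection threshold
    shift = delta * n**0.5 / sigma           # standardized effect
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)
```

With delta = 0 the "power" equals alpha (pure type I error); as n grows, power rises toward 1 and the type II error rate falls, which is the trade-off a power calculation makes explicit.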
Spectrophotometric Method for Differentiation of Human Skin Melanoma. II. Diagnostic Characteristics
NASA Astrophysics Data System (ADS)
Petruk, V. G.; Ivanov, A. P.; Kvaternyuk, S. M.; Barun, V. V.
2016-05-01
Experimental data on the spectral dependences of the optical diffuse reflection coefficient for skin from different people with melanoma or nevus are presented in the form of the probability density of the diffuse reflection coefficient for the corresponding pigmented lesions. We propose a noninvasive technique for differentiating between malignant and benign tumors, based on measuring the diffuse reflection coefficient for a specific patient and comparing the value obtained with a pre-set threshold. If the experimental result is below the threshold, then it is concluded that the person has melanoma; otherwise, no melanoma is present. As an example, we consider the wavelength 870 nm. We determine the risk of malignant transformation of a nevus (its transition to melanoma) for different measured diffuse reflection coefficients. We have studied the errors in the method, its operating characteristics and probability characteristics as the threshold diffuse reflection coefficient is varied. We find that the diagnostic confidence, sensitivity, specificity, and effectiveness (accuracy) parameters are maximum (>0.82) for a threshold of 0.45-0.47. The operating characteristics for the proposed technique exceed the corresponding parameters for other familiar optical approaches to melanoma diagnosis. Its distinguishing feature is operation at only one wavelength, and consequently implementation of the experimental technique is simplified and made less expensive.
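The decision rule described above ("melanoma if the measured diffuse reflection coefficient is below the threshold") maps directly onto sensitivity, specificity, and accuracy. A sketch with invented reflectance values, not the paper's measurements:

```python
def threshold_diagnostics(melanoma_vals, nevus_vals, threshold):
    """Operating characteristics of the 'positive if value < threshold'
    rule: true positives are melanoma cases below the threshold, true
    negatives are nevus cases at or above it."""
    tp = sum(v < threshold for v in melanoma_vals)
    tn = sum(v >= threshold for v in nevus_vals)
    sens = tp / len(melanoma_vals)
    spec = tn / len(nevus_vals)
    acc = (tp + tn) / (len(melanoma_vals) + len(nevus_vals))
    return sens, spec, acc
```

Sweeping the threshold and re-evaluating these quantities is exactly how one locates an optimum such as the 0.45-0.47 band reported above.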
Asymmetries in Predictive and Diagnostic Reasoning
ERIC Educational Resources Information Center
Fernbach, Philip M.; Darlow, Adam; Sloman, Steven A.
2011-01-01
In this article, we address the apparent discrepancy between causal Bayes net theories of cognition, which posit that judgments of uncertainty are generated from causal beliefs in a way that respects the norms of probability, and evidence that probability judgments based on causal beliefs are systematically in error. One purported source of bias…
NASA Astrophysics Data System (ADS)
Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek
2017-11-01
Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with an iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its precision in forecasting monthly streamflow and benchmarked against the M5 Tree model. To develop the hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs, comprising statistically significant lagged combinations of streamflow water levels, are supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as potential model inputs. To establish robust forecasting models, the IIS algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT) applied to the IIS-selected variables. This resolved the frequencies contained in the predictor data while constructing the wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) models. The forecasting ability of IIS-W-ANN is evaluated via the correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While the ANN models outperform the M5 Tree model at all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as shown by the better performance of the IIS-coupled (ANN and M5 Tree) models relative to the models without IIS. When the IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree models attain significantly more accurate performance relative to their standalone counterparts.
Importantly, IIS-W-ANN model accuracy outweighs IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors, RRMSE = (15.65-21.00) % and MAPE = (14.79-20.78) %. Distinct geographic signature is evident where the most and least accurately forecasted streamflow data is attained for the Gwydir and Darling River, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and subsequently, its integration with MODWT resulting in enhanced performance of the models applied in streamflow forecasting.
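The skill scores used above (RMSE, MAE, and Nash-Sutcliffe Efficiency) have simple definitional forms; a sketch of how they are computed from paired observed/forecast series:

```python
def forecast_metrics(obs, pred):
    """RMSE, MAE, and Nash-Sutcliffe Efficiency (ENS) for paired
    observed and predicted values. ENS = 1 for a perfect forecast and
    0 for a forecast no better than the observed mean; obs must not
    be constant (ENS would divide by zero)."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sq_err = sum((o - p) ** 2 for o, p in zip(obs, pred))
    rmse = (sq_err / n) ** 0.5
    mae = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    ens = 1.0 - sq_err / sum((o - mean_obs) ** 2 for o in obs)
    return rmse, mae, ens
```

Forecasting the climatological mean at every step yields ENS = 0, which is why ENS values near the 0.770-0.920 range reported above indicate genuine skill.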
Open Label Extension of ISIS 301012 (Mipomersen) to Treat Familial Hypercholesterolemia
2016-08-01
Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders
Anytime synthetic projection: Maximizing the probability of goal satisfaction
NASA Technical Reports Server (NTRS)
Drummond, Mark; Bresina, John L.
1990-01-01
A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.
Bounds on Block Error Probability for Multilevel Concatenated Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Moorthy, Hari T.; Stojanovic, Diana
1996-01-01
Maximum likelihood decoding of long block codes is not feasible due to large complexity. Some classes of codes can be decomposed into multilevel concatenated codes (MLCC). For these codes, multistage decoding provides a good trade-off between performance and complexity. In this paper, we derive an upper bound on the probability of block error for MLCC. We use this bound to evaluate differences in performance for different decompositions of some codes. Examples show that a significant reduction in complexity can be achieved by increasing the number of decoding stages. The resulting performance degradation varies for different decompositions. A guideline is given for finding good m-level decompositions.
Performance of DPSK with convolutional encoding on time-varying fading channels
NASA Technical Reports Server (NTRS)
Mui, S. Y.; Modestino, J. W.
1977-01-01
The bit error probability performance of a differentially-coherent phase-shift keyed (DPSK) modem with convolutional encoding and Viterbi decoding on time-varying fading channels is examined. Both the Rician and the lognormal channels are considered. Bit error probability upper bounds on fully-interleaved (zero-memory) fading channels are derived and substantiated by computer simulation. It is shown that the resulting coded system performance is a relatively insensitive function of the choice of channel model provided that the channel parameters are related according to the correspondence developed as part of this paper. Finally, a comparison of DPSK with a number of other modulation strategies is provided.
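As context for the coded results above, the uncoded binary DPSK bit error probability has simple closed forms on the AWGN and flat Rician fading channels. The sketch below evaluates the commonly cited expressions (a baseline assumption for illustration, not the paper's coded Viterbi analysis), where `k` is the Rician factor and `gbar` the average SNR per bit:

```python
import math

def dpsk_ber_awgn(ebn0):
    """Uncoded binary DPSK over AWGN: Pb = 0.5 * exp(-Eb/N0)."""
    return 0.5 * math.exp(-ebn0)

def dpsk_ber_rician(k, gbar):
    """Commonly cited uncoded DPSK result for flat Rician fading.
    k = ratio of specular to diffuse power, gbar = mean SNR per bit.
    k = 0 reduces to Rayleigh fading; k -> infinity approaches AWGN."""
    return (1 + k) / (2 * (1 + k + gbar)) * math.exp(-k * gbar / (1 + k + gbar))
```

As sanity checks, `dpsk_ber_rician(0, g)` gives the Rayleigh result `1/(2*(1+g))`, and very large `k` recovers `dpsk_ber_awgn(g)`.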
Nasir, Hina; Javaid, Nadeem; Sher, Muhammad; Qasim, Umar; Khan, Zahoor Ali; Alrajeh, Nabil; Niaz, Iftikhar Azim
2016-01-01
This paper makes a two-fold contribution for Underwater Wireless Sensor Networks (UWSNs): a performance analysis of incremental relaying in terms of outage and error probability, and, based on that analysis, two new cooperative routing protocols. For the first contribution, a three-step procedure is carried out: a system model is presented, the number of available relays is determined, and, based on the cooperative incremental retransmission methodology, closed-form expressions for outage and error probability are derived. For the second contribution, Adaptive Cooperation in Energy (ACE) efficient depth-based routing and Enhanced-ACE (E-ACE) are presented. In the proposed model, a feedback mechanism indicates success or failure of data transmission. If direct transmission is successful, there is no need for relaying by cooperative relay nodes. In case of failure, the available relays retransmit the data one by one until the desired signal quality is achieved at the destination. Simulation results show that ACE and E-ACE significantly improve network performance, i.e., throughput, when compared with other incremental relaying protocols such as Cooperative Automatic Repeat reQuest (CARQ). E-ACE and ACE achieve 69% and 63% more throughput, respectively, than CARQ in harsh underwater environments. PMID:27420061
Potter, Gail E; Smieszek, Timo; Sailer, Kerstin
2015-09-01
Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.
The FERRUM Project: Experimental Transition Probabilities of [Fe II] and Astrophysical Applications
NASA Technical Reports Server (NTRS)
Hartman, H.; Derkatch, A.; Donnelly, M. P.; Gull, T.; Hibbert, A.; Johannsson, S.; Lundberg, H.; Mannervik, S.; Norlin, L. -O.; Rostohar, D.
2002-01-01
We report experimental transition probabilities for thirteen forbidden [Fe II] lines originating from three different metastable Fe II levels. Radiative lifetimes of two metastable states have been measured by applying a laser probing technique to a stored ion beam. Branching ratios for the radiative decay channels, i.e. M1 and E2 transitions, are derived from observed intensity ratios of forbidden lines in astrophysical spectra and compared with theoretical data. The lifetimes and branching ratios are combined to derive absolute transition probabilities, A-values. We present the first experimental lifetime values for the two Fe II levels a^4G_{9/2} and b^2H_{11/2}, and A-values for 13 forbidden transitions from a^6S_{5/2}, a^4G_{9/2} and b^4D_{7/2} in the optical region. A discrepancy between the measured and calculated lifetime of the b^2H_{11/2} level is discussed in terms of level mixing. We have used the code CIV3 to calculate transition probabilities of the a^6D-a^6S transitions. We have also studied observational branching ratios for lines from 5 other metastable Fe II levels and compared them to calculated values. A consistent deviation between calibrated observational intensity ratios and theoretical branching ratios for lines over a wider wavelength region supports the use of [Fe II] lines for determination of reddening.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, which more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, requiring fewer bits than an efficient symmetric error-correcting code.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wittmann, Christoffer; Sych, Denis; Leuchs, Gerd
2010-06-15
We investigate quantum measurement strategies capable of discriminating two coherent states probabilistically with significantly smaller error probabilities than can be obtained using nonprobabilistic state discrimination. We apply a postselection strategy to the measurement data of a homodyne detector as well as a photon number resolving detector in order to lower the error probability. We compare the two different receivers with an optimal intermediate measurement scheme where the error rate is minimized for a fixed rate of inconclusive results. The photon number resolving (PNR) receiver is experimentally demonstrated and compared to an experimental realization of a homodyne receiver with postselection. In the comparison, it becomes clear that the performance of the PNR receiver surpasses the performance of the homodyne receiver, which we prove to be optimal within any Gaussian operations and conditional dynamics.
The genomic structure: proof of the role of non-coding DNA.
Bouaynaya, Nidhal; Schonfeld, Dan
2006-01-01
We prove that introns play the role of a decoy in absorbing mutations, in the same way hollow uninhabited structures are used by the military to protect important installations. Our approach is based on a probability-of-error analysis, where errors are mutations which occur in the exon sequences. We derive the optimal exon length distribution, which minimizes the probability of error in the genome. Furthermore, to understand how Nature can generate the optimal distribution, we propose a diffusive random walk model for exon generation throughout evolution. This model results in an alpha-stable exon length distribution, which is asymptotically equivalent to the optimal distribution. Experimental results show that both distributions accurately fit the real data. Given that introns also drive biological evolution by increasing the rate of unequal crossover between genes, we conclude that the role of introns is to maintain an ingenious balance between stability and adaptability in eukaryotic genomes.
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
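The puzzle itself can be reproduced in a few lines. Assuming the standard textbook numbers (two measurements, 1.5 and 1.0, each carrying a 10% independent error and a fully correlated 20% common error, both taken relative to the measured values — an illustrative setup, not the paper's simulations), generalized least squares yields the counterintuitive estimate of about 0.88, below both measurements:

```python
def gls_mean(y, cov):
    """Generalized least-squares estimate of a common mean for two
    correlated measurements: mu = (1' V^-1 y) / (1' V^-1 1)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 inverse
    num = sum(inv[i][j] * y[j] for i in range(2) for j in range(2))
    den = sum(inv[i][j] for i in range(2) for j in range(2))
    return num / den

y = [1.5, 1.0]
# Independent 10% errors plus a fully correlated 20% common error,
# both expressed relative to the measured values (textbook PPP numbers).
cov = [
    [(0.10 * y[0]) ** 2 + 0.20 ** 2 * y[0] * y[0], 0.20 ** 2 * y[0] * y[1]],
    [0.20 ** 2 * y[0] * y[1], (0.10 * y[1]) ** 2 + 0.20 ** 2 * y[1] * y[1]],
]
print(round(gls_mean(y, cov), 3))  # about 0.882 — the puzzling answer
```

This is exactly the "distorted weighting" the abstract refers to: building the covariance from measured rather than true values is what drags the estimate low.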
Driver landmark and traffic sign identification in early Alzheimer's disease.
Uc, E Y; Rizzo, M; Anderson, S W; Shi, Q; Dawson, J D
2005-06-01
To assess visual search and recognition of roadside targets and safety errors during a landmark and traffic sign identification task in drivers with Alzheimer's disease. 33 drivers with probable Alzheimer's disease of mild severity and 137 neurologically normal older adults underwent a battery of visual and cognitive tests and were asked to report detection of specific landmarks and traffic signs along a segment of an experimental drive. The drivers with mild Alzheimer's disease identified significantly fewer landmarks and traffic signs and made more at-fault safety errors during the task than control subjects. Roadside target identification performance and safety errors were predicted by scores on standardised tests of visual and cognitive function. Drivers with Alzheimer's disease are impaired in a task of visual search and recognition of roadside targets; the demands of these targets on visual perception, attention, executive functions, and memory probably increase the cognitive load, worsening driving safety.
NASA Astrophysics Data System (ADS)
Pickett, Brian K.; Cassen, Patrick; Durisen, Richard H.; Link, Robert
2000-02-01
In the paper "The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions" by Brian K. Pickett, Patrick Cassen, Richard H. Durisen, and Robert Link (ApJ, 529, 1034 [2000]), the wrong version of Figure 10 was published as a result of an error at the Press. The correct version of Figure 10 appears below. The Press sincerely regrets this error.
A BAYESIAN APPROACH TO LOCATING THE RED GIANT BRANCH TIP MAGNITUDE. I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conn, A. R.; Parker, Q. A.; Zucker, D. B.
2011-10-20
We present a new approach for identifying the tip of the red giant branch (TRGB) which, as we show, works robustly even on sparsely populated targets. Moreover, the approach is highly adaptable to the available data for the stellar population under study, with prior information readily incorporable into the algorithm. The uncertainty in the derived distances is also made tangible and easily calculable from posterior probability distributions. We provide an outline of the development of the algorithm and present the results of tests designed to characterize its capabilities and limitations. We then apply the new algorithm to three M31 satellites: Andromeda I, Andromeda II, and the fainter Andromeda XXIII, using data from the Pan-Andromeda Archaeological Survey (PAndAS), and derive their distances as 731 (+5/-4; +18/-17) kpc, 634 (+2/-2; +15/-14) kpc, and 733 (+13/-11; +23/-22) kpc, respectively, where the first pair of errors gives the component intrinsic to the method, while the larger second pair gives the errors after accounting for additional sources of error. These results agree well with the best distance determinations in the literature and provide the smallest uncertainties to date. This paper is an introduction to the workings and capabilities of our new approach in its basic form, while a follow-up paper shall make full use of the method's ability to incorporate priors and use the resulting algorithm to systematically obtain distances to all of M31's satellites identifiable in the PAndAS survey area.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives) and, to some extent, type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results than conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
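As a minimal illustration of why a robust fit can rescue power lost to outliers, the sketch below implements a univariate Huber M-estimator via iteratively reweighted least squares in pure Python. This is a toy stand-in, not the model used in the paper (which also includes covariates and an allelic dosage coding):

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x; returns (a, b)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx
    return my - b * mx, b

def huber_line(x, y, c=1.345, iters=50):
    """Huber M-estimate of a regression line via IRLS:
    points with large residuals (relative to a MAD scale) get
    down-weighted instead of dominating the fit."""
    w = [1.0] * len(x)
    for _ in range(iters):
        a, b = wls_line(x, y, w)
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        absr = sorted(abs(ri) for ri in r)
        s = max(1.4826 * absr[len(absr) // 2], 1e-9)  # robust MAD scale
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
    return a, b

x = list(range(10))
y = [2.0 * xi for xi in x]
y[-1] = 100.0                      # one gross outlier
print(wls_line(x, y, [1.0] * 10))  # OLS slope dragged far above 2
print(huber_line(x, y))            # robust slope stays near 2
```

In an eQTL setting the same effect means a single aberrant expression value no longer masks (or fabricates) an association.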
A proof of concept phase II non-inferiority criterion.
Neuenschwander, Beat; Rouyrre, Nicolas; Hollaender, Norbert; Zuber, Emmanuel; Branson, Michael
2011-06-15
Traditional phase III non-inferiority trials require compelling evidence that the treatment vs control effect θ is better than a pre-specified non-inferiority margin θ(NI). The standard approach compares this margin to the 95 per cent confidence interval of the effect parameter. In the phase II setting, in order to declare Proof of Concept (PoC) for non-inferiority and proceed with the development of the drug, different criteria that are specifically tailored toward company-internal decision making may be more appropriate. For example, less evidence may be needed as long as the effect estimate is reasonably convincing. We propose a non-inferiority design that addresses the specifics of the phase II setting. The requirements are that (1) the effect estimate be better than a critical threshold θ(C), and (2) the type I error with regard to θ(NI) be controlled at a pre-specified level. This design is compared with the traditional design from a frequentist as well as a Bayesian perspective, where the latter relies on the Level of Proof (LoP) metric, i.e. the probability that the true effect is better than effect values of interest. Clinical input is required to establish the value θ(C), which makes the design transparent and improves interactions within clinical teams. The proposed design is illustrated with a non-inferiority trial for a time-to-event endpoint in oncology. Copyright © 2011 John Wiley & Sons, Ltd.
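The two requirements can be written as a simple decision rule. The sketch below assumes a normal approximation for the effect estimate and a "higher is better" effect scale; the threshold values and the one-sided level in the example are illustrative, not numbers from the paper:

```python
import math

def normal_quantile(p):
    """Inverse standard-normal CDF via bisection on erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def declare_poc(estimate, se, theta_ni, theta_c, alpha=0.025):
    """PoC requires (1) the effect estimate to beat the critical
    threshold theta_c, and (2) a one-sided level-alpha test rejecting
    'true effect <= theta_ni' (normal approximation)."""
    z = normal_quantile(1 - alpha)
    return estimate > theta_c and (estimate - theta_ni) / se > z

# Example: margin -0.3, critical threshold -0.1, observed estimate 0.05
print(declare_poc(0.05, 0.12, theta_ni=-0.3, theta_c=-0.1))  # True here
```

Requirement (2) alone reproduces the traditional confidence-interval criterion; requirement (1) is the phase-II-specific tightening on the point estimate.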
Code of Federal Regulations, 2010 CFR
2010-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... errors or omissions that occurred before the publication of these regulations. Any reasonable method used... February 24, 1994, will be considered proper, provided that the method is consistent with the rules of...
29 CFR 18.103 - Rulings on evidence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... is more probably true than not true that the error did not materially contribute to the decision or... if explicitly not relied upon by the judge in support of the decision or order. (b) Record of offer... making of an offer in question and answer form. (c) Plain error. Nothing in this rule precludes taking...
ERIC Educational Resources Information Center
Lewis, Virginia Vimpeny
2011-01-01
Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…
2014-07-01
Macmillan & Creelman, 2005). This is a quite high degree of discriminability, and it means that when the decision model predicts a probability of...ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Retrieved from Google Scholar. Macmillan, N. A., & Creelman, C. D. (2005). Detection
Rewriting Evolution—“Been There, Done That”
Penny, David
2013-01-01
A recent paper by a science journalist in Nature shows major errors in understanding phylogenies, in this case of placental mammals. The underlying unrooted tree is probably correct, but the placement of the root just reflects a well-known error from the acceleration in the rate of evolution among some myomorph rodents. PMID:23558594
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
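The equivalent uniform dose used as one of the indices above is commonly computed as the generalized EUD, gEUD = (Σ v_i · D_i^a)^(1/a), with v_i the fractional volume receiving dose D_i and `a` a structure-specific parameter (negative for targets, positive for serial organs). A minimal sketch with illustrative numbers, not the paper's treatment plans:

```python
def geud(doses, volumes, a):
    """Generalized equivalent uniform dose from a (dose, volume) histogram.
    doses in Gy; volumes in arbitrary units (normalized internally);
    a < 0 for targets (cold spots dominate), a > 0 for serial organs."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)

# A uniform dose returns itself; a cold spot pulls a target's gEUD toward it.
print(geud([76.0, 76.0], [1.0, 1.0], a=-10))  # ~76.0
print(geud([76.0, 70.0], [1.0, 1.0], a=-10))  # between 70 and 76, nearer 70
```

This is why a small rotational setup error that introduces a cold spot in the target shows up as an EUD/TCP drop even when the mean dose barely changes.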
An MEG signature corresponding to an axiomatic model of reward prediction error.
Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J
2012-01-02
Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and the prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time-locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.
Wedell, Douglas H; Moro, Rodrigo
2008-04-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize misinterpretation of the base event. In both experiments, conjunction errors were reduced when likely events were conjoined. Conjunction errors were also reduced for estimations compared with choices, with this reduction greater for likely conjuncts, an interaction effect. Shifting conceptual focus from probabilities to frequencies did not affect conjunction error rates. Analyses of numerical estimates for a subset of the problems provided support for the use of three general models by participants for generating estimates. Strikingly, the order in which the two tasks were carried out did not affect the pattern of results, supporting the idea that the mode of responding strongly determines the mode of thinking about conjunctions and hence the occurrence of the conjunction fallacy. These findings were evaluated in terms of implications for rationality of human judgment and reasoning.
Propagation of landslide inventory errors on data driven landslide susceptibility models
NASA Astrophysics Data System (ADS)
Henriques, C. S.; Zezere, J. L.; Neves, M.; Garcia, R. A. C.; Oliveira, S. C.; Piedade, A.
2009-04-01
Recent research on landslide susceptibility assessment worldwide has shown that the quality and reliability of modelling results are more sensitive to the quality and consistency of the cartographic database than to the statistical tools used in the modelling process. In particular, the quality of the landslide inventory is of crucial importance, because data-driven models used for landslide susceptibility evaluation are based on the spatial correlation between past landslide occurrences and a data set of thematic layers representing independent landslide predisposing factors. Uncertainty within landslide inventorying may be very high and is usually related to: (i) the geological and geomorphological complexity of the study area; (ii) the dominant land use and the rhythm and magnitude of land use change; (iii) the conservation level of landslide evidence (e.g., topography, vegetation, drainage), both in the field and in aerial photographs; and (iv) the experience of the geomorphologist(s) who build the landslide inventory. Traditionally, landslide inventories have been made through aerial-photo interpretation and field surveying using standard geomorphological techniques. More recently, the interpretation of detailed geo-referenced digital orthophotomaps (pixel = 0.5 m), combined with accurate topography, has become an additional analytical tool for landslide identification at the regional scale. The present study was performed in a test site (256 km2) within Caldas da Rainha County, located in the central part of Portugal. Detailed geo-referenced digital orthophotomaps obtained in 2004 were used to build three different landslide inventories. Landslide inventory #1 was constructed by a single regularly trained geomorphologist using photo-interpretation. 408 probable slope movements were identified and geo-referenced by a point marked in the central part of the probable landslide rupture zone.
Landslide inventory #2 was obtained through the examination of landslide inventory #1 by a senior geomorphologist. This second phase of photographic and morphologic interpretation (pre-validation) allowed the selection of 204 probable slope movements from the first landslide inventory. Landslide inventory #3 was obtained by field verification of the total set of probable landslide zones (408 points), performed by 6 geomorphologists. This inventory has 193 validated slope movements, and includes 101 "new landslides" that had not been recognized by orthophotomap interpretation. Additionally, the field work enabled the cartographic delimitation of the slope movement depletion and accumulation zones, and the definition of landslide type. Landslide susceptibility was assessed using the three landslide inventories with a single predictive model (logistic regression) and the same set of landslide predisposing factors to allow comparison of results. The uncertainty associated with landslide inventory errors and its propagation to landslide susceptibility results are evaluated and compared by the computation of success-rate and prediction-rate curves. The error derived from landslide inventorying is quantified by assessing the degree of overlap of susceptible areas obtained from the different prediction models.
Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica
NASA Astrophysics Data System (ADS)
Oboh, I.; Aluyor, E.; Audu, T.
2015-03-01
The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
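The four error functions named above are standard in the biosorption-kinetics literature; the sketch below follows their usual definitions (q_exp the observed uptake, q_calc the model-predicted uptake, p the number of model parameters — treat the exact normalizations as assumptions rather than the authors' code):

```python
def errsq(q_exp, q_calc):
    """Sum of the errors squared (ERRSQ)."""
    return sum((e - c) ** 2 for e, c in zip(q_exp, q_calc))

def are(q_exp, q_calc):
    """Average relative error (ARE), in percent."""
    n = len(q_exp)
    return 100.0 / n * sum(abs((e - c) / e) for e, c in zip(q_exp, q_calc))

def hybrid(q_exp, q_calc, p):
    """Hybrid fractional error function (HYBRID), in percent;
    p = number of fitted model parameters."""
    n = len(q_exp)
    return 100.0 / (n - p) * sum((e - c) ** 2 / e for e, c in zip(q_exp, q_calc))

def r_squared(q_exp, q_calc):
    """Coefficient of determination (R^2)."""
    mean = sum(q_exp) / len(q_exp)
    ss_res = sum((e - c) ** 2 for e, c in zip(q_exp, q_calc))
    ss_tot = sum((e - mean) ** 2 for e in q_exp)
    return 1.0 - ss_res / ss_tot
```

Minimizing each function can favour a different parameter set, which is why such studies report several of them side by side.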
A Probabilistic, Facility-Centric Approach to Lightning Strike Location
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William p.; Merceret, Francis J.
2012-01-01
A new probabilistic, facility-centric approach to lightning strike location has been developed. This process uses the bivariate Gaussian probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced-current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
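The integral described (a bivariate Gaussian density over a circle that need not be centered on the ellipse) has no simple closed form, but it is easy to estimate by Monte Carlo. The sketch below makes the simplifying assumption of an axis-aligned error ellipse; the operational method handles the full covariance and uses its own numerical scheme:

```python
import random

def strike_probability(mean, sigmas, facility, radius, n=200_000, seed=42):
    """Monte Carlo estimate of P(stroke lands within `radius` of `facility`)
    when the stroke location is bivariate normal about `mean` with
    axis-aligned standard deviations `sigmas` (illustrative assumption)."""
    rng = random.Random(seed)
    (mx, my), (sx, sy), (fx, fy) = mean, sigmas, facility
    hits = 0
    for _ in range(n):
        x, y = rng.gauss(mx, sx), rng.gauss(my, sy)
        if (x - fx) ** 2 + (y - fy) ** 2 <= radius ** 2:
            hits += 1
    return hits / n
```

With the facility at the ellipse center and circular sigma, the result can be checked against the Rayleigh CDF 1 − exp(−r²/2σ²) ≈ 0.393 for r = σ.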
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some statistical methods of analyzing the stability of a system, it is difficult to resolve the problems of unknown probability distribution and small sample size. A novel method is therefore proposed in this paper to resolve these problems. The method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound, and the upper bound of the system using fuzzy-set theory. The empirical distribution function is then investigated to ensure a confidence level above 95%, and a degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear and periodic systematic errors, and some mixed systems. The proposed method of system stability analysis is thereby validated.
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk priority numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regimen. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
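In FMEA, each failure mode's severity, occurrence, and detectability scores are combined into a risk priority number (RPN = S × O × D), and mitigation effort is directed at the highest-RPN process nodes first. A sketch with hypothetical failure modes and scores, not the study's actual data:

```python
# FMEA risk priority number: severity x occurrence x detectability,
# each conventionally scored on a 1-10 scale.
# Failure modes and scores below are hypothetical, for illustration only.
failure_modes = {
    "wrong applicator selected":  (9, 3, 4),
    "prescription transcription": (8, 4, 2),
    "plan data entry at console": (7, 5, 3),
}

def rpn(severity, occurrence, detectability):
    return severity * occurrence * detectability

# Rank failure modes so mitigation targets the highest RPN first
ranked = sorted(failure_modes.items(),
                key=lambda kv: rpn(*kv[1]), reverse=True)
for name, scores in ranked:
    print(name, rpn(*scores))
```

Interventions such as the template and screen check above act on the detectability factor, lowering the RPN without changing severity.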
Huang, Sheau-Ling; Hsieh, Ching-Lin; Wu, Ruey-Meei
2017-01-01
Background The Beck Depression Inventory II (BDI-II) and the Taiwan Geriatric Depression Scale (TGDS) are self-report scales used for assessing depression in patients with Parkinson’s disease (PD) and geriatric people. The minimal detectable change (MDC) represents the least amount of change that indicates real difference (i.e., beyond random measurement error) for a single subject. Our aim was to investigate the test-retest reliability and MDC of the BDI-II and the TGDS in people with PD. Methods Seventy patients were recruited from special clinics for movement disorders at a medical center. The patients’ mean age was 67.7 years, and 63.0% of the patients were male. All patients were assessed with the BDI-II and the TGDS twice, 2 weeks apart. We used the intraclass correlation coefficient (ICC) to determine the reliability between test and retest. We calculated the MDC based on standard error of measurement. The MDC% was calculated (i.e., by dividing the MDC by the possible maximal score of the measure). Results The test-retest reliabilities of the BDI-II/TGDS were high (ICC = 0.86/0.89). The MDCs (MDC%s) of the BDI-II and TGDS were 8.7 (13.8%) and 5.4 points (18.0%), respectively. Both measures had acceptable to nearly excellent random measurement errors. Conclusions The test-retest reliabilities of the BDI-II and the TGDS are high. The MDCs of both measures are acceptable to nearly excellent in people with PD. These findings imply that the BDI-II and the TGDS are suitable for use in a research context and in clinical settings to detect real change in a single subject. PMID:28945776
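The MDC computation described above has a standard closed form: SEM = SD × √(1 − ICC) and MDC95 = 1.96 × √2 × SEM, with MDC% dividing by the scale's maximal score. A sketch using the reported point values (63 is the BDI-II maximum; 30 is assumed here for the TGDS, consistent with the reported 18.0%):

```python
import math

def mdc95(sd_baseline, icc):
    """Minimal detectable change at the 95% level, from the standard
    error of measurement: SEM = SD * sqrt(1 - ICC);
    MDC95 = 1.96 * sqrt(2) * SEM."""
    sem = sd_baseline * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

def mdc_percent(mdc, max_score):
    """MDC as a percentage of the scale's maximal possible score."""
    return 100.0 * mdc / max_score

# Reported MDCs: BDI-II 8.7 points (0-63 scale), TGDS 5.4 points
bdi_pct  = mdc_percent(8.7, 63)   # ~13.8%
tgds_pct = mdc_percent(5.4, 30)   # ~18.0%
```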
Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.
2014-01-01
Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
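For contrast with the resampling-based empirical Bayes procedures, the classical Benjamini-Hochberg linear step-up procedure mentioned above fits in a few lines. This is a standard textbook implementation, not the authors' method:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg linear step-up procedure: reject the
    hypotheses with the k smallest p-values, where k is the largest i
    such that p_(i) <= (i/m) * q. Returns a boolean rejection mask in
    the original order."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest passing index
        reject[order[:k + 1]] = True       # step-up: reject all below it
    return reject

mask = benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.27, 0.9])
```

Note the step-up character: once the largest passing index k is found, all k+1 smallest p-values are rejected, even those that individually miss their thresholds.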
The multiple infrared source GL 437
NASA Technical Reports Server (NTRS)
Wynn-Williams, C. G.; Becklin, E. E.; Beichman, C. A.; Capps, R.; Shakeshaft, J. R.
1981-01-01
Infrared and radio continuum observations of the multiple infrared source GL 437 show that it consists of a compact H II region plus two objects which are probably early B stars undergoing rapid mass loss. The group of sources appears to be a multiple system of young stars that have recently emerged from the near side of a molecular cloud. Emission in the unidentified 3.3 micron feature is associated with, but more extended than, the emission from the compact H II region; it probably arises from hot dust grains at the interface between the H II region and the molecular cloud.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄error = -9 m, s.d.error = 47 m) and when estimating distances to real birds during field trials (x̄error = 39 m, s.d.error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. 
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
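The bias (x̄error) and precision (s.d.error) statistics reported above are simply the mean and standard deviation of the signed distance errors from the trials. A sketch with hypothetical trial data, not the study's measurements:

```python
import statistics

# Signed errors = estimated distance - true distance, in meters,
# one value per field trial. Hypothetical data for illustration;
# the study reports a mean error of 39 m (s.d. 79 m) for real birds.
errors = [12, -35, 110, 64, -8, 95, 22, -15, 150, 5]

bias = statistics.mean(errors)        # systematic over/underestimation
precision = statistics.stdev(errors)  # spread of the errors
```

A positive bias with a large standard deviation, as in the field trials above, means surveyors both systematically overestimate distance and do so inconsistently.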
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rate to very low rate and Reed-Solomon outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft down link error control.
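For a t-error-correcting inner block code on a binary symmetric channel, the decoded-word error probability is the binomial tail beyond t bit errors; cascading works because even a modest inner code drives this probability low enough for the outer Reed-Solomon code to correct the remaining symbol errors. A sketch with generic parameters, not one of the paper's specific schemes:

```python
from math import comb

def block_error_probability(n, t, eps):
    """Probability that a t-error-correcting length-n block code fails
    to decode a word on a binary symmetric channel with bit error rate
    eps: i.e., more than t of the n bits are flipped."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
               for i in range(t + 1, n + 1))

# A strong inner code keeps the symbol error rate seen by the outer
# Reed-Solomon code low even at a channel bit error rate of 0.01.
p_inner = block_error_probability(n=23, t=3, eps=0.01)  # Golay-like code
```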
Marcie, S; Fellah, M; Chami, S; Mekki, F
2015-01-01
Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose–volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. Results: The standard deviations (1SDs) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: ∑ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. 
Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients. PMID:25882689
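The gEUD used throughout the study has a simple closed form, gEUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a), where vᵢ are the fractional volumes from the differential dose-volume histogram and a is a tissue-specific parameter (negative for tumours, large and positive for serial organs at risk). A sketch, with illustrative parameter values:

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose:
    gEUD = (sum_i v_i * D_i**a) ** (1/a), where v_i are fractional
    volumes (normalized to sum to 1) receiving dose D_i (Gy)."""
    d = np.asarray(doses, dtype=float)
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    return (v * d**a).sum() ** (1.0 / a)

# Sanity check: a uniform dose returns itself for any a.
uniform = geud([60.0, 60.0], [0.5, 0.5], a=-10.0)   # 60.0

# Large positive a (serial OAR) weights toward the maximum dose;
# negative a (tumour) weights toward the minimum (cold spots matter).
oar    = geud([50.0, 70.0], [0.5, 0.5], a=20.0)
tumour = geud([50.0, 70.0], [0.5, 0.5], a=-10.0)
```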
NASA Astrophysics Data System (ADS)
Kostyukov, V. N.; Naumenko, A. P.
2017-08-01
The paper addresses urgent issues in evaluating the impact that operators of complex technological systems have on safe operation, considering the application of condition monitoring systems to elements and subsystems of petrochemical production facilities. The main task of the research is to identify factors and criteria for describing monitoring system properties that make it possible to evaluate the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery, and to find objective criteria for classifying monitoring systems with the human factor taken into account. On the basis of real-time condition monitoring concepts of sudden-failure skipping risk and of static and dynamic error, one may evaluate the impact of personnel qualification on monitoring system operation, in terms of errors in personnel or operator actions when receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system. Personnel behavior is usually described as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behavior of nuclear power station operators in the USA, Italy, and other countries, as well as on research by Russian scientists, data on operator reliability were selected for the analysis of operator behavior with diagnostics and monitoring systems at technological facilities. 
The calculations revealed that, for the monitoring system selected as an example, with set values of static error (less than 0.01) and dynamic error (less than 0.001) and considering all related factors of reliability of information perception, decision making, and reaction, the failure skipping risk is 0.037; when all the facilities and the error probability are under control, it is not more than 0.027. When only pump and compressor units are under control, the failure skipping risk is not more than 0.022, with the probability of operator error not more than 0.011. The results show that operator reliability can be assessed in almost any kind of production, but only with respect to technological capabilities, since operators' psychological and general training varies considerably across production industries. Using the latest techniques of engineering psychology and the design of data support, situation assessment, and decision-making and response systems, together with advances in condition monitoring across production industries, one can evaluate the hazardous-condition skipping risk probability, considering static and dynamic errors and the human factor.
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information was available for only one of the two objects, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and the two are combined into a single relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? When error covariance information for one of the objects is not available, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. 
The various usual methods of finding a maximum P (sub c) are of no use because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think, given an assumption of no covariance information, that an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume the miss distance equals the current ISS along-track alert volume (+ or -) distance of 25 kilometers and the at-risk area has a 70 meter radius: the maximum (degenerate ellipse) P (sub c) is about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made with respect to this problem by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
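The degenerate-ellipse numbers quoted above follow from a closed form: collapsing the covariance onto the miss vector leaves a 1D Gaussian centered at miss distance d, and for an at-risk half-width r much smaller than d the contained probability is approximately 2r·φ(d/σ)/σ, which is maximized at σ = d, giving Pc,max ≈ 2r / (d·√(2πe)). A sketch reproducing the text's figures under this small-target approximation:

```python
import math

def max_pc_degenerate(miss_distance, hard_body_radius):
    """Upper-bound collision probability for a degenerate error ellipse
    lying along the miss vector: the 1D Gaussian density at the miss
    distance d, maximized over sigma (maximum at sigma = d), integrated
    over the 2r at-risk interval (small-target approximation)."""
    d, r = miss_distance, hard_body_radius
    return 2.0 * r / (d * math.sqrt(2.0 * math.pi * math.e))

# Numbers from the text: 70 m at-risk radius, 25 km / 40 km miss distances
pc_25 = max_pc_degenerate(25_000.0, 70.0)   # ~0.00136
pc_40 = max_pc_degenerate(40_000.0, 70.0)   # ~0.00085
```

The same formula shows the ~340 km miss distance needed to bring the bound down to the 0.0001 ISS maneuver threshold.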
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
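A bootstrap particle filter in its simplest form can be sketched on a 1D toy model rather than orbital dynamics (illustrative only; an orbit application would replace the random-walk propagation with numerical integration of the equations of motion):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 5000
particles = rng.normal(0.0, 1.0, N)   # samples approximating the state PDF
weights = np.full(N, 1.0 / N)
q, r = 0.1, 0.5                       # process / measurement noise std

def pf_step(particles, weights, z):
    # Propagate each particle through the (toy random-walk) dynamics
    particles = particles + rng.normal(0.0, q, N)
    # Reweight by the Gaussian measurement likelihood
    weights = weights * np.exp(-0.5 * ((z - particles) / r) ** 2)
    weights /= weights.sum()
    # Systematic resampling to counter weight degeneracy
    c = np.cumsum(weights)
    c[-1] = 1.0   # guard against floating-point shortfall
    idx = np.searchsorted(c, (rng.random() + np.arange(N)) / N)
    return particles[idx], np.full(N, 1.0 / N)

for z in [0.2, 0.4, 0.35, 0.5]:
    particles, weights = pf_step(particles, weights, z)

estimate = particles.mean()   # posterior mean; the particles carry the full PDF
```

Unlike a Kalman filter, the particle cloud retains skewness and heavy tails of the posterior, which is exactly the property the abstract argues matters for low-probability collision events.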
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
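The discrete Karhunen-Loeve (principal-component) transform at the core of these methods is an eigendecomposition of the band covariance matrix; the finding that three features suffice corresponds to the first three eigenvalues carrying nearly all of the variance. A sketch on simulated correlated six-band data, not actual TM imagery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated six-band pixel vectors (a stand-in for the TM reflective
# bands) with strong inter-band correlation: an underlying 3D signal
# mixed into 6 bands, plus a little independent noise.
base = rng.normal(size=(1000, 3))
mix = rng.normal(size=(3, 6))
X = base @ mix + 0.05 * rng.normal(size=(1000, 6))

# Discrete Karhunen-Loeve transform: eigendecomposition of the
# band covariance matrix, components ordered by decreasing variance.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(cov)          # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]  # descending

explained = evals.cumsum() / evals.sum()    # cumulative variance fraction
Y = Xc @ evecs[:, :3]                       # reduced 3D feature space
```

With near-rank-3 data like this, the first three components capture essentially all the variance, mirroring the paper's conclusion for the six reflective TM bands.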
Automated Comparative Auditing of NCIT Genomic Roles Using NCBI
Cohen, Barry; Oren, Marc; Min, Hua; Perl, Yehoshua; Halper, Michael
2008-01-01
Biomedical research has identified many human genes and much knowledge about them. The National Cancer Institute Thesaurus (NCIT) represents such knowledge as concepts and roles (relationships). Due to the rapid advances in this field, it is to be expected that the NCIT’s Gene hierarchy will contain role errors. A comparative methodology to audit the Gene hierarchy with the use of the National Center for Biotechnology Information’s (NCBI’s) Entrez Gene database is presented. The two knowledge sources are accessed via a pair of Web crawlers to ensure up-to-date data. Our algorithms then compare the knowledge gathered from each, identify discrepancies that represent probable errors, and suggest corrective actions. The primary focus is on two kinds of gene-roles: (1) the chromosomal locations of genes, and (2) the biological processes in which genes play a role. Regarding chromosomal locations, the discrepancies revealed are striking and systematic, suggesting a structurally common origin. In regard to the biological processes, difficulties arise because genes frequently play roles in multiple processes, and processes may have many designations (such as synonymous terms). Our algorithms make use of the roles defined in the NCIT Biological Process hierarchy to uncover many probable gene-role errors in the NCIT. These results show that automated comparative auditing is a promising technique that can identify a large number of probable errors and corrections for them in a terminological genomic knowledge repository, thus facilitating its overall maintenance. PMID:18486558
Clarification of terminology in medication errors: definitions and classification.
Ferner, Robin E; Aronson, Jeffrey K
2006-01-01
We have previously described and analysed some terms that are used in drug safety and have proposed definitions. Here we discuss and define terms that are used in the field of medication errors, particularly terms that are sometimes misunderstood or misused. We also discuss the classification of medication errors. A medication error is a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient. Errors can be classified according to whether they are mistakes, slips, or lapses. Mistakes are errors in the planning of an action. They can be knowledge based or rule based. Slips and lapses are errors in carrying out an action - a slip through an erroneous performance and a lapse through an erroneous memory. Classification of medication errors is important because the probabilities of errors of different classes are different, as are the potential remedies.
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Peter R., E-mail: pmarti46@uwo.ca; Cool, Derek W.; Romagnoli, Cesare
2014-07-15
Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and a radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. 
The predicted maximum RMSE giving P ≥ 95% for each tumor was consistently greater when using spherical tumor shapes as opposed to no shape assumption. However, an assumption of spherical tumor shape for RMSE = 3.5 mm led to a mean overestimation of tumor sampling probabilities of 3%, implying that assuming spherical tumor shape may be reasonable for many prostate tumors. The authors also determined that a biopsy system would need to have a RMS needle delivery error of no more than 1.6 mm in order to sample 95% of tumors with one core. The authors’ experiments also indicated that the effect of axial-direction error on the measured tumor burden was mitigated by the 18 mm core length at 3.5 mm RMSE. Conclusions: For biopsy systems with RMSE ≥ 3.5 mm, more than one biopsy core must be taken from the majority of tumors to achieve P ≥ 95%. These observations support the authors’ perspective that some tumors of clinically significant sizes may require more than one biopsy attempt in order to be sampled during the first biopsy session. This motivates the authors’ ongoing development of an approach to optimize biopsy plans with the aim of achieving a desired probability of obtaining a sample from each tumor, while minimizing the number of biopsies. Optimized planning of within-tumor targets for MRI-3D TRUS fusion biopsy could support earlier diagnosis of prostate cancer while it remains localized to the gland and curable.
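The core calculation can be illustrated with a Monte Carlo sketch under strong simplifications: a spherical tumor, a point sample rather than an 18 mm core, and isotropic Gaussian needle error with per-axis σ = RMSE/√3. This simplified model gives higher probabilities than the authors' full shape-based analysis; it only illustrates how sampling probability falls as RMSE grows:

```python
import numpy as np

def sampling_probability(tumor_radius_mm, rmse_mm, n=200_000, seed=0):
    """Monte Carlo estimate of the probability that one biopsy sample
    aimed at the center of a spherical tumor actually lands inside it,
    given isotropic 3D Gaussian needle-delivery error with total RMSE
    rmse_mm (per-axis sigma = rmse_mm / sqrt(3)); a simplification of
    the integral described in the abstract."""
    rng = np.random.default_rng(seed)
    sigma = rmse_mm / np.sqrt(3.0)
    offsets = rng.normal(0.0, sigma, size=(n, 3))
    return (np.linalg.norm(offsets, axis=1) <= tumor_radius_mm).mean()

# A 1 cm^3 sphere has radius ~6.2 mm
r = (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0) * 10.0
p35 = sampling_probability(r, 3.5)   # contemporary system RMSE
p16 = sampling_probability(r, 1.6)   # the paper's target RMSE
```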
Error protection capability of space shuttle data bus designs
NASA Technical Reports Server (NTRS)
Proch, G. E.
1974-01-01
Error protection assurance in the reliability of digital data communications is discussed. The need for error protection on the space shuttle data bus system has been recognized and specified as a hardware requirement. The error protection techniques of particular concern are those designed into the Shuttle Main Engine Interface (MEI) and the Orbiter Multiplex Interface Adapter (MIA). The techniques and circuit design details proposed for these hardware are analyzed in this report to determine their error protection capability. The capability is calculated in terms of the probability of an undetected word error. Calculated results are reported for a noise environment that ranges from the nominal noise level stated in the hardware specifications to burst levels which may occur in extreme or anomalous conditions.
ERIC Educational Resources Information Center
Morsanyi, Kinga; Primi, Caterina; Chiesi, Francesca; Handley, Simon
2009-01-01
In three studies we looked at two typical misconceptions of probability: the representativeness heuristic, and the equiprobability bias. The literature on statistics education predicts that some typical errors and biases (e.g., the equiprobability bias) increase with education, whereas others decrease. This is in contrast with reasoning theorists'…
Robust Connectivity in Sensory and Ad Hoc Network
2011-02-01
as the prior probability is π0 = 0.8, the error probability should be capped at 0.2. This seemingly pathological result is due to the fact that the…
A Floating Potential Method for Determining Ion Density
NASA Astrophysics Data System (ADS)
Evans, John D.; Chen, Francis F.
2001-10-01
The density n in partially ionized discharges is often found from the saturation ion current Ii of a cylindrical Langmuir probe. Collisionless probe theories, however, disagree with measured I - V curves probably because of collisions^1. We use a heuristic method that yields n from probe data agreeing with microwave interferometry. Probe current I is raised to the 4/3 power and fitted to a straight line on an I^4/3-V plot. The line is extrapolated to the floating potential V_f, thus approximating I_i(V_f). The sheath thickness d_sh for V = Vf is calculated from the Child-Langmuir (CL) law, and applying the Bohm sheath criterion to the surface at r_sh = Rp + d_sh yields n when Ii = I_i(V_f). This method works, but it cannot be justified by theory. Neglected are (a) cylindrical convergence of the ion charge, (b) finite ion energy at r = r_sh, (c) ions orbiting the probe, and (d) escape of ions axially. The Allen-Boyd-Reynolds theory, which treats (a) and (b) and neglects (c) and (d), gives too low n's. Apparently the errors self-cancel, and the simple Vf method gives the right result. ^1 F.F. Chen, Phys. Plasmas 8, 3029 (2001).
NASA Technical Reports Server (NTRS)
Boehm-Vitense, Erika
1988-01-01
The ratio of the emission line fluxes for the C II and C IV lines in the lower transition regions (T = 30,000 to 100,000 K) between stellar chromospheres and transition layers is shown to depend mainly on the temperature gradient in the line emitting regions which can therefore be determined from this line ratio. From the observed constant (within the limits of observational error) ratio of the emission line fluxes of the C II (1335 A) and C IV (1550 A) lines it is concluded that the temperature gradients in the lower transition layers are similar for the large majority of stars independently of T sub eff, L, and degree of activity. This means that the temperature dependence of the damping length for the mechanical flux must be the same for all these stars. Since for different kinds of mechanical fluxes the dependence of the damping length on gas pressure and temperature is quite different, it is concluded that the same heating mechanism must be responsible for the heating of all the lower transition layers of these stars, regardless of their chromospheric activity. Only the amount of mechanical flux changes. The T Tauri stars are exceptions: their emission lines are probably mainly due to circumstellar material.
2016-08-01
Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders
Laser damage metrology in biaxial nonlinear crystals using different test beams
NASA Astrophysics Data System (ADS)
Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille
2008-01-01
Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects specific to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that can modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off, and self-focusing. Depending on the focusing conditions, propagation direction, polarization of the light, and the position of the focal point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for completeness. Finally, parasitic second harmonic generation (SHG) may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
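The abstract does not reproduce the authors' flow diagram, but a standard exact way to attach a confidence interval to a damage probability estimated from only a few test sites is the Clopper-Pearson (exact binomial) interval. The sketch below is a generic illustration of that approach, not necessarily the authors' algorithm:

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def _bisect(f, lo, hi, tol=1e-10):
    # root of a decreasing function f on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def clopper_pearson(k, n, conf=0.95):
    # Exact two-sided confidence interval for a damage probability
    # estimated from k damage events observed at n test sites.
    alpha = 1.0 - conf
    lower = 0.0 if k == 0 else _bisect(
        lambda p: binom_cdf(k - 1, n, p) - (1.0 - alpha / 2.0), 0.0, 1.0)
    upper = 1.0 if k == n else _bisect(
        lambda p: binom_cdf(k, n, p) - alpha / 2.0, 0.0, 1.0)
    return lower, upper
```

For example, zero damage events in 10 sites still leaves a 95% upper bound near 31%, which is why error bars matter when few sites are available.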
Neuropsychological analysis of a typewriting disturbance following cerebral damage.
Boyle, M; Canter, G J
1987-01-01
Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.
Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Angel; Kohler, Johannes
2014-01-01
Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. The aim was to analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design: A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was < 6/9 [20/30] (Group I) and ≤ 6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. A total sample of 364 children aged 3-11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children.
Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research.
Wang, Dan; Silkie, Sarah S; Nelson, Kara L; Wuertz, Stefan
2010-09-01
Cultivation- and library-independent, quantitative PCR-based methods have become the method of choice in microbial source tracking. However, these qPCR assays are not 100% specific and sensitive for the target sequence in their respective hosts' genome. The factors that can lead to false positive and false negative information in qPCR results are well defined. It is highly desirable to have a way of removing such false information to estimate the true concentration of host-specific genetic markers and help guide the interpretation of environmental monitoring studies. Here we propose a statistical model based on the Law of Total Probability to predict the true concentration of these markers. The distributions of the probabilities of obtaining false information are estimated from representative fecal samples of known origin. Measurement error is derived from the sample precision error of replicated qPCR reactions. Then, the Monte Carlo method is applied to sample from these distributions of probabilities and measurement error. The set of equations given by the Law of Total Probability allows one to calculate the distribution of true concentrations, from which their expected value, confidence interval and other statistical characteristics can be easily evaluated. The output distributions of predicted true concentrations can then be used as input to watershed-wide total maximum daily load determinations, quantitative microbial risk assessment and other environmental models. This model was validated by both statistical simulations and real world samples. It was able to correct the intrinsic false information associated with qPCR assays and output the distribution of true concentrations of Bacteroidales for each animal host group. Model performance was strongly affected by the precision error. It could perform reliably and precisely when the standard deviation of the precision error was small (≤ 0.1). 
Further improvement on the precision of sample processing and qPCR reaction would greatly improve the performance of the model. This methodology, built upon Bacteroidales assays, is readily transferable to any other microbial source indicator where a universal assay for fecal sources of that indicator exists. Copyright © 2010 Elsevier Ltd. All rights reserved.
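As a toy illustration of the Monte Carlo idea, propagating uncertain assay error rates through an inversion based on the Law of Total Probability, consider the simpler prevalence-style (Rogan-Gladen) correction sketched below. It is a simplified analogue, not the authors' concentration model; the sensitivity and specificity draws are hypothetical placeholders for the distributions the authors estimate from reference fecal samples of known origin.

```python
import random

def true_fraction_mc(apparent, se_draws, sp_draws, n=10000, seed=1):
    # Monte Carlo Rogan-Gladen correction: sample assay sensitivity (se)
    # and specificity (sp) from empirical draws, invert the total-probability
    # relation  apparent = true*se + (1 - true)*(1 - sp),
    # and collect the distribution of plausible true fractions.
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        se = rng.choice(se_draws)
        sp = rng.choice(sp_draws)
        t = (apparent + sp - 1.0) / (se + sp - 1.0)
        out.append(min(1.0, max(0.0, t)))  # truncate to the valid range
    return out
```

The returned sample plays the role of the paper's output distribution of true concentrations, from which an expected value and confidence interval can be read off.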
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, two-rail encoding…
Schipler, Agnes; Iliakis, George
2013-09-01
Although the DNA double-strand break (DSB) is defined as a rupture in the double-stranded DNA molecule that can occur without chemical modification in any of the constituent building blocks, it is recognized that this form is restricted to enzyme-induced DSBs. DSBs generated by physical or chemical agents can include at the break site a spectrum of base alterations (lesions). The nature and number of such chemical alterations define the complexity of the DSB and are considered putative determinants for repair pathway choice and the probability that errors will occur during this processing. As the pathways engaged in DSB processing show distinct and frequently inherent propensities for errors, pathway choice also defines the error levels cells opt to accept. Here, we present a classification of DSBs on the basis of increasing complexity and discuss how complexity may affect processing, as well as how it may cause lethal or carcinogenic processing errors. By critically analyzing the characteristics of DSB repair pathways, we suggest that all repair pathways can in principle remove lesions clustering at the DSB but are likely to fail when they encounter clusters of DSBs that cause a local form of chromothripsis. In the same framework, we also analyze the rationale of DSB repair pathway choice.
NASA Astrophysics Data System (ADS)
Upadhya, Abhijeet; Dwivedi, Vivek K.; Singh, G.
2018-06-01
In this paper, we analyze the performance of a dual-hop radio frequency (RF)/free-space optical (FSO) fixed-gain relay system operating over an atmospheric-turbulence-induced fading channel on the FSO link, modeled using the α-μ distribution. The RF hop of the amplify-and-forward scheme undergoes Rayleigh fading, and the proposed system model also accounts for the pointing error effect on the FSO link. A novel and accurate mathematical expression for the probability density function of an FSO link experiencing α-μ distributed atmospheric turbulence in the presence of pointing error is derived. Further, we present analytical expressions for the outage probability and bit error rate in terms of the Meijer G-function. In addition, a useful and mathematically tractable closed-form expression for the end-to-end ergodic capacity of the dual-hop scheme is derived in terms of the bivariate Fox H-function. Atmospheric turbulence, misalignment errors, and various binary modulation schemes for intensity modulation on the optical wireless link are considered in obtaining the results. Finally, we analyze each of the three performance metrics in the high-SNR regime in order to express them in terms of elementary functions; the analytical results are supported by computer-based simulations.
A theoretical basis for the analysis of redundant software subject to coincident errors
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.; Lee, L. D.
1985-01-01
Fundamental to the development of redundant software techniques (fault-tolerant software) is an understanding of the impact of multiple joint occurrences of coincident errors. A theoretical basis for the study of redundant software is developed which provides a probabilistic framework for empirically evaluating the effectiveness of the general (N-Version) strategy when component versions are subject to coincident errors, and permits an analytical study of the effects of these errors. The basic assumptions of the model are: (1) independently designed software components are chosen in a random sample; and (2) in the user environment, the system is required to execute on a stationary input series. The intensity of coincident errors has a central role in the model. This function describes the propensity to introduce design faults in such a way that software components fail together when executing in the user environment. The model is used to give conditions under which an N-Version system is a better strategy for reducing system failure probability than relying on a single version of software. A condition which limits the effectiveness of a fault-tolerant strategy is studied, and it is posed whether system failure probability varies monotonically with increasing N or whether an optimal choice of N exists.
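The model's central comparison, a single version versus majority voting over N versions when failures can coincide, can be sketched numerically. The sketch below assumes a discrete input distribution and an intensity function θ(x) giving the probability that a randomly chosen version fails on input x, with versions failing conditionally independently given x; the input values are hypothetical:

```python
from math import comb

def single_fail(inputs):
    # inputs: list of (probability of input x, intensity theta(x)) pairs,
    # where theta(x) is the chance a randomly chosen version fails on x
    return sum(p * t for p, t in inputs)

def nversion_fail(inputs, n=3):
    # A majority-vote N-version system fails on input x when more than
    # n/2 of the n versions fail; average this over the input distribution.
    m = n // 2 + 1
    return sum(
        p * sum(comb(n, k) * t**k * (1.0 - t)**(n - k) for k in range(m, n + 1))
        for p, t in inputs
    )
```

With a constant intensity, three-version voting beats a single version; concentrating the intensity on rare "hard" inputs can reverse the ordering, which is the kind of limiting condition the paper studies.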
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nair, Ranjith
2011-09-15
We consider the problem of distinguishing, with minimum probability of error, two optical beam-splitter channels with unequal complex-valued reflectivities using general quantum probe states entangled over M signal and M' idler mode pairs of which the signal modes are bounced off the beam splitter while the idler modes are retained losslessly. We obtain a lower bound on the output state fidelity valid for any pure input state. We define number-diagonal signal (NDS) states to be input states whose density operator in the signal modes is diagonal in the multimode number basis. For such input states, we derive series formulas for the optimal error probability, the output state fidelity, and the Chernoff-type upper bounds on the error probability. For the special cases of quantum reading of a classical digital memory and target detection (for which the reflectivities are real valued), we show that for a given input signal photon probability distribution, the fidelity is minimized by the NDS states with that distribution and that for a given average total signal energy N_s, the fidelity is minimized by any multimode Fock state with N_s total signal photons. For reading of an ideal memory, it is shown that Fock state inputs minimize the Chernoff bound. For target detection under high-loss conditions, a no-go result showing the lack of appreciable quantum advantage over coherent state transmitters is derived. A comparison of the error probability performance for quantum reading of number state and two-mode squeezed vacuum state (or EPR state) transmitters relative to coherent state transmitters is presented for various values of the reflectances. While the nonclassical states in general perform better than the coherent state, the quantitative performance gains differ depending on the values of the reflectances.
The experimental outlook for realizing nonclassical gains from number state transmitters with current technology at moderate to high values of the reflectances is argued to be good.
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
Planned Hypothesis Tests Are Not Necessarily Exempt from Multiplicity Adjustment
ERIC Educational Resources Information Center
Frane, Andrew V.
2015-01-01
Scientific research often involves testing more than one hypothesis at a time, which can inflate the probability that a Type I error (false discovery) will occur. To prevent this Type I error inflation, adjustments can be made to the testing procedure that compensate for the number of tests. Yet many researchers believe that such adjustments are…
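The inflation the snippet refers to is easy to quantify: with m independent tests each run at per-test level α, the chance of at least one false discovery grows as 1 − (1 − α)^m. A minimal sketch:

```python
def fwer(alpha, m):
    # Familywise error rate: probability of at least one Type I error
    # across m independent tests, each run at per-test level alpha.
    return 1.0 - (1.0 - alpha) ** m
```

At α = 0.05, ten independent tests already carry roughly a 40% chance of at least one false discovery, which is what multiplicity adjustments compensate for.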
ERIC Educational Resources Information Center
Rodriguez, Paul F.
2009-01-01
Memory systems are known to be influenced by feedback and error processing, but it is not well known what aspects of outcome contingencies are related to different memory systems. Here we use the Rescorla-Wagner model to estimate prediction errors in an fMRI study of stimulus-outcome association learning. The conditional probabilities of outcomes…
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
Radiation test data submitted by many testers is collated to serve as a reference for engineers who are concerned with and have some knowledge of the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup) are presented.
Computer-aided diagnosis with potential application to rapid detection of disease outbreaks.
Burr, Tom; Koster, Frederick; Picard, Rick; Forslund, Dave; Wokoun, Doug; Joyce, Ed; Brillman, Judith; Froman, Phil; Lee, Jack
2007-04-15
Our objectives are to quickly interpret symptoms of emergency patients to identify likely syndromes and to improve population-wide disease outbreak detection. We constructed a database of 248 syndromes, each syndrome having an estimated probability of producing any of 85 symptoms, with some two-way, three-way, and five-way probabilities reflecting correlations among symptoms. Using these multi-way probabilities in conjunction with an iterative proportional fitting algorithm allows estimation of full conditional probabilities. Combining these conditional probabilities with misdiagnosis error rates and incidence rates via Bayes' theorem, the probability of each syndrome is estimated. We tested a prototype of computer-aided differential diagnosis (CADDY) on simulated data and on more than 100 real cases, including West Nile Virus, Q fever, SARS, anthrax, plague, tularaemia and toxic shock cases. We conclude that: (1) it is important to determine whether the unrecorded positive status of a symptom means that the status is negative or that the status is unknown; (2) inclusion of misdiagnosis error rates produces more realistic results; (3) the naive Bayes classifier, which assumes all symptoms behave independently, is slightly outperformed by CADDY, which includes available multi-symptom information on correlations; as more information regarding symptom correlations becomes available, the advantage of CADDY over the naive Bayes classifier should increase; (4) overlooking low-probability, high-consequence events is less likely if the standard output summary is augmented with a list of rare syndromes that are consistent with observed symptoms, and (5) accumulating patient-level probabilities across a larger population can aid in biosurveillance for disease outbreaks. © 2007 John Wiley & Sons, Ltd.
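CADDY's combination step, Bayes' theorem over syndrome incidence rates and symptom probabilities, reduces in the independence case to the naive Bayes classifier the abstract compares against. The sketch below shows that baseline with hypothetical numbers; CADDY additionally folds in multi-way symptom correlations and misdiagnosis error rates, which are omitted here:

```python
def posterior(symptoms, syndromes, default_p=0.01):
    # symptoms: {symptom: True/False observed status}
    # syndromes: {name: (incidence rate, {symptom: P(symptom | syndrome)})}
    # default_p is an assumed small probability for unlisted symptoms.
    scores = {}
    for name, (incidence, probs) in syndromes.items():
        like = incidence
        for symptom, present in symptoms.items():
            p = probs.get(symptom, default_p)
            like *= p if present else (1.0 - p)
        scores[name] = like
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}
```

Note how a high-incidence syndrome can dominate the posterior even when a rarer syndrome explains the symptom slightly better, which is why the authors flag low-probability, high-consequence syndromes separately.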
RFI in hybrid loops - Simulation and experimental results.
NASA Technical Reports Server (NTRS)
Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.
1972-01-01
A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.
Quantum computing and probability.
Ferry, David K
2009-11-25
Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044
ERIC Educational Resources Information Center
Thomas, Megan D.
2017-01-01
Did the World War II (WWII) GI Bill increase the probability of completing high school and further affect the probability of poverty and employment for the cohorts it benefited? This paper studies whether the GI Bill, one of the largest public financial aid policies for education, affected low education levels in addition to its…
Moisture Absorption in Certain Tropical American Woods
1949-08-01
surface area was in unobstructed contact with the salt water. Similar wire mesh racks were weighted and placed on top of the specimens to keep them… [Table fragment: Oak (Quercus alba), sourced from Br. Guiana, Honduras, and the United States (control); total absorption by 2 x 2 x 6-inch uncoated specimens, probably sapwood only. Table 3 (continued), columns Species / Source / Increase over 40 percent: Fiddlewood (Vitex Gaumeri), Roble Blanco (Tabebuia…]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D. K.; Taylor, A. S.; Edwards, T.B.
2005-06-26
The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS)--a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors.
A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter, which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived--glass science was not a factor. In this approach, the collection of composition--durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s)--perhaps not 100% effective based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned such that no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR.
However, a preliminary review shows that the 1428 "acceptable" glasses defining the AGCR include glass systems of interest to support the accelerated mission.
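The Type I/Type II bookkeeping used to score a candidate constraint system can be sketched directly. The threshold values below are the partition-derived criteria quoted in the abstract; the toy compositions and PCT responses are hypothetical:

```python
def type_errors(glasses, rule, limit=10.0):
    # glasses: list of (composition dict in wt%, measured NL[B] in g/L)
    # rule: composition -> True if the glass is classified "acceptable"
    type1 = type2 = 0
    for comp, nl_b in glasses:
        predicted_ok = rule(comp)
        actually_ok = nl_b < limit
        if actually_ok and not predicted_ok:
            type1 += 1  # durable glass rejected by the constraint system
        elif not actually_ok and predicted_ok:
            type2 += 1  # non-durable glass accepted: the serious error
    return type1, type2

# The partition-derived rule quoted in the abstract.
def agcr_rule(c):
    return c["Al2O3"] > 3.75 and c["CaO"] >= 0.616 and c["MgO"] < 3.521
```

Scoring a candidate system then amounts to running `type_errors` over the full composition-durability database and requiring a Type II count of zero.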
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
As power systems grow in capacity, unit size, and voltage level, dispatching operations become more frequent and complicated, and the probability of operation error increases. To address the lack of anti-error checking, the single-purpose scheduling functions, and the low working efficiency of technical support systems in regional regulation and integration, this paper proposes an integrated, cloud-computing-based architecture for power network dispatching anti-error checking. An integrated anti-error system spanning the Energy Management System (EMS) and the Operation Management System (OMS) is also constructed. The architecture has good scalability and adaptability; it can improve computational efficiency, reduce system operation and maintenance costs, and enhance regional regulation and anti-error checking capability, with broad development prospects.
Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution.
1980-05-27
Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution, by Gary M. Hieftje and Gilbert R. Haugen (Technical Report TR-25). Prepared for publication in Analytical and Clinical Chemistry, vol. 3, D. M. Hercules, G. M. Hieftje, L. R. Snyder, and M. A. Evenson, eds., Plenum Press, N.Y., 1978.
Metal production in M 33: space and time variations
NASA Astrophysics Data System (ADS)
Magrini, L.; Stanghellini, L.; Corbelli, E.; Galli, D.; Villaver, E.
2010-03-01
Context. Nearby galaxies are ideal places to study metallicity gradients in detail and their time evolution. Aims: We analyse the spatial distribution of metals in M 33 using a new sample and the literature data on H ii regions, and constrain a model of galactic chemical evolution with H ii region and planetary nebula (PN) abundances. Methods: We consider chemical abundances of a new sample of H ii regions complemented with previous data sets. We compared H ii region and PN abundances obtained with a common set of observations taken at MMT. With an updated theoretical model, we followed the time evolution of the baryonic components and chemical abundances in the disk of M 33, assuming that the galaxy is accreting gas from an external reservoir. Results: From the sample of H ii regions, we find that i) the 2D metallicity distribution has an off-centre peak located in the southern arm; ii) the oxygen abundance gradients in the northern and southern sectors, as well as in the nearest and farthest sides, are identical within the uncertainties, with slopes around -0.03 to -0.04 dex kpc-1; iii) bright giant H ii regions have a steeper abundance gradient than the other H ii regions; iv) H ii regions and PNe have O/H gradients that agree within the errors; v) our updated evolutionary model is able to reproduce the new observational constraints, as well as the metallicity gradient and its evolution. Conclusions: Supported by a uniform sample of nebular spectroscopic observations, we conclude that i) the metallicity distribution in M 33 is very complex, showing a central depression in metallicity probably due to observational bias; ii) the metallicity gradient in the disk of M 33 has a slope of -0.037 ± 0.009 dex kpc-1 in the whole radial range up to ~8 kpc, and -0.044 ± 0.009 dex kpc-1 excluding the central kpc; iii) there is little evolution in the slope with time from the epoch of PN progenitor formation to the present.
Full Tables 2 and 3 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/512/A63
DOE Office of Scientific and Technical Information (OSTI.GOV)
Songaila, A.; Cowie, L. L., E-mail: acowie@ifa.hawaii.edu
2014-10-01
The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5.
We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of unambiguously detecting variation in α using the MM method.
On the use of biomathematical models in patient-specific IMRT dose QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen Heming; Nelms, Benjamin E.; Tome, Wolfgang A.
2013-07-15
Purpose: To investigate the use of biomathematical models such as tumor control probability (TCP) and normal tissue complication probability (NTCP) as new quality assurance (QA) metrics. Methods: Five different types of error (MLC transmission, MLC penumbra, MLC tongue and groove, machine output, and MLC position) were intentionally induced to 40 clinical intensity modulated radiation therapy (IMRT) patient plans (20 H and N cases and 20 prostate cases) to simulate both treatment planning system errors and machine delivery errors in the IMRT QA process. The changes in TCP and NTCP for eight different anatomic structures (H and N: CTV, GTV, both parotids, spinal cord, larynx; prostate: CTV, rectal wall) were calculated as the new QA metrics to quantify the clinical impact on patients. The correlation between the change in TCP/NTCP and the change in selected DVH values was also evaluated. The relation between TCP/NTCP change and the characteristics of the TCP/NTCP curves is discussed. Results: ΔTCP and ΔNTCP were summarized for each type of induced error and each structure. The changes/degradations in TCP and NTCP caused by the errors vary widely depending on dose patterns unique to each plan, and are good indicators of each plan's 'robustness' to that type of error. Conclusions: In this in silico QA study the authors have demonstrated the possibility of using biomathematical models not only as patient-specific QA metrics but also as objective indicators that quantify, pretreatment, a plan's robustness with respect to possible error types.
Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations
NASA Astrophysics Data System (ADS)
Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong
2017-01-01
Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ∼30 deg² to ∼300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
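The time-allocation optimization described here can be illustrated with a toy greedy scheme: split the total observing time into small increments and repeatedly give the next increment to the field with the largest marginal gain in detection probability. The saturating detection-vs-time model, the timescale tau, and all numbers below are illustrative assumptions, not the authors' algorithm:

```python
import math

def allocate(time_budget, sky_probs, tau=1.0, step=0.05):
    """Greedily allocate observing time across fields to maximize
    sum_i p_i * (1 - exp(-t_i / tau)), a toy concave detection model
    where p_i is the sky-localization probability in field i."""
    t = [0.0] * len(sky_probs)
    for _ in range(int(round(time_budget / step))):
        # marginal gain of spending one more increment on each field
        gains = [p * (math.exp(-ti / tau) - math.exp(-(ti + step) / tau))
                 for p, ti in zip(sky_probs, t)]
        best = max(range(len(t)), key=gains.__getitem__)
        t[best] += step
    return t

t = allocate(2.0, [0.5, 0.3, 0.2])
print(t)  # the highest-probability field receives the most time
```

Because the per-field detection curve is concave, this greedy rule equalizes marginal gains and so mirrors the qualitative result above: there is a finite optimal number of fields, with deeper exposures concentrated on the most probable sky regions.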
Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence
Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos
2016-01-01
When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSA) and especially Grover's QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable, just as the Gaussian noise of classical circuits imposed by the Brownian motion of electrons is, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels' deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
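For reference, the noiseless success probability of Grover's QSA after k iterations on a database of N entries with a single marked entry is sin²((2k+1)θ) with θ = arcsin(1/√N); depolarizing noise degrades this figure and quantum coding partially restores it. A quick check of the ideal case only (the noise and coding models of the paper are not reproduced here):

```python
import math

def grover_success(N, k):
    """Ideal (decoherence-free) success probability of Grover's
    algorithm after k iterations with one marked entry out of N."""
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

# Optimal iteration count is approximately (pi/4) * sqrt(N).
k_opt = math.floor(math.pi / 4 * math.sqrt(4096))
print(k_opt, grover_success(4096, k_opt))  # 50 iterations, prob ~0.9999
```

This is the baseline against which the quoted noisy figures (0.22 uncoded, 0.96 with Steane coding) should be read.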
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
A Short History of Probability Theory and Its Applications
ERIC Educational Resources Information Center
Debnath, Lokenath; Basu, Kanadpriya
2015-01-01
This paper deals with a brief history of probability theory and its applications to Jacob Bernoulli's famous law of large numbers and theory of errors in observations or measurements. Included are the major contributions of Jacob Bernoulli and Laplace. It is written to pay the tricentennial tribute to Jacob Bernoulli, since the year 2013…
The Conjunction Fallacy: A Misunderstanding about Conjunction?
ERIC Educational Resources Information Center
Tentori, Katya; Bonini, Nicolao; Osherson, Daniel
2004-01-01
It is easy to construct pairs of sentences X, Y that lead many people to ascribe higher probability to the conjunction X-and-Y than to the conjuncts X, Y. Whether an error is thereby committed depends on reasoners' interpretation of the expressions "probability" and "and." We report two experiments designed to clarify the normative status of…
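The normative rule at stake is that a conjunction can never be more probable than either conjunct: P(X and Y) ≤ min(P(X), P(Y)) for any joint distribution. A toy enumeration makes this concrete (the joint probabilities are made-up illustrative numbers):

```python
# Joint distribution over (X, Y) truth values: illustrative numbers only.
joint = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.4}

p_x = sum(p for (x, _), p in joint.items() if x)   # marginal P(X) = 0.5
p_y = sum(p for (_, y), p in joint.items() if y)   # marginal P(Y) = 0.3
p_xy = joint[(True, True)]                          # P(X and Y) = 0.2

# The conjunction rule holds for every joint distribution, because the
# event (X and Y) is a subset of each conjunct event.
assert p_xy <= min(p_x, p_y)
```

Judging P(X and Y) above either marginal, as many participants do, is the "conjunction fallacy" whose normative status the experiments examine.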
ERIC Educational Resources Information Center
Dougherty, Michael R.; Sprenger, Amber
2006-01-01
This article introduces 2 new sources of bias in probability judgment, discrimination failure and inhibition failure, which are conceptualized as arising from an interaction between error prone memory processes and a support theory like comparison process. Both sources of bias stem from the influence of irrelevant information on participants'…
Aggregate and Individual Replication Probability within an Explicit Model of the Research Process
ERIC Educational Resources Information Center
Miller, Jeff; Schwarz, Wolf
2011-01-01
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…
Strategic planning to reduce medical errors: Part I--diagnosis.
Waldman, J Deane; Smith, Howard L
2012-01-01
Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis, continued in Part II, suggests that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.
Taylor, June S.; Mushak, Paul; Coleman, Joseph E.
1970-01-01
Electron spin resonance (esr) spectra of Cu(II) and Co(II) carbonic anhydrase, and a spin-labeled sulfonamide complex of the Zn(II) enzyme, are reported. The coordination geometry of Cu(II) bound in the enzyme appears to have approximately axial symmetry. Esr spectra of enzyme complexes with metal-binding anions also show axial symmetry and greater covalency, in the order ethoxzolamide < SH- < N3- ≤ CN-. Well-resolved superhyperfine structure in the spectrum of the cyanide complex suggests the presence of two, and probably three, equivalent nitrogen ligands from the protein. Esr spectra of the Co(II) enzyme and its complexes show two types of Co(II) environment, one typical of the native enzyme and the 1:1 CN- complex, and one typical of a 2:1 CN- complex. Co(II) in the 2:1 complex appears to be low-spin and probably has a coordination number of 5. Binding of a spin-labeled sulfonamide to the active center immobilizes the free radical. The similarity of the esr spectra of spin-labeled Zn(II) and Co(II) carbonic anhydrases suggests that the conformation at the active center is similar in the two metal derivatives. PMID:4320976
Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.
2016-01-01
Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
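A highly simplified sketch of the likelihood idea behind error-tolerant match calling: score each pair of samples by a log-likelihood ratio across loci, where locus matches are far more probable for a recapture than for two random individuals. The per-locus genotyping-error rate `eps` and random-match probability `p_rand` below are illustrative assumptions, not the published model, which works with full latent-genotype probabilities:

```python
import math

def log_lr(matches, eps=0.02, p_rand=0.3):
    """Log-likelihood ratio (same individual vs. different individuals)
    for a vector of per-locus True/False match indicators.
    Same individual: a locus matches with prob 1 - eps (error rate eps).
    Different individuals: a locus matches with prob p_rand."""
    llr = 0.0
    for m in matches:
        if m:
            llr += math.log((1 - eps) / p_rand)
        else:
            llr += math.log(eps / (1 - p_rand))
    return llr

# 10 loci with a single mismatch still strongly supports a recapture,
# which is the error tolerance absent from exact-match protocols.
print(log_lr([True] * 9 + [False]))
```

Pairs with log-LR above a threshold are called matches and then grouped by a clustering step into capture histories, mirroring the two-stage approach described above.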
NASA Astrophysics Data System (ADS)
Lawler, James E.; Wood, M. P.; Sneden, C.; Cowan, J. J.
2014-01-01
New atomic transition probability measurements for 371 lines of Ni I in the UV through near IR are reported. These results are used to determine the Ni abundance of the Sun and a very metal-poor main-sequence turnoff dwarf star over a range of wavelength and E. P. values to search for non-LTE effects. For reasons only partially understood, strong lines of Ni I are unusually prone to optical depth errors in emission studies on laboratory sources. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a 3 m echelle spectrometer are combined with published radiative lifetimes from laser induced fluorescence measurements to determine these new transition probabilities. The large echelle spectrometer provides essential UV sensitivity, spectral resolution, and especially freedom from multiplex noise that is needed to eliminate optical depth errors. There is quite good agreement with earlier, but less extensive, sets of measurements by Blackwell et al. (MNRAS 1989, 236, 235) and Wickliffe & Lawler (ApJS 1997, 110, 1163). The new Ni I data are applied to high resolution visible and UV spectra of the Sun and HD 84937 to derive new, more accurate nickel abundances. In the Sun we find log(eps(Ni I)) = 6.28 (sigma = 0.06, 75 lines) and in HD 84937 we find log(eps(Ni I)) = 3.89 (sigma = 0.09, 77 lines), yielding [Ni/Fe] = -0.08 from log(eps(Fe)) = 7.52 in the Sun and log(eps(Fe)) = 5.19 in HD 84937. The Saha balance of Ni in HD 84937 is confirmed using 8 lines of Ni II, although these UV ion lines are somewhat saturated. This work is supported by NASA grant NNX10AN93G (JEL) and NSF grants AST-0908978 and AST-1211585 (CS).
Development of a Nonlinear Probability of Collision Tool for the Earth Observing System
NASA Technical Reports Server (NTRS)
McKinley, David P.
2006-01-01
The Earth Observing System (EOS) spacecraft Terra, Aqua, and Aura fly in constellation with several other spacecraft in 705-kilometer mean altitude sun-synchronous orbits. All three spacecraft are operated by the Earth Science Mission Operations (ESMO) Project at Goddard Space Flight Center (GSFC). In 2004, the ESMO project began assessing the probability of collision of the EOS spacecraft with other space objects. In addition to conjunctions with high relative velocities, the collision assessment method for the EOS spacecraft must address conjunctions with low relative velocities during potential collisions between constellation members. Probability of Collision algorithms that are based on assumptions of high relative velocities and linear relative trajectories are not suitable for these situations; therefore an algorithm for handling the nonlinear relative trajectories was developed. This paper describes this algorithm and presents results from its validation for operational use. The probability of collision is typically calculated by integrating a Gaussian probability distribution over the volume swept out by a sphere representing the size of the space objects involved in the conjunction. This sphere is defined as the Hard Body Radius. With the assumption of linear relative trajectories, this volume is a cylinder, which translates into simple limits of integration for the probability calculation. For the case of nonlinear relative trajectories, the volume becomes a complex geometry. However, with an appropriate choice of coordinate systems, the new algorithm breaks down the complex geometry into a series of simple cylinders that have simple limits of integration. This nonlinear algorithm will be discussed in detail in the paper. The nonlinear Probability of Collision algorithm was first verified by showing that, when used in high relative velocity cases, it yields similar answers to existing high relative velocity linear relative trajectory algorithms. 
The comparison with the existing high velocity/linear theory will also be used to determine at what relative velocity the analysis should use the new nonlinear theory in place of the existing linear theory. The nonlinear algorithm was also compared to a known exact solution for the probability of collision between two objects when the relative motion is strictly circular and the error covariance is spherically symmetric. Figure 1 shows preliminary results from this comparison by plotting the probabilities calculated from the new algorithm and those from the exact solution versus the Hard Body Radius to Covariance ratio. These results show about 5% error when the Hard Body Radius is equal to one half the spherical covariance magnitude. The algorithm was then combined with a high fidelity orbit state and error covariance propagator into a useful tool for analyzing low relative velocity nonlinear relative trajectories. The high fidelity propagator is capable of using atmospheric drag, central body gravitational, solar radiation, and third body forces to provide accurate prediction of the relative trajectories and covariance evolution. The covariance propagator also includes a process noise model to ensure realistic evolutions of the error covariance. This paper will describe the integration of the nonlinear probability algorithm and the propagators into a useful collision assessment tool. Finally, a hypothetical case study involving a low relative velocity conjunction between members of the Earth Observation System constellation will be presented.
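In the linear, high-relative-velocity case, the computation described above reduces to integrating a bivariate normal density over the hard-body circle in the encounter plane. A brute-force quadrature sketch, assuming a spherically symmetric covariance (the grid resolution and parameter names are illustrative; operational tools use more careful integration):

```python
import math

def collision_probability(hbr, miss, sigma, n=400):
    """Integrate a circular bivariate normal N((miss, 0), sigma^2 I)
    over a disk of radius hbr (the hard-body radius) centred at the
    origin, using a simple midpoint grid in the encounter plane."""
    h = 2.0 * hbr / n
    total = 0.0
    for i in range(n):
        x = -hbr + (i + 0.5) * h
        for j in range(n):
            y = -hbr + (j + 0.5) * h
            if x * x + y * y <= hbr * hbr:
                total += math.exp(-((x - miss) ** 2 + y * y)
                                  / (2 * sigma ** 2))
    return total * h * h / (2 * math.pi * sigma ** 2)

# Sanity check: at zero miss distance the closed form is
# 1 - exp(-hbr^2 / (2 sigma^2)).
print(collision_probability(1.0, 0.0, 1.0))  # ~0.3935
```

The nonlinear algorithm in the paper generalizes exactly this step, replacing the single cylinder implied by a straight relative trajectory with a series of simple cylinders along the curved path.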
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide the framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, a smaller inflation in type II error, and optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
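For a normally distributed test statistic, conditional power under the current trend has a standard closed form: with information fraction t, interim z-score Z_t, and one-sided critical value z_alpha, CP = Φ((Z_t√t + (Z_t/√t)(1−t) − z_alpha)/√(1−t)). A sketch of this generic normal-theory formula (not the authors' auxiliary-variable enhancement):

```python
import math
from statistics import NormalDist

def conditional_power(z_t, t, alpha=0.025):
    """Conditional power under the current trend for a one-sided normal
    test: probability of crossing z_alpha at the end of the trial given
    interim z-score z_t observed at information fraction t."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    b_t = z_t * math.sqrt(t)       # Brownian-motion value B(t)
    drift = z_t / math.sqrt(t)     # drift estimated from the data so far
    return NormalDist().cdf((b_t + drift * (1 - t) - z_alpha)
                            / math.sqrt(1 - t))

print(conditional_power(2.0, 0.5))  # ~0.89
```

The bias issue raised in the abstract enters through z_t: early censoring distorts the interim estimate, and hence this whole calculation, which is what the auxiliary variable is meant to correct.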
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance, conditioned on the relative magnitudes of the mean squares. Model selection is done according to user-specified levels of Type I or Type II error probabilities.
Analysis of synchronous digital-modulation schemes for satellite communication
NASA Technical Reports Server (NTRS)
Takhar, G. S.; Gupta, S. C.
1975-01-01
The multipath communication channel for space communications is modeled as a multiplicative channel. This paper discusses the effects of multiplicative channel processes on the symbol error rate for quadrature modulation (QM) digital modulation schemes. An expression for the upper bound on the probability of error is derived and numerically evaluated. The results are compared with those obtained for additive channels.
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters (depth, density, snow grain size, and temperature) at the same time, it still needs prior information on these parameters for the posterior probability calculation, and how the priors influence the SWE retrieval is a major concern. Therefore, in this paper, a sensitivity test is first carried out to study how accurate the snow emission models, and how informative the snow priors, need to be to keep the SWE error within a given bound. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, is used for this purpose, aiming to provide guidance on MCMC application under different circumstances. The method is then applied to snowpits at several sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using measured TB from ground-based radiometers at different bands. Building on the earlier analysis, the error in these practical cases is studied, and the error sources are separated and quantified.
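The random-walk update at the heart of the method is the Metropolis rule: propose a perturbed state and accept it with probability min(1, posterior ratio). A one-parameter toy sketch using a conjugate normal problem, so the exact posterior mean (0.5) is known; the snow-emission forward model is replaced here by an identity observation, purely for illustration:

```python
import math, random

def log_post(x, y=1.0):
    """Unnormalized log posterior: prior x ~ N(0,1), likelihood y ~ N(x,1).
    The exact posterior is N(y/2, 1/2)."""
    return -0.5 * x ** 2 - 0.5 * (y - x) ** 2

random.seed(0)
x, chain = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)          # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop                               # accept; else keep state
    chain.append(x)

post_mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
print(post_mean)  # close to the exact posterior mean 0.5
```

In the retrieval setting, x becomes the vector of snow/soil parameters, log_post combines the snow priors with the TB misfit from the emission model, and the sensitivity of post_mean to the prior is exactly the concern raised above.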
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett
2014-09-01
Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.
Use of genetically engineered swine to elucidate testis function in the boar
USDA-ARS?s Scientific Manuscript database
The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are abundant within the testis, suggesting a critical role. Gene coding errors prevent their production in many species, but both genes are functional in swine. We have demonstrated that GnRHR-II localizes to porcine Le...
NASA Astrophysics Data System (ADS)
Gao, Cheng-Yan; Wang, Guan-Yu; Zhang, Hao; Deng, Fu-Guo
2017-01-01
We present a self-error-correction spatial-polarization hyperentanglement distribution scheme for N-photon systems in a hyperentangled Greenberger-Horne-Zeilinger state over arbitrary collective-noise channels. In our scheme, the errors of spatial entanglement can be first averted by encoding the spatial-polarization hyperentanglement into the time-bin entanglement with identical polarization and defined spatial modes before it is transmitted over the fiber channels. After transmission over the noisy channels, the polarization errors introduced by the depolarizing noise can be corrected resorting to the time-bin entanglement. Finally, the parties in quantum communication can in principle share maximally hyperentangled states with a success probability of 100%.
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of Optical Wireless Links (OWL), it is necessary to establish their availability and probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam, i.e., the proportion of transmitted bits that are received in error. In practice, BER measurement runs into the problem of determining the integration time (measuring time). A bit error ratio tester (BERT) has been developed for measuring and recording BER on OWL. The accessible literature mentions a 1 second integration time for 64 kbps radio links. However, this integration time cannot be used for OWL because of the singularity of coherent beam propagation.
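The integration-time problem can be made concrete with a small sketch (helper names are mine, not from the paper): at a low BER and modest bit rate, very few errors fall within a short window, so a statistically meaningful estimate requires a long measuring time.

```python
def expected_errors(ber, bit_rate_bps, seconds):
    """Expected number of bit errors observed in a measurement window."""
    return ber * bit_rate_bps * seconds

def min_integration_time(ber, bit_rate_bps, min_errors=10):
    """Shortest measurement window expected to capture `min_errors`
    bit errors; shorter windows make the BER estimate meaningless."""
    return min_errors / (ber * bit_rate_bps)
```

For a 64 kbps link at BER 1e-6, only about 0.064 errors are expected per second, so a 1 second window is far too short; roughly 156 seconds are needed just to expect ten errors.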
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance the situation is reversed: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
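A minimal sketch of the convex-versus-concave contrast, using the common power-family spending shape. This illustrates the idea only; it is not the paper's specific boundary construction, and the exponents chosen are arbitrary.

```python
def alpha_spending(t, alpha=0.05, rho=2.0):
    """Power-family Type I error spending function alpha * t**rho at
    information fraction t in [0, 1]. rho > 1 gives the convex shape
    typical of clinical trials (little alpha spent early); rho < 1
    gives a concave shape that spends alpha early, as favored here
    for post-market safety surveillance."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("information fraction must lie in [0, 1]")
    return alpha * t ** rho

# alpha spent by the first quarter of the information under each shape
early_convex = alpha_spending(0.25, rho=3.0)   # spends very little early
early_concave = alpha_spending(0.25, rho=0.5)  # spends half the budget early
```

Both shapes exhaust the full alpha at t = 1; they differ only in how early the budget is spent, which is exactly what drives the expected-sample-size trade-off discussed above.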
Code of Federal Regulations, 2011 CFR
2011-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2013 CFR
2013-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2014 CFR
2014-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2012 CFR
2012-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, it is more desirable to shim the gear axially for geometry I gears and to shim the pinion axially for geometry II gears; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion.
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability, the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions, and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE task-analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as the aptitudes of driver groups or individual drivers. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary with both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly matches the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation, as well as driver training.
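The HRA ratio described above is simple to state in code. A trivial sketch with names of my choosing:

```python
def human_error_probability(errors, opportunities):
    """HRA point estimate: errors committed divided by the number of
    opportunities for that error."""
    if opportunities <= 0:
        raise ValueError("need at least one opportunity for error")
    return errors / opportunities

def driver_reliability(errors, opportunities):
    """Task reliability as the complement of the error probability."""
    return 1.0 - human_error_probability(errors, opportunities)
```

For example, a driver committing 3 errors across 60 recorded opportunities in a given intersection type has an estimated error probability of 0.05, i.e., a reliability of 0.95 for that task.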
NASA Astrophysics Data System (ADS)
Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae
In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast polling. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliot error channel. We derive the probability distribution for the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. By using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. Analysis results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
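A minimal sketch of the two-state Gilbert-Elliot channel underlying the model (parameter names are mine; the paper's contribution is its analytical model, not this simulation):

```python
import random

def gilbert_elliot_errors(n_bits, p_gb, p_bg, e_good, e_bad, seed=0):
    """Simulate a two-state Gilbert-Elliot channel: a Markov chain
    alternates between a good and a bad state with transition
    probabilities p_gb (good->bad) and p_bg (bad->good), and each
    bit is corrupted with the current state's error probability.
    Returns the number of bit errors."""
    rng = random.Random(seed)
    good = True
    errors = 0
    for _ in range(n_bits):
        if rng.random() < (e_good if good else e_bad):
            errors += 1
        # state transition after each bit
        if good and rng.random() < p_gb:
            good = False
        elif not good and rng.random() < p_bg:
            good = True
    return errors
```

Because errors cluster in the bad state, back-to-back retransmission attempts of a bandwidth request are correlated, which is why the delay and loss analysis above cannot treat attempts as independent Bernoulli trials.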
Many tests of significance: new methods for controlling type I errors.
Keselman, H J; Miller, Charles W; Holland, Burt
2011-12-01
There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative (that is, less powerful) compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods (say, 2-FWER instead of 1-FWER, i.e., FWER) are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α = .05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
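One simple k-FWER controller, the single-step generalized Bonferroni rule of Lehmann and Romano, can be sketched in a few lines. It is one of several procedures of the kind the article surveys, not necessarily the authors' recommended method:

```python
def k_fwer_bonferroni(pvalues, k=2, alpha=0.05):
    """Single-step generalized Bonferroni rule: reject H_i when
    p_i <= k*alpha/m. This bounds the probability of k or more
    false rejections (the k-FWER) by alpha."""
    m = len(pvalues)
    cutoff = k * alpha / m
    return [p <= cutoff for p in pvalues]
```

With p-values (0.001, 0.004, 0.02, 0.3), ordinary FWER control (k = 1) uses cutoff 0.0125 and rejects two hypotheses, while 2-FWER control relaxes the cutoff to 0.025 and rejects three, illustrating the power gain described above.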
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors certainly pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low track detection probability; (c) they cannot correctly estimate the initial state for a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originated from the target, in order to increase the detection probability of a track. In order to decrease the false alarm probability, track pruning and track merging, based on an evaluation mechanism, are proposed to reduce the false tracks. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method. What's more, the method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
Flint, Richard; Windsor, John A
2004-04-01
The physiological response to treatment is a better predictor of outcome in acute pancreatitis than are traditional static measures. Retrospective diagnostic test study. The criterion standard was Organ Failure Score (OFS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score at the time of hospital admission. Intensive care unit of a tertiary referral center, Auckland City Hospital, Auckland, New Zealand. Consecutive sample of 92 patients (60 male, 32 female; median age, 61 years; range, 24-79 years) with severe acute pancreatitis. Twenty patients were not included because of incomplete data. The cause of pancreatitis was gallstones (42%), alcohol use (27%), or other (31%). At hospital admission, the mean +/- SD OFS was 8.1 +/- 6.1, and the mean +/- SD APACHE II score was 19.9 +/- 8.2. All cases were managed according to a standardized protocol. There was no randomization or testing of any individual interventions. Survival and death. There were 32 deaths (pretest probability of dying was 35%). The physiological response to treatment was more accurate in predicting the outcome than was OFS or APACHE II score at hospital admission. For example, 17 patients had an initial OFS of 7-8 (posttest probability of dying was 58%); after 48 hours, 7 had responded to treatment (posttest probability of dying was 28%), and 10 did not respond (posttest probability of dying was 82%). The effect of the change in OFS and APACHE II score was graphically depicted by using a series of logistic regression equations. The resultant sigmoid curve suggests that there is a midrange of scores (the steep portion of the graph) within which the probability of death is most affected by the response to intensive care treatment. Measuring the initial severity of pancreatitis combined with the physiological response to intensive care treatment is a practical and clinically relevant approach to predicting death in patients with severe acute pancreatitis.
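The pretest and posttest probabilities quoted above follow the odds form of Bayes' rule. A minimal sketch of that mechanism (illustrative only; it is not the authors' logistic regression model, and the likelihood ratio below is an assumed value):

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Odds form of Bayes' rule: convert the pretest probability to
    odds, multiply by the likelihood ratio associated with the
    observed treatment response, and convert back to a probability."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)
```

For instance, starting from the cohort's 35% pretest probability of dying, a response pattern carrying a likelihood ratio of 2 raises the probability to about 52%, while a likelihood ratio below 1 (a favorable response) lowers it, mirroring the 58% → 28%/82% split reported for the OFS 7-8 subgroup.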
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2). Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
Airborne radar technology for windshear detection
NASA Technical Reports Server (NTRS)
Hibey, Joseph L.; Khalaf, Camille S.
1988-01-01
This report describes the objectives and accomplishments of a two-and-a-half-year effort to determine how returns from on-board Doppler radar can be used to detect the presence of wind shear. The problem is modeled as one of first passage in terms of state variables, the state estimates are generated by a bank of extended Kalman filters working in parallel, and the decision strategy involves the use of a voting algorithm for a series of likelihood ratio tests. The performance issue for filtering is addressed in terms of error-covariance reduction and filter divergence, and the performance issue for detection is addressed in terms of using a probability measure transformation to derive theoretical expressions for the error probabilities of a false alarm and a miss.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Advances in the meta-analysis of heterogeneous clinical trials II: The quality effects model.
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines the performance of the updated quality effects (QE) estimator for meta-analysis of heterogeneous studies. It is shown that this approach leads to a decreased mean squared error (MSE) of the estimator while maintaining the nominal level of coverage probability of the confidence interval. Extensive simulation studies confirm that this approach leads to the maintenance of the correct coverage probability of the confidence interval, regardless of the level of heterogeneity, as well as a lower observed variance compared to the random effects (RE) model. The QE model is robust to subjectivity in quality assessment down to completely random entry, in which case its MSE equals that of the RE estimator. When the proposed QE method is applied to a meta-analysis of magnesium for myocardial infarction data, the pooled mortality odds ratio (OR) becomes 0.81 (95% CI 0.61-1.08) which favors the larger studies but also reflects the increased uncertainty around the pooled estimate. In comparison, under the RE model, the pooled mortality OR is 0.71 (95% CI 0.57-0.89) which is less conservative than that of the QE results. The new estimation method has been implemented into the free meta-analysis software MetaXL which allows comparison of alternative estimators and can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to achieve Type I and Type II error rates comparable to the current fixed-sample binomial test. Policymakers might consider efficient alternatives to the current procedure, such as the SPRT. Copyright © 2017 Elsevier Ltd. All rights reserved.
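A minimal sketch of a binomial SPRT in the spirit of the comparison above, using Wald's classical approximate boundaries. This is an illustration of the general technique, not California's regulatory procedure, and the hypothesized exceedance rates are assumed values:

```python
import math

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for a binomial proportion: accumulate the
    log-likelihood ratio sample by sample (each sample is 1 for an
    exceedance, 0 otherwise) and stop at Wald's approximate
    boundaries. Returns (decision, number of samples used)."""
    upper = math.log((1 - beta) / alpha)   # cross above: reject H0
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(samples)
```

A run of exceedances crosses the rejection boundary after only a couple of samples, while a run of clean samples accepts H0 after six, illustrating how sequential testing reaches a decision with fewer samples than a fixed-sample binomial test.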
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-15
... contact the Census Bureau's Social, Economic and Housing Statistics Division at (301) 763- 3243. Under the... the use of probability sampling to create the sample. For additional information about the accuracy of... consists of the error that arises from the use of probability sampling to create the sample. \\2\\ These...
1999-01-01
twenty-first century. These papers illustrate topics such as the development of virtual environment applications, different uses of VRML in information system...interfaces, an examination of research in virtual reality environment interfaces, and five approaches to supporting changes in virtual environments...we get false negatives that contribute to the probability of false rejection P(rej). Taking these error probabilities into account, we define a
ERIC Educational Resources Information Center
Nelson, Jonathan D.
2007-01-01
Reports an error in "Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain" by Jonathan D. Nelson (Psychological Review, 2005[Oct], Vol 112[4], 979-999). In Table 13, the data should indicate that 7% of females had short hair and 93% of females had long hair. The calculations and discussion in the article…
1982-10-13
Wiese, W.L., Smith, M.W., and Miles, B.M. (1969) Atomic Transition Probabilities, Vol. II, NSRDS-NBS 22. 8. Green, B.D., private communication...sidearms simultaneously changes the flow velocity (that is, the residence time) and the ratio of charge to number density E/N in the discharge plasma, as...Levels, Vol. I, NSRDS-NBS 35. 7. Wiese, W. L., Smith, M. W., and Miles, B. M. (1969), Atomic Transition Probabilities, Vol. II, NSRDS-NBS 22. 8. Green, B
Frequency, probability, and prediction: easy solutions to cognitive illusions?
Griffin, D; Buehler, R
1999-02-01
Many errors in probabilistic judgment have been attributed to people's inability to think in statistical terms when faced with information about a single case. Prior theoretical analyses and empirical results imply that the errors associated with case-specific reasoning may be reduced when people make frequentistic predictions about a set of cases. In studies of three previously identified cognitive biases, we find that frequency-based predictions are different from, but no better than, case-specific judgments of probability. First, in studies of the "planning fallacy," we compare the accuracy of aggregate frequency and case-specific probability judgments in predictions of students' real-life projects. When aggregate and single-case predictions are collected from different respondents, there is little difference between the two: Both are overly optimistic and show little predictive validity. However, in within-subject comparisons, the aggregate judgments are significantly more conservative than the single-case predictions, though still optimistically biased. Results from studies of overconfidence in general knowledge and base rate neglect in categorical prediction underline a general conclusion. Frequentistic predictions made for sets of events are no more statistically sophisticated, nor more accurate, than predictions made for individual events using subjective probability. Copyright 1999 Academic Press.
Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun
2016-09-20
The accelerated aging tests under electric stress for one type of LED lamp are conducted, and the differences between online and offline tests of the degradation of luminous flux are studied in this paper. The transformation of the two test modes is achieved with an adjustable AC voltage stabilized power source. Experimental results show that the exponential fitting of the luminous flux degradation in online tests possesses a higher fitting degree for most lamps, and the degradation rate of the luminous flux by online tests is always lower than that by offline tests. Bayes estimation and Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and then the reliability of the lamps under rated voltage of 220 V is estimated by use of the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5%, at the failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
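The inverse power law extrapolation described above can be sketched as follows. The numerical values and helper names are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def inverse_power_life(life_accel_h, v_accel, v_rated, n_exp):
    """Inverse power law: life is proportional to V**-n, so the
    rated-voltage lifetime is the accelerated-test lifetime scaled
    by (V_accel / V_rated) ** n."""
    return life_accel_h * (v_accel / v_rated) ** n_exp

def weibull_failure_probability(t, scale, shape):
    """Weibull CDF F(t) = 1 - exp(-(t/scale)**shape), giving the
    failure probability by time t."""
    return 1.0 - math.exp(-((t / scale) ** shape))
```

For example, a lamp population with a characteristic life of 1000 h under an accelerated 260 V stress would, with an assumed power-law exponent of 5, be extrapolated to roughly 2300 h at the rated 220 V; the Weibull CDF then converts any target time into a failure probability for comparison with the 0.1% to 10% levels quoted above.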
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisi, M. C.; Clariá, J. J.; Marcionni, N.
2015-05-15
We obtained spectra of red giants in 15 Small Magellanic Cloud (SMC) clusters in the region of the Ca ii lines with FORS2 on the Very Large Telescope. We determined the mean metallicity and radial velocity with mean errors of 0.05 dex and 2.6 km s⁻¹, respectively, from a mean of 6.5 members per cluster. One cluster (B113) was too young for a reliable metallicity determination and was excluded from the sample. We combined the sample studied here with 15 clusters previously studied by us using the same technique, and with 7 clusters whose metallicities determined by other authors are on a scale similar to ours. This compilation of 36 clusters is the largest SMC cluster sample currently available with accurate and homogeneously determined metallicities. We found a high probability that the metallicity distribution is bimodal, with potential peaks at −1.1 and −0.8 dex. Our data show no strong evidence of a metallicity gradient in the SMC clusters, somewhat at odds with recent evidence from Ca ii triplet spectra of a large sample of field stars. This may be revealing possible differences in the chemical history of clusters and field stars. Our clusters show a significant dispersion of metallicities, whatever age is considered, which could be reflecting the lack of a unique age–metallicity relation in this galaxy. None of the chemical evolution models currently available in the literature satisfactorily represents the global chemical enrichment processes of SMC clusters.
Tsagakis, Konstantinos; Tossios, Paschalis; Kamler, Markus; Benedik, Jaroslav; Natour, Dorgam; Eggebrecht, Holger; Piotrowski, Jarowit; Jakob, Heinz
2011-11-01
The DeBakey classification was used to discriminate the extent of acute aortic dissection (AD) and was correlated to long-term outcome and re-intervention rate. A slight modification of the type II subgroup definition was applied, incorporating the aortic arch when full resectability of the dissection process was given. Between January 2001 and March 2010, 118 patients (64% male, mean age 59 years) underwent surgery for acute AD. Seventy-four were operated on for type I and 44 for type II AD. Complete resection of all entry sites was performed, including antegrade stent grafting for proximal descending lesions. Patients were comparable with respect to demographics and preoperative hemodynamic status. They underwent isolated ascending replacement, hemiarch, or total arch replacement in 7%, 26%, and 67% in type I, versus 27%, 37%, and 36% in type II, respectively. Additional descending stent grafting was performed in 33/74 (45%) type I patients. In-hospital mortality was 14% overall: 16% (12/74) in type I versus 9% (4/44) in type II, p=0.405. After 5 years, the estimated survival rate was 63% in type I versus 80% in type II, p=0.135. In type II, no distal aortic re-intervention was required. In type I, freedom from distal re-intervention was 82% in patients with additional stent grafting versus 53% in patients without, p=0.022. The slightly modified DeBakey classification accurately reflects late outcome and aortic re-intervention probability. Thus, in type II patients, the aorta seems to be healed, with little probability of later re-operation or re-intervention. Copyright © 2011 European Association for Cardio-Thoracic Surgery. Published by Elsevier B.V. All rights reserved.
Schipler, Agnes; Iliakis, George
2013-01-01
Although the DNA double-strand break (DSB) is defined as a rupture in the double-stranded DNA molecule that can occur without chemical modification in any of the constituent building blocks, it is recognized that this form is restricted to enzyme-induced DSBs. DSBs generated by physical or chemical agents can include at the break site a spectrum of base alterations (lesions). The nature and number of such chemical alterations define the complexity of the DSB and are considered putative determinants for repair pathway choice and the probability that errors will occur during this processing. As the pathways engaged in DSB processing show distinct and frequently inherent propensities for errors, pathway choice also defines the error levels cells opt to accept. Here, we present a classification of DSBs on the basis of increasing complexity and discuss how complexity may affect processing, as well as how it may cause lethal or carcinogenic processing errors. By critically analyzing the characteristics of DSB repair pathways, we suggest that all repair pathways can in principle remove lesions clustering at the DSB but are likely to fail when they encounter clusters of DSBs that cause a local form of chromothripsis. In the same framework, we also analyze the rationale behind DSB repair pathway choice. PMID:23804754
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into topsoil can be described by alternative models with different degrees of fidelity: the Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
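The trade-off between model error and sampling error described above can be illustrated with a toy budget calculation. All numbers (biases, costs, budget) are hypothetical; the point is only that a biased but cheap model can attain a smaller total error under a fixed computational budget.

```python
import math

def total_error(model_bias, sample_std, cost_per_run, budget):
    """Model error (bias) plus Monte Carlo sampling error under a fixed budget.

    The number of affordable realizations is budget // cost_per_run, and the
    sampling error shrinks like 1/sqrt(n_runs)."""
    n_runs = max(1, int(budget // cost_per_run))
    sampling_error = sample_std / math.sqrt(n_runs)
    return model_bias + sampling_error

# Hypothetical: the reduced-complexity model is biased but 100x cheaper.
err_low_fidelity  = total_error(model_bias=0.02, sample_std=1.0,
                                cost_per_run=1,   budget=1000)
err_high_fidelity = total_error(model_bias=0.0,  sample_std=1.0,
                                cost_per_run=100, budget=1000)
```

Here the low-fidelity model wins because its 1000 realizations beat the 10 affordable high-fidelity runs; assimilating data, as in the study, shrinks the bias term and tilts the comparison further toward the cheap model.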
Reducing number entry errors: solving a widespread, serious problem.
Thimbleby, Harold; Cairns, Paul
2010-10-06
Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may pass unnoticed yet be processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
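A minimal sketch of the kind of error management the paper argues for: block syntactically malformed entries instead of silently reinterpreting them, and flag order-of-magnitude discrepancies against a reference value. The regex and the factor-of-ten check are illustrative assumptions, not the authors' demonstration interface.

```python
import re

# Accept digits with at most one decimal point; "5..0" or "5.0.0" are blocked.
NUMBER_RE = re.compile(r'\d+(\.\d+)?')

def parse_entry(keys: str) -> float:
    """Accept an entry only if it is a syntactically valid decimal number."""
    if not NUMBER_RE.fullmatch(keys):
        raise ValueError(f"blocked malformed entry: {keys!r}")
    return float(keys)

def out_by_ten(entered: float, reference: float) -> bool:
    """Flag values that differ from a reference (e.g. a usual dose) by 10x or more."""
    if entered <= 0 or reference <= 0:
        return False
    ratio = max(entered, reference) / min(entered, reference)
    return ratio >= 10
```

Rejecting malformed key sequences at entry time is exactly the "detect and block" behavior the abstract notes most systems lack.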
Alderete, John; Davies, Monica
2018-04-01
This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on-the-spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot.
Fish debris record the hydrothermal activity in the Atlantis II deep sediments (Red Sea)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oudin, E.; Cocherie, A.
1988-01-01
The REE and U, Th, Zr, Hf, Sc have been analyzed in samples from Atlantis II and Shaban/Jean Charcot Deeps in the Red Sea. The high Zr/Hf ratio in some sediments indicates the presence of fish debris or of finely crystallized apatite. The positive ΣREE vs. P₂O₅ and ΣREE vs. Zr/Hf correlations show that fish debris and finely crystallized apatite are the main REE sink in Atlantis II Deep sediments as in other marine environments. The hydrothermal sediments and the fish debris concentrates have similar REE patterns, characterized by a LREE enrichment and a large positive Eu anomaly. This REE pattern is also observed in E.P.R. hydrothermal solutions. Fish debris from marine environments acquire their REE content and signature mostly from sea water during early diagenesis. The hydrothermal REE signature of Atlantis II Deep fish debris indicates that they probably record the REE signature of their hydrothermal sedimentation and diagenetic environment. The different REE signatures of the Shaban/Jean Charcot and Atlantis II Deep hydrothermal sediments suggest a sea water-dominated brine in the Shaban/Jean Charcot Deep as opposed to the predominantly hydrothermal brine in Atlantis II Deep. Atlantis II Deep fish debris are also characterized by their high U but low Th contents. Their low Th contents probably reflect the low Th content of the various possible sources (sea water, brine, sediments). Their U contents are probably controlled by the redox conditions of sedimentation.
2012-08-01
An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix; this in turn enables fast solution of an appropriately… The probability distribution is given by the inverse of the Hessian of the negative log-likelihood function. For Gaussian data noise and model error, this…
Super-global distortion correction for a rotational C-arm x-ray image intensifier.
Liu, R R; Rudin, S; Bednarek, D R
1999-09-01
Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C arm. Calibration bead images were acquired with a standard C arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images at a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded a rms error of 0.21 pixels, a negligibly small difference. We evaluated the dependence of the SG model's accuracy on various factors, such as the single-plane global fitting order, SG order, and angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10° sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated, and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of computational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
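The super-global idea, spatial correction coefficients that are themselves polynomials in the C-arm angle, can be sketched in one spatial dimension. The actual model is a fifth-order 2D spatial fit with a sixth-order angular fit; the toy coefficient table below is hypothetical.

```python
def horner(coeffs, x):
    """Evaluate a polynomial; coefficients ordered from highest degree to constant."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def super_global_correction(angle_deg, x, angle_coeff_table):
    """Two nested polynomial evaluations, as in the super-global model.

    angle_coeff_table[i] holds the angle-polynomial that produces spatial
    coefficient i; the resulting spatial polynomial is then evaluated at x."""
    spatial_coeffs = [horner(a, angle_deg) for a in angle_coeff_table]
    return horner(spatial_coeffs, x)

# Toy table: spatial coefficient 0 is constant 1, coefficient 1 is 0 at all
# angles, so the "correction" is the identity map regardless of angle.
identity_table = [[0.0, 1.0], [0.0, 0.0]]
corrected = super_global_correction(30.0, 2.5, identity_table)
```

Storing angle-polynomials instead of a lookup table per gantry angle is what saves the processing resources and storage space the abstract mentions.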
Baldwin, G S; Waley, S G; Abraham, E P
1979-01-01
1. Four histidine-containing peptides have been isolated from a tryptic digest of the Zn2+-requiring beta-lactamase II from Bacillus cereus. One of these peptides probably contains two histidine residues. 2. The presence of one equivalent of Zn2+ substantially decreases the rate of exchange of the C-2 proton in at least two and probably three of the histidine residues of these peptides for solvent ³H. 3. It is concluded that peptides containing at least two of the three histidine residues acting as Zn2+ ligands at the tighter Zn2+-binding site of beta-lactamase II have been identified. PMID:314287
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction errors plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robust updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating the prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness of fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' theorem at the model class level. In this paper, the effect of these strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with added complexity, and a model with modeling error. Bayesian updating is performed for the three FE models under the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined.
The results are compared on the basis of model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
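The proposed algorithm can be sketched as a delta rule driven by the state prediction error: the estimated probability of the observed successor state is pushed toward 1 and all others toward 0. The learning rate and the simulated transition data are illustrative assumptions.

```python
import random

def learn_transition_probs(transitions, n_states, lr=0.02):
    """Estimate transition probabilities from (state, next_state) observations.

    Each update is lr * (target - p), i.e. a prediction-error step. Because
    the updates across successor states sum to zero, every row of the
    estimate stays normalized to 1."""
    p = [[1.0 / n_states] * n_states for _ in range(n_states)]
    for s, s_next in transitions:
        for t in range(n_states):
            target = 1.0 if t == s_next else 0.0
            p[s][t] += lr * (target - p[s][t])  # prediction-error update
    return p

# Demo: two states, true p(0 -> 0) = 0.8, simulated observations.
random.seed(7)
observed = [(0, 0 if random.random() < 0.8 else 1) for _ in range(2000)]
probs = learn_transition_probs(observed, n_states=2)
```

With a constant learning rate the estimate is an exponentially weighted average of recent outcomes, which is what lets the rule track non-stationary transition probabilities of the kind probed in decision-making tasks.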
Tentori, Katya; Chater, Nick; Crupi, Vincenzo
2016-04-01
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
Performance of optimum detector structures for noisy intersymbol interference channels
NASA Technical Reports Server (NTRS)
Womer, J. D.; Fritchman, B. D.; Kanal, L. N.
1971-01-01
The errors that arise in transmitting digital information over radio or wireline systems, caused by additive noise and by interference between successively transmitted signals, are described. The probability of error and the performance of optimum detector structures are examined. A comparative study of the performance of certain detector structures and approximations to them, and of the performance of a transversal equalizer, is included.
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing attention of novices and experts on a laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand specific differences. Non-parametric statistics were used for data analysis. A total of 21,109 movements and 9,036 errors were analysed. Novices had increased mean task completion time (seconds) (171 ± 44SD vs. 149 ± 34, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. The correct responses to auditory stimuli were less frequent in experts (68%) compared to novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention during these conditions.
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model, where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul
2006-09-01
Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between two PDFs varied from low (78%) to high (94.8%) when the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of GTV was maintained; phantom lung receiving 10%-20% prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose of the tumor surrounding tissue can be theoretically reduced by PDF based treatment planning, the reliability and applicability of this method highly depend on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error and PDF error together, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.
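The core of PDF-based evaluation is that the expected delivered dose is the static dose profile convolved with the motion PDF. The study optimized fluences against the PDF; this simplified 1D sketch only shows the expectation step, with a hypothetical delta-like dose and a 3-bin motion PDF.

```python
def motion_blurred_dose(static_dose, pdf):
    """Expected delivered dose: static profile convolved with the motion PDF.

    pdf[k] is the probability of a shift of (k - len(pdf)//2) voxels; dose
    shifted outside the grid is simply lost (no wrap-around)."""
    half = len(pdf) // 2
    n = len(static_dose)
    out = [0.0] * n
    for i in range(n):
        for k, p in enumerate(pdf):
            j = i - (k - half)
            if 0 <= j < n:
                out[i] += p * static_dose[j]
    return out

# A single hot voxel blurred by a symmetric motion PDF spreads to its neighbors.
blurred = motion_blurred_dose([0.0, 0.0, 1.0, 0.0, 0.0], [0.25, 0.5, 0.25])
```

If the planning PDF and the delivery PDF differ, the expectation is taken against the wrong kernel, which is exactly how PDF irreproducibility turns into the dosimetric error the abstract quantifies.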
NASA Astrophysics Data System (ADS)
Audenaert, Koenraad M. R.; Mosonyi, Milán
2014-10-01
We consider the multiple hypothesis testing problem for symmetric quantum state discrimination between r given states σ1, …, σr. By splitting up the overall test into multiple binary tests in various ways we obtain a number of upper bounds on the optimal error probability in terms of the binary error probabilities. These upper bounds allow us to deduce various bounds on the asymptotic error rate, for which it has been hypothesized that it is given by the multi-hypothesis quantum Chernoff bound (or Chernoff divergence) C(σ1, …, σr), as recently introduced by Nussbaum and Szkoła in analogy with Salikhov's classical multi-hypothesis Chernoff bound. This quantity is defined as the minimum of the pairwise binary Chernoff divergences, min_{j<k} C(σj, σk).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, E; Harrison, A; Eldredge-Hindy, H
Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; its novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk priority numbers (RPN): -Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up. Peer review reduces the likelihood of an inappropriate treatment regimen. -A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions. This reduces the chance of miscommunication between the physician and the planning physicist, and increases the detectability of an error during the physics second check. -A screen check was implemented during the second check to increase the detectability of an error. -To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display. This facilitates data entry and verification. -VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification.
Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
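The FMEA scoring used to rank process nodes multiplies the severity, occurrence, and detection scores into a risk priority number (RPN); interventions then target the highest RPNs. The failure modes and scores below are hypothetical illustrations, not values from the study.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: each factor conventionally scored 1-10,
    with higher meaning more severe, more frequent, or harder to detect."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("FMEA scores are conventionally on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical failure modes: (severity, occurrence, detection)
failure_modes = {
    "wrong dwell time entered": (9, 4, 6),
    "wrong applicator selected": (8, 2, 3),
}
ranked = sorted(failure_modes,
                key=lambda m: rpn(*failure_modes[m]), reverse=True)
```

Interventions like the second-check screen lower the detection score, which multiplies through to a lower RPN even when severity cannot be changed.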
Deficits on irregular verbal morphology in Italian-speaking Alzheimer's disease patients
Walenski, Matthew; Sosta, Katiuscia; Cappa, Stefano; Ullman, Michael T.
2010-01-01
Studies of English have shown that temporal-lobe patients, including those with Alzheimer's disease, are spared at processing real and novel regular inflected forms (e.g., blick → blicked; walk → walked), but impaired at real and novel irregular forms (e.g., spling → splang; dig → dug). Here we extend the investigation cross-linguistically to the more complex system of Italian verbal morphology, allowing us to probe the generality of the previous findings in English, as well as to test different explanatory accounts of inflectional morphology. We examined the production of real and novel regular and irregular past-participle and present-tense forms by native Italian-speaking healthy control subjects and patients with probable Alzheimer's disease. Compared to the controls, the patients were impaired at inflecting real irregular verbs, but not real regular verbs, for both past-participle and present-tense forms. For novel past participles, the patients exhibited this same pattern of impaired production of class II (irregular) forms but spared class I (regular) production. In the present tense, patients were impaired at the production of class II forms (which are regular in the present tense), but spared at production of class I (regular) forms. Contrary to the pattern observed in English, the errors made by the patients on irregulars did not reveal a predominance of regularization errors (e.g., dig → digged). The findings thus partly replicate prior findings from English, but also reveal new patterns from a language with a more complex morphological system that includes verb classes (which are not possible to test in English). 
The demonstration of an irregular deficit following temporal-lobe damage in a language other than English reveals the cross-linguistic generality of the basic effect, while also elucidating important language-specific differences in the neuro-cognitive basis of regular and irregular morphological forms. PMID:19428387
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S³ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S³ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S³ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. 
Furthermore, computing a solution guaranteed to satisfy the S³ONC admits an FPTAS.
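The minimax concave penalty (MCP) referenced in (iii) has a standard closed form, as does the scalar thresholding operator it induces in coordinate-wise algorithms. This sketch uses those textbook formulas with arbitrary λ and γ values; it is not code from the paper.

```python
def mcp_penalty(t, lam, gamma):
    """Minimax concave penalty, one of the folded concave penalties:
    p(t) = lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam, else gamma*lam^2/2."""
    a = abs(t)
    if a <= gamma * lam:
        return lam * a - t * t / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def mcp_threshold(z, lam, gamma):
    """Closed-form scalar minimizer of 0.5*(x - z)**2 + mcp_penalty(x, lam, gamma),
    valid for gamma > 1: zero below lam, linear rescaling up to gamma*lam,
    then the identity (no shrinkage of large signals)."""
    a, s = abs(z), (1.0 if z >= 0 else -1.0)
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return s * (a - lam) / (1.0 - 1.0 / gamma)
    return z
```

The flat region of the penalty beyond γλ is what removes the bias that the Lasso imposes on large coefficients, which underlies the oracle-type comparisons in (iii).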
Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Ángel; Kohler, Johannes
2014-01-01
Background Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results A total sample of 364 children aged 3–11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. 
Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research. PMID:24560253
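The screening accuracy figures above are direct functions of confusion-matrix counts. As a minimal sketch (the counts below are hypothetical illustrations, not the study's data), sensitivity, specificity, and positive predictive value can be computed as:

```python
def screening_metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, PPV) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true cases flagged by the screen
    specificity = tn / (tn + fp)   # fraction of non-cases correctly passed
    ppv = tp / (tp + fp)           # fraction of referrals that are true cases
    return sensitivity, specificity, ppv

# Hypothetical counts for a teacher-administered screen of 344 children
sens, spec, ppv = screening_metrics(tp=13, fp=9, tn=320, fn=2)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} PPV={ppv:.3f}")
```

Note how a specificity above 95% can still coexist with a modest PPV when prevalence is low, which matches the pattern reported above.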
Automatic detection of anomalies in screening mammograms
2013-01-01
Background Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population, there is a large probability that the screening exam will be normal. That factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening images into normal and suspicious bins based on their likelihood to contain disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet-transformed mammograms is reported. Methods In the multi-step procedure, images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society's database (MIAS). These databases contain radiologist-verified images and feature common abnormalities including spiculations, masses, geometric deformations and fibroid tissues. Results The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities.
This performance significantly exceeds the mean sensitivity reported in the literature for the unaided human expert. Conclusions Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies; as a result, Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
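The naïve Bayesian stage of such a network combines per-feature evidence under an independence assumption. A minimal sketch of that combination step (the prior of 0.37 echoes the pre-sorting figure above, but the likelihood ratios are invented for illustration and are not the study's features):

```python
import math

def naive_bayes_posterior(prior, likelihood_ratios):
    """Combine independent per-feature likelihood ratios
    P(feature | abnormal) / P(feature | normal) with a prior probability
    of abnormality, working in log-odds space for numerical stability."""
    log_odds = math.log(prior / (1 - prior))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical wavelet-feature likelihood ratios for one mammogram
post = naive_bayes_posterior(prior=0.37, likelihood_ratios=[2.1, 0.8, 3.5])
print(f"P(abnormal | features) = {post:.3f}")
```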
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...
2016-08-01
Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders
Wenner, Joshua B; Norena, Monica; Khan, Nadia; Palepu, Anita; Ayas, Najib T; Wong, Hubert; Dodek, Peter M
2009-09-01
Although reliability of severity of illness and predicted probability of hospital mortality have been assessed, interrater reliability of the abstraction of primary and other intensive care unit (ICU) admitting diagnoses and underlying comorbidities has not been studied. Patient data from one ICU were originally abstracted and entered into an electronic database by an ICU nurse. A research assistant reabstracted patient demographics, ICU admitting diagnoses and underlying comorbidities, and elements of Acute Physiology and Chronic Health Evaluation II (APACHE II) score from a random sample of 100 of the 474 patients admitted during 2005, using an identical electronic database. Chamberlain's percent positive agreement was used to compare diagnoses and comorbidities between the 2 data abstractors. A kappa statistic was calculated for demographic variables, Glasgow Coma Score, APACHE II chronic health points, and HIV status. Intraclass correlation was calculated for acute physiology points and predicted probability of hospital mortality. Percent positive agreement for ICU primary and other admitting diagnoses ranged from 0% (primary brain injury) to 71% (sepsis), and for underlying comorbidities, from 40% (coronary artery bypass graft) to 100% (HIV). Agreement as measured by kappa statistic was strong for race (0.81) and age points (0.95), moderate for chronic health points (0.50) and HIV (0.66), and poor for Glasgow Coma Score (0.36). Intraclass correlation showed a moderate-high agreement for acute physiology points (0.88) and predicted probability of hospital mortality (0.71). Reliability for ICU diagnoses and elements of the APACHE II score is related to the objectivity of primary data in the medical charts.
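Both agreement statistics used in this study reduce to simple functions of a 2×2 table of rater decisions. A sketch with hypothetical counts (not the study's data):

```python
def percent_positive_agreement(both_yes, only_rater1, only_rater2):
    """Chamberlain's percent positive agreement for a dichotomous item:
    agreement computed over positive ratings only, ignoring joint negatives."""
    return 2 * both_yes / (2 * both_yes + only_rater1 + only_rater2)

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both yes, b = rater1 yes / rater2 no,
    c = rater1 no / rater2 yes, d = both no."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical reabstraction counts for one comorbidity in 100 charts
print(f"PPA   = {percent_positive_agreement(8, 2, 2):.2f}")
print(f"kappa = {cohens_kappa(8, 2, 2, 88):.2f}")
```

Kappa discounts chance agreement, which is why a rare comorbidity can show high raw agreement but only moderate kappa.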
Opioid receptors regulate blocking and overexpectation of fear learning in conditioned suppression.
Arico, Carolyn; McNally, Gavan P
2014-04-01
Endogenous opioids play an important role in prediction error during fear learning. However, the evidence for this role has been obtained almost exclusively using the species-specific defense response of freezing as the measure of learned fear. It is unknown whether opioid receptors regulate predictive fear learning when other measures of learned fear are used. Here, we used conditioned suppression as the measure of learned fear to assess the role of opioid receptors in fear learning. Experiment 1a studied associative blocking of fear learning. Rats in an experimental group received reinforced training with conditioned stimulus A (CSA+) in Stage I and with the compound of stimuli A and B (CSAB+) in Stage II, whereas rats in a control group received only CSAB+ training in Stage II. The prior fear conditioning of CSA blocked fear learning to conditioned stimulus B (CSB) in the experimental group. In Experiment 1b, naloxone (4 mg/kg) administered before Stage II prevented this blocking, thereby enabling normal fear learning to CSB. Experiment 2a studied overexpectation of fear. Rats received CSA+ and CSB+ training in Stage I, and then rats in the experimental group received CSAB+ training in Stage II whereas control rats did not. The Stage II compound training of CSAB reduced fear to CSA and CSB on test. In Experiment 2b, naloxone (4 mg/kg) administered before Stage II prevented this overexpectation. These results show that opioid receptors regulate Pavlovian fear learning, augmenting learning in response to positive prediction error and impairing learning in response to negative prediction error, when fear is assessed via conditioned suppression. These effects are identical to those observed when freezing is used as the measure of learned fear. These findings show that the role for opioid receptors in regulating fear learning extends across multiple measures of learned fear.
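Blocking and overexpectation designs like these are commonly formalized with the Rescorla-Wagner rule, in which all cues present on a trial share a common prediction error. The abstract does not commit to this particular model, so the following is only an illustrative sketch (learning rate and trial counts are arbitrary):

```python
def rescorla_wagner(trials, alpha=0.3):
    """Rescorla-Wagner update: each trial, dV = alpha * (lambda - sum V)
    applied to every cue present. `trials` is a list of
    (set_of_present_cues, lambda_on_that_trial)."""
    V = {}
    for cues, lam in trials:
        error = lam - sum(V.get(c, 0.0) for c in cues)  # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

# Blocking: Stage I trains A alone; Stage II trains the AB compound.
stage1 = [({"A"}, 1.0)] * 20
stage2 = [({"A", "B"}, 1.0)] * 10
V = rescorla_wagner(stage1 + stage2)
print(f"V(A)={V['A']:.3f}  V(B)={V['B']:.5f}")  # B stays weak: A already predicts shock
```

Under the same rule, overexpectation follows from compound training after both cues separately reach asymptote: the summed prediction overshoots lambda, the error goes negative, and both strengths decline, which is the pattern naloxone abolished here.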
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies.
PMID:26000444
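The proposed propagation can be sketched by evaluating a power curve at perturbed wind speeds. The curve below is a hypothetical cubic model, not one of the paper's 28 Lagrange-fitted curves, and the parameters are invented:

```python
def power_output(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2000.0):
    """Hypothetical turbine power curve (kW): cubic rise between cut-in
    and rated speed, flat at rated power, zero outside operating range."""
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3

def propagated_error(v, rel_speed_err):
    """Propagate a relative wind-speed error into a relative power error
    by evaluating the curve at the perturbed speeds (central difference)."""
    p = power_output(v)
    hi = power_output(v * (1 + rel_speed_err))
    lo = power_output(v * (1 - rel_speed_err))
    return (hi - lo) / (2 * p) if p else 0.0

print(f"cubic region: {propagated_error(8.0, 0.10):.1%}")
print(f"rated region: {propagated_error(20.0, 0.10):.1%}")
```

In the cubic region a speed error is amplified roughly threefold, while at rated power the curve is flat and the error vanishes; an aggregate figure like the 5% reported above reflects averaging over the whole operating range.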
Intervention strategies for the management of human error
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1993-01-01
This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of keeping an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.
NASA Technical Reports Server (NTRS)
Chen, Chien-Chung; Gardner, Chester S.
1989-01-01
Given the rms transmitter pointing error and the desired probability of bit error (PBE), it can be shown that an optimal transmitter antenna gain exists which minimizes the required transmitter power. Given the rms local oscillator tracking error, an optimum receiver antenna gain can be found which optimizes the receiver performance. The impact of pointing and tracking errors on the design of direct-detection pulse-position modulation (PPM) and heterodyne noncoherent frequency-shift keying (NCFSK) systems is then analyzed in terms of constraints on the antenna size and the power penalty incurred. It is shown that in the limit of large spatial tracking errors, the advantage in receiver sensitivity for the heterodyne system is quickly offset by the smaller antenna gain and the higher power penalty due to tracking errors. In contrast, for systems with small spatial tracking errors, the heterodyne system is superior because of the higher receiver sensitivity.
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors
Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.
2013-01-01
The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. 
These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
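Both skill measures used to evaluate the logistic contrail models reduce to counts in a 2×2 contingency table of forecast versus observation. A sketch with hypothetical counts (the 5000 total mirrors the synthetic-observation count above, but the cell values are invented):

```python
def forecast_scores(hits, misses, false_alarms, correct_negatives):
    """Percent correct (PC) and Hanssen-Kuipers discriminant (HKD)
    for a dichotomous yes/no forecast."""
    n = hits + misses + false_alarms + correct_negatives
    pc = (hits + correct_negatives) / n
    # HKD = hit rate minus false-alarm rate; 1 is perfect, 0 is no skill.
    hkd = hits / (hits + misses) - false_alarms / (false_alarms + correct_negatives)
    return pc, hkd

pc, hkd = forecast_scores(hits=420, misses=80, false_alarms=350,
                          correct_negatives=4150)
print(f"PC={pc:.3f} HKD={hkd:.3f}")
```

Because PC rewards correct negatives, it favors a conservative 0.5 threshold when contrails are rare, whereas HKD balances hit rate against false-alarm rate and favors the climatological threshold, consistent with the pattern reported above.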
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
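For reference, the Bland-Altman bias and 95% limits of agreement discussed above are just the mean and standard deviation of the paired differences; the data below are invented for illustration:

```python
import statistics

def bland_altman(x, y):
    """Return the mean difference (bias) and 95% limits of agreement
    for paired measurements from two methods."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired readings from two measurement methods
method1 = [10.1, 12.3, 9.8, 11.5, 10.9, 13.0]
method2 = [10.4, 12.0, 10.1, 11.2, 11.3, 12.7]
bias, (lo, hi) = bland_altman(method1, method2)
print(f"bias={bias:.3f}, LoA=({lo:.3f}, {hi:.3f})")
```

The correlation problem the paper raises arises because each difference (x − y) and each average (x + y)/2 share the same measurement errors, so the points in a Bland-Altman plot are not error-independent.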
NASA Astrophysics Data System (ADS)
Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar
2017-06-01
In this paper, we investigate the performance of dual-hop free space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that Channel State Information is available at both transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the k-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. Further, BER results are compared for different modulation schemes.
Validation of Metrics as Error Predictors
NASA Astrophysics Data System (ADS)
Mendling, Jan
In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
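The logistic regression model of Section 5.3 maps metric values to an error probability through the logistic function. The coefficients and metric names below are placeholders for illustration, not the fitted values from the chapter:

```python
import math

def error_probability(metrics, coef, intercept):
    """Logistic model: P(error) = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(b * x for b, x in zip(coef, metrics))
    return 1 / (1 + math.exp(-z))

# Hypothetical metrics for one EPC model (e.g. size, connector mismatch)
p = error_probability([48, 3], coef=[0.02, 0.9], intercept=-4.0)
print(f"P(error) = {p:.3f}")
```

Cross-validating such a model on an independent sample, as Section 5.4 does, guards against the coefficients merely memorizing the training EPCs.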
Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.
Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.
Youth Attitude Tracking Study II Wave 17 -- Fall 1986.
1987-06-01
Contents include: Preface; Segmentation Analyses; Methodology of YATS II (Sampling Design Overview); Appendix A: Sampling Design, Estimation Procedures and Estimated Sampling Errors; Appendix B: Data Collection Procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2012 CFR
2012-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2011 CFR
2011-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2010 CFR
2010-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Astrometry in the globular cluster M13. II. Membership probabilities from old proper motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cudworth, K.
Astrometric cluster membership probabilities have been derived from proper motions measured by other authors for stars in the region of the globular cluster M13. Several stars of individual interest are discussed.
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as a qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
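The (3×3)-dimensional shift and phase errors mentioned in point (i) are the generalized Pauli operators X and Z on a qutrit. The sketch below (a generic illustration, not the paper's nine-qutrit code) builds them and checks the commutation relation ZX = ωXZ with ω = e^(2πi/3):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

# Generalized Pauli operators on one qutrit:
# X3 shifts basis states |k> -> |k+1 mod 3>; Z3 applies the phase w**k to |k>.
X3 = [[0, 0, 1],
      [1, 0, 0],
      [0, 1, 0]]
Z3 = [[1, 0, 0],
      [0, w, 0],
      [0, 0, w ** 2]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, state):
    return [sum(m[i][j] * state[j] for j in range(3)) for i in range(3)]

print(apply(X3, [1, 0, 0]))   # |0> shifted to |1>
```

Products of powers of X3 and Z3 (up to phase) span all (3×3)-dimensional errors, which is why correcting shift and phase errors suffices for arbitrary single-qutrit Pauli errors.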
Opioid receptors mediate direct predictive fear learning: evidence from one-trial blocking.
Cole, Sindy; McNally, Gavan P
2007-04-01
Pavlovian fear learning depends on predictive error, so that fear learning occurs when the actual outcome of a conditioning trial exceeds the expected outcome. Previous research has shown that opioid receptors, including mu-opioid receptors in the ventrolateral quadrant of the midbrain periaqueductal gray (vlPAG), mediate such predictive fear learning. Four experiments reported here used a within-subject one-trial blocking design to study whether opioid receptors mediate a direct or indirect action of predictive error on Pavlovian association formation. In Stage I, rats were trained to fear conditioned stimulus (CS) A by pairing it with shock. In Stage II, CSA and CSB were co-presented once and co-terminated with shock. Two novel stimuli, CSC and CSD, were also co-presented once and co-terminated with shock in Stage II. The results showed one-trial blocking of fear learning (Experiment 1) as well as one-trial unblocking of fear learning when Stage II training employed a higher intensity footshock than was used in Stage I (Experiment 2). Systemic administrations of the opioid receptor antagonist naloxone (Experiment 3) or intra-vlPAG administrations of the selective mu-opioid receptor antagonist CTAP (Experiment 4) prior to Stage II training prevented one-trial blocking. These results show that opioid receptors mediate the direct actions of predictive error on Pavlovian association formation.
The efficacy of three objective systems for identifying beef cuts that can be guaranteed tender.
Wheeler, T L; Vote, D; Leheska, J M; Shackelford, S D; Belk, K E; Wulf, D M; Gwartney, B L; Koohmaraie, M
2002-12-01
The objective of this study was to determine the accuracy of three objective systems (prototype BeefCam, colorimeter, and slice shear force) for identifying guaranteed tender beef. In Phase I, 308 carcasses (105 Top Choice, 101 Low Choice, and 102 Select) from two commercial plants were tested. In Phase II, 400 carcasses (200 rolled USDA Select and 200 rolled USDA Choice) from one commercial plant were tested. The three systems were evaluated based on progressive certification of the longissimus as "tender" in 10% increments (the best 10, 20, 30%, etc., certified as "tender" by each technology; 100% certification would mean no sorting for tenderness). In Phase I, the error (percentage of carcasses certified as tender that had Warner-Bratzler shear force ≥5 kg at 14 d postmortem) for 100% certification using all carcasses was 14.1%. All certification levels up to 80% (slice shear force) and up to 70% (colorimeter) had less error (P < 0.05) than 100% certification. Errors at all levels of certification by prototype BeefCam (13.8 to 9.7%) were not different (P > 0.05) from 100% certification. In Phase I, the error for 100% certification for USDA Select carcasses was 30.7%. For Select carcasses, all slice shear force certification levels up to 60% (0 to 14.8%) had less error (P < 0.05) than 100% certification. For Select carcasses, errors at all levels of certification by colorimeter (20.0 to 29.6%) and by BeefCam (27.5 to 31.4%) were not different (P > 0.05) from 100% certification. In Phase II, the error for 100% certification for all carcasses was 9.3%. For all levels of slice shear force certification less than 90% (for all carcasses) or less than 80% (Select carcasses), errors in tenderness certification were less than those for 100% certification (P < 0.05). In Phase II, for all carcasses or Select carcasses, colorimeter and prototype BeefCam certifications did not significantly reduce errors (P > 0.05) compared to 100% certification.
Thus, the direct measure of tenderness provided by slice shear force results in more accurate identification of "tender" beef carcasses than either of the indirect technologies, prototype BeefCam, or colorimeter, particularly for USDA Select carcasses. As tested in this study, slice shear force, but not the prototype BeefCam or colorimeter systems, accurately identified "tender" beef.
VHF command system study. [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems
NASA Technical Reports Server (NTRS)
Gee, T. H.; Geist, J. M.
1973-01-01
Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.
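Standard AWGN-channel expressions cover the candidate downlink modulations compared above; this is a textbook sketch that ignores the clock/data power split the report actually analyzes:

```python
import math

def ber_coherent_psk(eb_n0):
    """Bit error probability of coherent binary PSK: Q(sqrt(2 Eb/N0)),
    written via the complementary error function."""
    return 0.5 * math.erfc(math.sqrt(eb_n0))

def ber_noncoherent_fsk(eb_n0):
    """Bit error probability of noncoherent binary FSK: 0.5 * exp(-Eb/(2 N0))."""
    return 0.5 * math.exp(-eb_n0 / 2)

for db in (6, 9, 12):
    eb_n0 = 10 ** (db / 10)   # convert dB to a linear Eb/N0 ratio
    print(f"{db:2d} dB: PSK={ber_coherent_psk(eb_n0):.2e}  "
          f"NCFSK={ber_noncoherent_fsk(eb_n0):.2e}")
```

The several-dB advantage of coherent PSK at a given bit error probability is one reason it is attractive for command uplinks where verification reliability matters.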
Multistage classification of multispectral Earth observational data: The design approach
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Muasher, M. J.; Landgrebe, D. A.
1981-01-01
An algorithm is proposed which predicts the optimal features at every node in a binary tree procedure. The algorithm estimates the probability of error by approximating the area under the likelihood ratio function for two classes and taking into account the number of training samples used in estimating each of these two classes. Some results on feature selection techniques, particularly in the presence of a very limited set of training samples, are presented. Results are shown comparing probabilities of error predicted by the proposed algorithm, as a function of dimensionality, with experimental observations for aircraft and LANDSAT data. Results are obtained for both real and simulated data. Finally, two binary tree examples which use the algorithm are presented to illustrate the usefulness of the procedure.
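A common closed-form stand-in for the two-class error estimate described above is the Bhattacharyya bound on the Bayes error for Gaussian classes. The abstract does not specify this exact estimator, so treat the one-dimensional sketch below as illustrative only:

```python
import math

def bhattacharyya_error_bound(mu1, s1, mu2, s2, p1=0.5):
    """Upper bound on the two-class Bayes error for 1-D Gaussian classes
    N(mu1, s1^2) and N(mu2, s2^2) with prior p1 on class 1:
    Pe <= sqrt(p1 * p2) * exp(-B), B the Bhattacharyya distance."""
    b = (mu1 - mu2) ** 2 / (4 * (s1 ** 2 + s2 ** 2)) \
        + 0.5 * math.log((s1 ** 2 + s2 ** 2) / (2 * s1 * s2))
    return math.sqrt(p1 * (1 - p1)) * math.exp(-b)

# Two unit-variance classes separated by two standard deviations
print(f"error bound = {bhattacharyya_error_bound(0.0, 1.0, 2.0, 1.0):.4f}")
```

Such bounds loosen as training samples shrink, which motivates the sample-size correction the proposed algorithm applies.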
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
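The Huffman stage can be sketched as follows. Prediction makes the correction values cluster near zero, which is exactly the skewed distribution Huffman coding rewards with short codewords (the example data are invented):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a prefix-free Huffman code from observed symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap items: [frequency, tiebreak id, {symbol: partial code}]
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]

# Prediction corrections cluster near zero, so 0 gets the shortest code
corrections = [0, 0, 0, 0, 1, -1, 0, 2, 0, -1]
print(huffman_codes(corrections))
```

With this skew, the most common correction costs one bit instead of the two bits a fixed-length code over four symbols would need, which is the mechanism behind the reported compression gain.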
Do juries meet our expectations?
Arkes, Hal R; Mellers, Barbara A
2002-12-01
Surveys of public opinion indicate that people have high expectations for juries. When it comes to serious crimes, most people want errors of convicting the innocent (false positives) or acquitting the guilty (false negatives) to fall well below 10%. Using expected utility theory, Bayes' Theorem, signal detection theory, and empirical evidence from detection studies of medical decision making, eyewitness testimony, and weather forecasting, we argue that the frequency of mistakes probably far exceeds these "tolerable" levels. We are not arguing against the use of juries. Rather, we point out that a closer look at jury decisions reveals a serious gap between what we expect from juries and what probably occurs. When deciding issues of guilt and/or punishing convicted criminals, we as a society should recognize and acknowledge the abundance of error.
Heuristics and Cognitive Error in Medical Imaging.
Itri, Jason N; Patel, Sohil H
2018-05-01
The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
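The letter's exact procedure is not reproduced here; as a hedged sketch of the underlying idea, one can screen measurement residuals for non-Gaussianity using sample skewness and excess kurtosis, both of which are near zero for Gaussian data (the residual series below are synthetic):

```python
import random

def skew_and_excess_kurtosis(xs):
    """Sample skewness and excess kurtosis; both are approximately 0 for Gaussian data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3.0

random.seed(1)
gaussian_residuals = [random.gauss(0.0, 0.01) for _ in range(50000)]
# A heavy-tailed mixture, mimicking occasional large PMU measurement errors.
heavy_tailed = [random.gauss(0.0, 0.01) * (3.0 if random.random() < 0.05 else 1.0)
                for _ in range(50000)]
print(skew_and_excess_kurtosis(gaussian_residuals))   # both close to 0
print(skew_and_excess_kurtosis(heavy_tailed))         # clearly positive excess kurtosis
```

Heavy tails of this kind are precisely what would make Gaussian-based state estimators and monitors overconfident.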
Bernard, A M; Burgot, J L
1981-12-01
The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.
"The Errors Were the Results of Errors": Promoting Good Writing by Bad Example
ERIC Educational Resources Information Center
Korsunsky, Boris
2010-01-01
We learn best by example--this adage is probably as old as teaching itself. In my own classroom, I have found that very often the students learn best from the "negative" examples. Perhaps, this shouldn't come as a surprise at all. After all, we don't react strongly to the norm--but an obvious deviation from the norm may attract our attention and…
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values that best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
NASA Technical Reports Server (NTRS)
Wiese, Wolfgang L.; Fuhr, J. R.
2006-01-01
We have undertaken new critical assessments and tabulations of the transition probabilities of important lines of these spectra. For Fe I and Fe II, we have carried out a complete re-assessment and update, and we have relied almost exclusively on the literature of the last 15 years. Our updates for C I, C II and N I, N II primarily address the persistent lower transitions as well as a greatly expanded number of forbidden lines (M1, M2, and E2). For these transitions, sophisticated multiconfiguration Hartree-Fock (MCHF) calculations have been recently carried out, which have yielded data considerably improved and often appreciably different from our 1996 NIST compilation.
An investigation into the effect of type I and type II diabetes duration on employment and wages.
Minor, Travis
2013-12-01
Using data from the National Longitudinal Survey of Youth 1979, the current study examines the effect of type I and type II diabetes on employment status and wages. The results suggest that both the probability of employment and wages are negatively related to the number of years since the initial diagnosis of diabetes. Moreover, the effect of diabetes duration on the probability of employment appears to be nonlinear, peaking around 16 years for females and 10 years for males. A similar negative effect on wages is found only in male diabetics. Finally, the results suggest that failure to distinguish between type I and type II diabetics may lead to some counterintuitive results. Published by Elsevier B.V.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, R.; Mondot, J.; Stankovski, Z.
1988-11-01
APOLLO II is a new, multigroup transport code under development at the Commissariat a l'Energie Atomique. The code has a modular structure and uses sophisticated software for data structuralization, dynamic memory management, data storage, and user macrolanguage. This paper gives an overview of the main methods used in the code for (a) multidimensional collision probability calculations, (b) leakage calculations, and (c) homogenization procedures. Numerical examples are given to demonstrate the potential of the modular structure of the code and the novel multilevel flat-flux representation used in the calculation of the collision probabilities.
Estimating the Probability of a Diffusing Target Encountering a Stationary Sensor.
1985-07-01
NPS55-85-013, Naval Postgraduate School, Monterey, California. Technical report.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights, we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
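A quasi-cyclic parity-check matrix is built from circulant permutation blocks, and deleting block rows raises the code rate. The sketch below illustrates only that structural idea with arbitrary shifts; it is not the high-girth construction of Hagiwara et al. or the paper's actual Hc/Hd pair:

```python
import numpy as np

def circulant_permutation(p, shift):
    """The p x p identity matrix cyclically shifted by `shift` columns."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_parity_check(p, shifts):
    """Stack circulant blocks into a quasi-cyclic parity-check matrix.
    shifts[j][l] is the cyclic shift of block (j, l)."""
    return np.block([[circulant_permutation(p, s) for s in row] for row in shifts])

H = qc_parity_check(5, [[0, 1, 2], [0, 2, 4]])   # shifts chosen arbitrarily
print(H.shape)            # (10, 15): a 2x3 grid of 5x5 circulants
# Deleting a block row removes checks while keeping the code length,
# which raises the code rate.
H_punctured = H[:5, :]
print(H_punctured.shape)  # (5, 15)
```

In the CSS setting of the paper, two such matrices with orthogonal row spaces handle the X and Z error sectors separately.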
Analysis of error type and frequency in apraxia of speech among Portuguese speakers.
Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo
2010-01-01
Most studies characterizing errors in the speech of patients with apraxia involve the English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese, 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and their frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis errors, in descending order of frequency. Omission errors were among the most common, whereas addition errors were infrequent. These findings differed from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types, the inclusion of speakers with apraxia secondary to aphasia, and differences between the structures of Portuguese and English in syllable-onset complexity and its effect on motor control. The frequencies of omission and addition errors observed differed from those reported for speakers of English.
Probability theory, not the very guide of life.
Juslin, Peter; Nilsson, Håkan; Winman, Anders
2009-10-01
Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
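The authors' argument rests on simulation: when the component probabilities are themselves known only noisily, a linear additive rule can estimate conjunctions about as well as multiplication. The sketch below is a simplified illustration of that comparison, not the authors' actual simulation; the noise level and the particular additive rule are assumptions:

```python
import random

random.seed(7)

def noisy(p, sd=0.15):
    """A noisy internal estimate of probability p, clipped away from 0 and 1."""
    return min(0.99, max(0.01, random.gauss(p, sd)))

trials = 20000
err_mult = err_add = 0.0
for _ in range(trials):
    p1, p2 = random.random(), random.random()
    truth = p1 * p2                       # true conjunction probability
    e1, e2 = noisy(p1), noisy(p2)         # the same noisy readings feed both rules
    est_mult = e1 * e2                    # normative multiplicative integration
    est_add = 0.5 * (e1 + e2) - 0.25      # an illustrative linear additive rule
    err_mult += (est_mult - truth) ** 2
    err_add += (est_add - truth) ** 2
print(err_mult / trials, err_add / trials)  # mean squared errors of the two rules
```

With approximate inputs, the gap between the two rules narrows sharply, which is the bounded-rationality point the article makes.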
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is therefore most crucial for the gear and less so for the pinion. Previously announced in STAR as N82-30552.
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1990-01-01
An expurgated upper bound on the event error probability of trellis coded modulation is presented. This bound is used to derive a lower bound on the minimum achievable free Euclidean distance d sub (free) of trellis codes. It is shown that the dominant parameters for both bounds, the expurgated error exponent and the asymptotic d sub (free) growth rate, respectively, can be obtained from the cutoff rate R sub 0 of the transmission channel by a simple geometric construction, making R sub 0 the central parameter for finding good trellis codes. Several constellations are optimized with respect to the bounds.
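The cutoff rate that anchors both bounds has a simple closed form in the binary-antipodal soft-decision AWGN case: R0 = 1 - log2(1 + exp(-Es/N0)). This is the standard textbook expression, offered here only to illustrate the quantity; it is not the general geometric construction of the abstract:

```python
import math

def cutoff_rate_bpsk_awgn(es_n0_db):
    """Channel cutoff rate R0 (bits/channel use) for binary antipodal signaling
    on the AWGN channel with soft decisions: R0 = 1 - log2(1 + exp(-Es/N0))."""
    es_n0 = 10.0 ** (es_n0_db / 10.0)   # convert dB to linear
    return 1.0 - math.log2(1.0 + math.exp(-es_n0))

for snr_db in (-5, 0, 5, 10):
    print(f"Es/N0 = {snr_db:3d} dB -> R0 = {cutoff_rate_bpsk_awgn(snr_db):.4f}")
```

R0 rises monotonically from 0 toward 1 bit/use with SNR, and historically served as a practical design target below channel capacity.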
Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.
Bahr, Ruth Huntley; Silliman, Elaine R; Berninger, Virginia W; Dow, Michael
2012-12-01
A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctively increase the memory requirement of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; and (3) without applying any strategy to eliminate learning errors, there is a threshold learning error rate beyond which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
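A minimal naming-game sketch conveys the interaction rule: the speaker utters a word (inventing one if its lexicon is empty); on a match both agents collapse to that word, otherwise the hearer learns it. The version below assumes a complete graph and a simplistic word-mutation model for learning errors, so it is an illustration of the mechanism, not the paper's NGLE model or its network topologies:

```python
import random

random.seed(42)

def naming_game(n_agents=30, error_rate=0.0, max_rounds=100000):
    """Minimal naming game on a complete graph. With probability `error_rate`
    the hearer mis-learns the transmitted word, storing a mutated copy.
    Returns the number of interactions to global consensus, or None."""
    lexicons = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not lexicons[speaker]:
            lexicons[speaker].add(next_word)   # invent a new word
            next_word += 1
        word = random.choice(sorted(lexicons[speaker]))
        if random.random() < error_rate:
            lexicons[hearer].add(-word - 1)    # mutated (mis-learned) copy
        elif word in lexicons[hearer]:
            lexicons[speaker] = {word}         # success: both collapse to the word
            lexicons[hearer] = {word}
        else:
            lexicons[hearer].add(word)         # failure: hearer learns the word
        if all(lex == lexicons[0] and len(lex) == 1 for lex in lexicons):
            return step + 1
    return None

print(naming_game(error_rate=0.0))  # error-free play converges to consensus
```

Raising `error_rate` inflates the transient lexicon sizes, mirroring finding (1) above, and past some threshold consensus fails within the round budget.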
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
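The paper's mode analysis uses full electromagnetic simulation, but the direction of the effect follows from the ideal rectangular-cavity formula f = (c/2)·sqrt((m/a)² + (n/b)² + (p/d)²): shrinking the effective lateral dimensions (as a dense grid of grounded pins does) pushes box modes up in frequency, away from the qubits. The dimensions below are illustrative, not those of the packaging in the paper:

```python
import math

C = 299792458.0  # speed of light in vacuum (m/s); vacuum-filled box assumed

def cavity_mode_ghz(a, b, d, m, n, p):
    """Resonance frequency (GHz) of mode (m, n, p) of an ideal rectangular
    cavity with inner dimensions a x b x d in meters."""
    return (C / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2 + (p / d) ** 2) / 1e9

# A 20 mm x 3 mm x 20 mm box: lowest (TE101-like) mode, around 10.6 GHz.
f_full = cavity_mode_ghz(0.020, 0.003, 0.020, 1, 0, 1)
# Halving the lateral dimensions roughly doubles the mode frequency.
f_pinned = cavity_mode_ghz(0.010, 0.003, 0.010, 1, 0, 1)
print(f_full, f_pinned)
```

Since transmon frequencies typically sit below ~8 GHz, detuning the lowest box mode well above that band suppresses both coherent leakage and mode-induced decoherence, which is the goal of the half-wave fencing and antinode pinning schemes.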
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose does not rely on cosmological priors or parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.