1983-03-01
An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity. W. G. Szymczak and I. Babuška, Laboratory for ...
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
Type I and Type II error concerns in fMRI research: re-balancing the scale
Cunningham, William A.
2009-01-01
Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
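The joint intensity and cluster-extent threshold described above is easy to prototype. The sketch below (Python) keeps a voxel only if its p-value passes the intensity cut and it belongs to a contiguous suprathreshold cluster of at least ten voxels; the array shape, threshold values, and use of scipy.ndimage.label are illustrative assumptions, not taken from the article.

import numpy as np
from scipy import ndimage

def joint_threshold(p_map, p_thresh=0.005, min_extent=10):
    # Voxel-wise intensity threshold
    supra = p_map < p_thresh
    # Connected components among suprathreshold voxels (face connectivity)
    labels, n_clusters = ndimage.label(supra)
    keep = np.zeros_like(supra)
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if cluster.sum() >= min_extent:   # cluster-extent threshold
            keep |= cluster
    return keep

# Illustrative random p-values for a small 20x20x20 volume
rng = np.random.default_rng(0)
mask = joint_threshold(rng.uniform(size=(20, 20, 20)))
print(mask.sum(), "voxels survive the joint threshold")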
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
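For readers unfamiliar with the required information size, a minimal sketch is given below (Python): a conventional two-group sample-size formula for a binary outcome is inflated by 1/(1 − D²) to obtain a diversity-adjusted required information size. The formula layout and the example numbers are illustrative assumptions, not taken from the article.

from scipy.stats import norm

def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
    # Event proportion in the experimental group under the assumed relative risk reduction
    p_exp = p_control * (1.0 - rrr)
    p_bar = (p_control + p_exp) / 2.0
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(1.0 - beta)
    # Unadjusted total number of participants (both groups)
    n = 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / (p_control - p_exp) ** 2
    # Diversity (D^2) adjustment
    return n / (1.0 - diversity)

# Example: 10% control event rate, 20% relative risk reduction, D^2 = 0.25
print(round(required_information_size(0.10, 0.20, diversity=0.25)))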
Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke
2011-08-01
TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
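A minimal sketch of the comparison discussed above, contrasting ordinary least squares with a Huber robust fit for a single gene-SNP pair; the simulated dosage coding, heavy-tailed noise, and use of the statsmodels package are assumptions for illustration only.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
dosage = rng.integers(0, 3, size=n).astype(float)        # allelic dosage 0/1/2
expr = 0.3 * dosage + rng.standard_t(df=3, size=n)       # heavy-tailed noise
expr[:5] += 8.0                                          # a few outlying observations

X = sm.add_constant(dosage)
ols = sm.OLS(expr, X).fit()                              # conventional linear model
rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()  # robust (Huber) linear model

print("OLS slope, p-value:", ols.params[1], ols.pvalues[1])
print("RLM slope, p-value:", rlm.params[1], rlm.pvalues[1])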
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate the type I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type I and type II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type I error rate.
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Both lens formulae were partially derived and Gauss error analysis was used for examination of the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92 % within ±1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97 % within ±1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95 % within ±1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated according to ΔP = √[(ΔL × (−4.206))² + (ΔVK × 0.9496)² + (ΔDC × (−1.4950))²] (ΔL: error in measuring axial length, ΔVK: error in measuring anterior chamber depth, ΔDC: error in measuring corneal power), and the propagated error of the SRK-II formula according to ΔP = √[(ΔL × (−2.5))² + (ΔDC × (−0.9))²]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
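As a quick numerical check, the propagated-error expressions quoted above can be evaluated directly. The sketch below (Python) uses assumed, illustrative measurement uncertainties for axial length, anterior chamber depth, and corneal power; these values are not from the study.

import math

def propagated_error_haigis(dL, dVK, dDC):
    # Partial-derivative coefficients as quoted in the abstract
    return math.sqrt((dL * -4.206) ** 2 + (dVK * 0.9496) ** 2 + (dDC * -1.4950) ** 2)

def propagated_error_srk2(dL, dDC):
    return math.sqrt((dL * -2.5) ** 2 + (dDC * -0.9) ** 2)

# Assumed uncertainties: 0.1 mm axial length, 0.1 mm anterior chamber depth, 0.25 D corneal power
print("Haigis :", propagated_error_haigis(0.1, 0.1, 0.25), "D")
print("SRK-II :", propagated_error_srk2(0.1, 0.25), "D")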
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy, but effective, method has been proposed to detect and quantify Pb(II) in the presence of Cd(II) based on a Bi/glassy carbon electrode (Bi/GCE) with the combination of a back propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV) without further electrode modification. The effects of Cd(II) at different concentrations on the stripping responses of Pb(II) were studied. The results indicate that the presence of Cd(II) will reduce the prediction precision of a direct calibration model. Therefore, a two-input and one-output BP-ANN was built for the optimization of a stripping voltammetric sensor, which considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was tested with regard to the mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, an analysis of real samples was performed to determine trace Pb(II) in some soil specimens with satisfactory results.
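A rough sketch of the two-input, one-output network idea is given below, with scikit-learn's MLPRegressor standing in for the paper's BP-ANN; the synthetic peak currents, the interference model, and the network size are all assumptions, not the authors' data or architecture.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(2)
n = 300
c_pb = rng.uniform(5, 100, n)                    # Pb(II) concentration (arbitrary units)
c_cd = rng.uniform(0, 100, n)                    # interfering Cd(II) concentration
i_pb = 0.8 * c_pb - 0.002 * c_pb * c_cd + rng.normal(0, 1, n)  # assumed nonlinear interference
i_cd = 0.6 * c_cd + rng.normal(0, 1, n)

X = np.column_stack([i_pb, i_cd])                # two inputs: stripping peak currents
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:200], c_pb[:200])                     # one output: Pb(II) concentration

pred = net.predict(X[200:])
print("MAE :", mean_absolute_error(c_pb[200:], pred))
print("RMSE:", mean_squared_error(c_pb[200:], pred) ** 0.5)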
U.S. Coast Guard SARSAT Final Evaluation Report. Volume II. Appendices.
DOT National Transportation Integrated Search
1987-03-01
Contents: Controlled Tests; Controlled Test Error Analysis, Processing of Westwind Data; Exercises and Homing Tests; Further Analysis of Controlled Tests; Sar Case Analysis Tables; Narratives of Real Distress Cases; RCC Response Scenarios; Workload A...
Online beam energy measurement of Beijing electron positron collider II linear accelerator
NASA Astrophysics Data System (ADS)
Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.
2016-02-01
This paper describes the online beam energy measurement of the Beijing Electron Positron Collider upgraded version II linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as serving as part of a beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.
Online beam energy measurement of Beijing electron positron collider II linear accelerator.
Wang, S; Iqbal, M; Liu, R; Chi, Y
2016-02-01
This paper describes the online beam energy measurement of the Beijing Electron Positron Collider upgraded version II linear accelerator (linac). It presents the calculation formula, gives a detailed error analysis, discusses the practical realization, and provides verification. The method measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation and performs much better than using a single BPM. The error analysis indicates that this online measurement has further potential uses, such as serving as part of a beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.
Strategic planning to reduce medical errors: Part I--diagnosis.
Waldman, J Deane; Smith, Howard L
2012-01-01
Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis suggests in Part II that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.
Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.
2018-05-01
Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
Lattice Commissioning Strategy Simulation for the B Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.; Whittum, D.; Yan, Y.
2011-08-26
To prepare for the PEP-II turn on, we have studied one commissioning strategy with simulated lattice errors. Features such as difference and absolute orbit analysis and correction are discussed. To prepare for the commissioning of the PEP-II injection line and high energy ring (HER), we have developed a system for on-line orbit analysis by merging two existing codes: LEGO and RESOLVE. With the LEGO-RESOLVE system, we can study the problem of finding quadrupole alignment and beam position (BPM) offset errors with simulated data. We have increased the speed and versatility of the orbit analysis process by using a command file written in a script language designed specifically for RESOLVE. In addition, we have interfaced the LEGO-RESOLVE system to the control system of the B-Factory. In this paper, we describe online analysis features of the LEGO-RESOLVE system and present examples of practical applications.
Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oboh, I., E-mail: innocentoboh@uniuyo.edu.ng; Aluyor, E.; Audu, T.
The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R²), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica
NASA Astrophysics Data System (ADS)
Oboh, I.; Aluyor, E.; Audu, T.
2015-03-01
The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
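A minimal sketch of the nonlinear kinetic fitting and error-function comparison described above, fitting a pseudo-second-order model to illustrative uptake data with scipy; the data points are invented, and the HYBRID and ARE definitions follow common adsorption-literature conventions, which may differ in detail from the authors'.

import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    # q(t) = k2*qe^2*t / (1 + k2*qe*t)
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Illustrative uptake data (mg/g versus minutes), not from the study
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q = np.array([4.1, 6.6, 9.0, 10.8, 11.5, 11.9, 12.1])

popt, _ = curve_fit(pseudo_second_order, t, q, p0=[12.0, 0.01])
q_fit = pseudo_second_order(t, *popt)
n, p = len(q), len(popt)

errsq = np.sum((q - q_fit) ** 2)                           # sum of squared errors
are = 100.0 / n * np.sum(np.abs(q - q_fit) / q)            # average relative error
hybrid = 100.0 / (n - p) * np.sum((q - q_fit) ** 2 / q)    # hybrid fractional error
r2 = 1.0 - errsq / np.sum((q - q.mean()) ** 2)             # coefficient of determination
print(dict(qe=popt[0], k2=popt[1], R2=r2, HYBRID=hybrid, ARE=are, ERRSQ=errsq))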
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Handschuh, R. F.; Zhang, J.
1988-01-01
A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and the bearing contact.
The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs
Bridwell, David A.; Steele, Vaughn R.; Maurer, J. Michael; Kiehl, Kent A.; Calhoun, Vince D.
2014-01-01
Background The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck’s Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Methods Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. Results We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). Limitations These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. Conclusions These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. PMID:25451400
The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs.
Bridwell, David A; Steele, Vaughn R; Maurer, J Michael; Kiehl, Kent A; Calhoun, Vince D
2015-02-01
The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck's Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. Copyright © 2014 Elsevier B.V. All rights reserved.
Alderete, John; Davies, Monica
2018-04-01
This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on the spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot be.
Evaluation and error apportionment of an ensemble of ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of models error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII.The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
Error of the slanted edge method for measuring the modulation transfer function of imaging systems.
Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu
2018-03-01
The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
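The chain the paper analyses (edge profile, its derivative, then the Fourier magnitude) can be sketched on a synthetic one-dimensional edge as below; the blur width, noise level, and the omission of the slant-angle projection and binning steps are simplifying assumptions.

import numpy as np

# Synthetic edge-spread function: a blurred step plus a little noise (assumed data)
x = np.arange(256)
esf = 0.5 * (1 + np.tanh((x - 128) / 3.0))
esf += np.random.default_rng(3).normal(0, 0.002, x.size)

lsf = np.gradient(esf)            # line-spread function = derivative of the ESF
lsf /= lsf.sum()                  # normalise so that MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))    # MTF = |Fourier transform| of the LSF
freq = np.fft.rfftfreq(lsf.size)  # spatial frequency in cycles/pixel

for f in (0.1, 0.25, 0.5):
    print(f, np.interp(f, freq, mtf))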
Analysis of Solar Spectral Irradiance Measurements from the SBUV/2-Series and the SSBUV Instruments
NASA Technical Reports Server (NTRS)
Cebula, Richard P.; DeLand, Matthew T.; Hilsenrath, Ernest
1997-01-01
During this period of performance, 1 March 1997 - 31 August 1997, the NOAA-11 SBUV/2 solar spectral irradiance data set was validated using both internal and external assessments. Initial quality checking revealed minor problems with the data (e.g. residual goniometric errors, which were manifest as differences between the two scans acquired each day). The sources of these errors were determined and the errors were corrected. Time series were constructed for selected wavelengths and the solar irradiance changes measured by the instrument were compared to a Mg II proxy-based model of short- and long-term solar irradiance variations. This analysis suggested that errors due to residual, uncorrected long-term instrument drift have been reduced to less than 1-2% over the entire 5.5 year NOAA-11 data record. Detailed statistical analysis was performed. This analysis, which will be documented in a manuscript now in preparation, conclusively demonstrates the evolution of solar rotation periodicity and strength during solar cycle 22.
Code of Federal Regulations, 2012 CFR
2012-10-01
... records, financial records, and automated data systems; (ii) The data are free from computational errors and are internally ...
Image defects from surface and alignment errors in grazing incidence telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.
1989-01-01
The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.
AQMEII3: the EU and NA regional scale program of the ...
The presentation builds on the work presented last year at the 14th CMAS meeting and it is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims at i) apportioning the error to the responsible processes through time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII. The National Exposure Research Laboratory (NERL) Computational Exposur
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
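A quick numeric illustration of this decline, for a two-sided one-sample z-test with an arbitrary effect size and alpha (both assumptions, not from the note):

from scipy.stats import norm

def type_ii_error(n, effect=0.3, sigma=1.0, alpha=0.05):
    # Beta for a two-sided z-test of a mean shift `effect`, ignoring the tiny opposite-tail term
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(z_a - effect / sigma * n ** 0.5)

for n in (10, 25, 50, 100, 200):
    print(n, type_ii_error(n))   # beta shrinks roughly exponentially in n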
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Impact of the HERA I+II combined data on the CT14 QCD global analysis
NASA Astrophysics Data System (ADS)
Dulat, S.; Hou, T.-J.; Gao, J.; Guzzi, M.; Huston, J.; Nadolsky, P.; Pumplin, J.; Schmidt, C.; Stump, D.; Yuan, C.-P.
2016-11-01
A brief description of the impact of the recent HERA run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of PDFs is given. The new CT14HERA2 PDFs at NLO and NNLO are illustrated. They employ the same parametrization used in the CT14 analysis, but with an additional shape parameter for describing the strange quark PDF. The HERA I+II data are reasonably well described by both CT14 and CT14HERA2 PDFs, and differences are smaller than the PDF uncertainties of the standard CT14 analysis. Both sets are acceptable when the error estimates are calculated in the CTEQ-TEA (CT) methodology, and the standard CT14 PDFs are recommended for continued use in the analysis of LHC measurements.
Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.
Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth
2016-06-01
Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
AQMEII3 evaluation of regional NA/EU simulations and ...
Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of models error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impac
Oddou-Muratorio, S; Houot, M-L; Demesure-Musch, B; Austerlitz, F
2003-12-01
The joint development of polymorphic molecular markers and paternity analysis methods provides new approaches to investigate ongoing patterns of pollen flow in natural plant populations. However, paternity studies are hindered by false paternity assignment and the nondetection of true fathers. To gauge the risk of these two types of errors, we performed a simulation study to investigate the impact on paternity analysis of: (i) the assumed values for the size of the breeding male population (NBMP), and (ii) the rate of scoring error in genotype assessment. Our simulations were based on microsatellite data obtained from a natural population of the entomophilous wild service tree, Sorbus torminalis (L.) Crantz. We show that an accurate estimate of NBMP is required to minimize both types of errors, and we assess the reliability of a technique used to estimate NBMP based on parent-offspring genetic data. We then show that scoring errors in genotype assessment only slightly affect the assessment of paternity relationships, and conclude that it is generally better to neglect the scoring error rate in paternity analyses within a nonisolated population.
Schenk, Emily R; Almirall, José R
2012-04-10
The elemental analysis of glass evidence has been established as a powerful discrimination tool for forensic analysts. Laser ablation inductively coupled plasma optical emission spectrometry (LA-ICP-OES) has been compared to laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and energy dispersive micro X-ray fluorescence spectroscopy (μXRF/EDS) as competing instrumentation for the elemental analysis of glass. The development of a method for the forensic analysis of glass coupling laser ablation to ICP-OES is presented for the first time. LA-ICP-OES has demonstrated comparable analytical performance to LA-ICP-MS based on the use of the element menu, Al (Al I 396.15 nm), Ba (Ba II 455.40 nm), Ca (Ca II 315.88 nm), Fe (Fe II 238.20 nm), Li (Li I 670.78 nm), Mg (Mg I 285.21 nm), Sr (Sr II 407.77 nm), Ti (Ti II 368.51 nm), and Zr (Zr II 343.82 nm). The relevant figures of merit, such as precision, accuracy and sensitivity, are presented and compared to LA-ICP-MS. A set of 41 glass samples was used to assess the discrimination power of the LA-ICP-OES method in comparison to other elemental analysis techniques. This sample set consisted of several vehicle glass samples that originated from the same source (inside and outside windshield panes) and several glass samples that originated from different vehicles. Different match criteria were used and compared to determine the potential for Type I and Type II errors. It was determined that broader match criteria are more applicable to the forensic comparison of glass because they can reduce the effect that micro-heterogeneity inherent in the glass fragments and a less than ideal sampling strategy can have on the interpretation of the results. Based on the test set reported here, a plus or minus four standard deviation (±4s) match criterion yielded the lowest possibility of Type I and Type II errors. The developed LA-ICP-OES method has been shown to perform similarly to LA-ICP-MS in the discrimination among different sources of glass while offering the advantages of a lower cost of acquisition and operation of analytical instrumentation, making ICP-OES a possible alternative elemental analysis method for the forensic laboratory. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
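The interval-overlap idea behind a ±4s match criterion can be sketched as below; the element menu, replicate counts, and numbers are placeholders, and real casework comparisons involve additional considerations not shown here.

import numpy as np

def match_4s(known_reps, questioned_reps, k=4.0):
    # True if, for every element, the questioned mean lies within the known mean +/- k*SD
    known = np.asarray(known_reps, float)       # shape: (replicates, elements)
    quest = np.asarray(questioned_reps, float)
    mu = known.mean(axis=0)
    sd = known.std(axis=0, ddof=1)
    return bool(np.all(np.abs(quest.mean(axis=0) - mu) <= k * sd))

# Placeholder element ratios measured in replicates of a known and a questioned fragment
known = [[1.02, 0.51, 8.1, 0.30, 0.045],
         [1.00, 0.50, 8.0, 0.31, 0.046],
         [1.01, 0.52, 8.2, 0.29, 0.044]]
questioned = [[1.03, 0.50, 8.1, 0.30, 0.045],
              [0.99, 0.51, 8.0, 0.31, 0.046]]
print(match_4s(known, questioned))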
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
Cheng, Jin-Mei; Li, Jian; Tang, Ji-Xin; Hao, Xiao-Xia; Wang, Zhi-Peng; Sun, Tie-Cheng; Wang, Xiu-Xia; Zhang, Yan; Chen, Su-Ren; Liu, Yi-Xun
2017-08-03
Mammalian oocyte chromosomes undergo 2 meiotic divisions to generate haploid gametes. The frequency of chromosome segregation errors during meiosis I increase with age. However, little attention has been paid to the question of how aging affects sister chromatid segregation during oocyte meiosis II. More importantly, how aneuploid metaphase II (MII) oocytes from aged mice evade the spindle assembly checkpoint (SAC) mechanism to complete later meiosis II to form aneuploid embryos remains unknown. Here, we report that MII oocytes from naturally aged mice exhibited substantial errors in chromosome arrangement and configuration compared with young MII oocytes. Interestingly, these errors in aged oocytes had no impact on anaphase II onset and completion as well as 2-cell formation after parthenogenetic activation. Further study found that merotelic kinetochore attachment occurred more frequently and could stabilize the kinetochore-microtubule interaction to ensure SAC inactivation and anaphase II onset in aged MII oocytes. This orientation could persist largely during anaphase II in aged oocytes, leading to severe chromosome lagging and trailing as well as delay of anaphase II completion. Therefore, merotelic kinetochore attachment in oocyte meiosis II exacerbates age-related genetic instability and is a key source of age-dependent embryo aneuploidy and dysplasia.
Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J.P.C.; Helton, J.C.
1999-04-01
The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
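A sketch of these increasingly complex pattern tests applied to a single input/output scatterplot is given below, using scipy.stats; the synthetic data, the quantile binning, and the 5x5 grid for the chi-square test are arbitrary illustrative choices.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 500)                      # sampled input variable
y = np.sin(3 * x) + rng.normal(0, 0.3, 500)     # model output (illustrative)

r, p_lin = stats.pearsonr(x, y)                 # (1) linear relationship
rho, p_mono = stats.spearmanr(x, y)             # (2) monotonic relationship

edges = np.quantile(x, [0.2, 0.4, 0.6, 0.8])    # five x-bins of equal size
groups = [y[np.digitize(x, edges) == k] for k in range(5)]
h, p_median = stats.kruskal(*groups)            # (3) trend in central tendency
iqrs = [stats.iqr(g) for g in groups]           # (4) variability across bins

counts, _, _ = np.histogram2d(x, y, bins=5)     # (5) deviation from randomness
chi2, p_rand, dof, expected = stats.chi2_contingency(counts)

print(p_lin, p_mono, p_median, iqrs, p_rand)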
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2013 CFR
2013-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2014 CFR
2014-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
10 CFR 74.45 - Measurements and measurement control.
Code of Federal Regulations, 2012 CFR
2012-01-01
... measurements, obtaining samples, and performing laboratory analyses for element concentration and isotope... of random error behavior. On a predetermined schedule, the program shall include, as appropriate: (i) Replicate analyses of individual samples; (ii) Analysis of replicate process samples; (iii) Replicate volume...
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: I) the collection of an environmental sample, II) component gas extraction from the sample through the application of gas separation chemistry, and III) the estimation of radioactivity of component gases.
Lystrom, David J.
1972-01-01
Various methods of verifying real-time streamflow data are outlined in part II. Relatively large errors (those greater than 20-30 percent) can be detected readily by use of well-designed verification programs for a digital computer, and smaller errors can be detected only by discharge measurements and field observations. The capability to substitute a simulated discharge value for missing or erroneous data is incorporated in some of the verification routines described. The routines represent concepts ranging from basic statistical comparisons to complex watershed modeling and provide a selection from which real-time data users can choose a suitable level of verification.
Silva, Felipe O.; Hemerly, Elder M.; Leite Filho, Waldemar C.
2017-01-01
This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions. PMID:28241494
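The observability arguments above apply to a linear error model of the form x' = Fx, z = Hx. A generic rank check is sketched below on a deliberately tiny placeholder model (not the authors' 12-state SINS model); with only velocity measured, the position-like state comes out unobservable, mirroring the conclusion quoted above.

import numpy as np

def observability_rank(F, H):
    # Rank of the observability matrix O = [H; HF; HF^2; ...; HF^(n-1)]
    n = F.shape[0]
    blocks = [H]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ F)
    O = np.vstack(blocks)
    return np.linalg.matrix_rank(O), n

# Placeholder 3-state error model: [position, velocity, accelerometer bias], velocity measured
F = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
H = np.array([[0.0, 1.0, 0.0]])

rank, n = observability_rank(F, H)
print(f"{rank} of {n} states observable; {n - rank} unobservable mode(s)")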
DOE Office of Scientific and Technical Information (OSTI.GOV)
Songaila, A.; Cowie, L. L., E-mail: acowie@ifa.hawaii.edu
2014-10-01
The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10⁻⁵, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (−0.59 ± 0.55) × 10⁻⁵ in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10⁻⁵, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (−0.47 ± 0.53) × 10⁻⁵. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (−0.01 ± 0.26) × 10⁻⁵. We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of unambiguously detecting variation in α using the MM method.
NASA Technical Reports Server (NTRS)
Chu, W. P.; Chiou, E. W.; Larsen, J. C.; Thomason, L. W.; Rind, D.; Buglia, J. J.; Oltmans, S.; Mccormick, M. P.; Mcmaster, L. M.
1993-01-01
The operational inversion algorithm used for the retrieval of the water-vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data is presented. Unlike the algorithm used for the retrieval of aerosol, O3, and NO2, the water-vapor retrieval algorithm accounts for the nonlinear relationship between the concentration versus the broad-band absorption characteristics of water vapor. Problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile are examined. Results are presented on the error analysis of the SAGE II water vapor retrieval, indicating that the SAGE II instrument produced good quality water vapor data.
2013-03-01
Northern Hemisphere Merged Cloud Analyses from the United States Air Force Cloud Depiction Forecasting System II, by Chandra M. Pasillas, March 2013. ... layering and typing to provide a vertical stratification of the cloud-filled pixels detected in Level 2. Level 3 output is remapped to the standard AFWA ... analyses are compared to one another to see if the most recent analysis also has the lowest estimated error. Optimum interpolation (OI) occurs when ...
Morrison, Maeve; Cope, Vicki; Murray, Melanie
2018-05-15
Medication errors remain a commonly reported clinical incident in health care as highlighted by the World Health Organization's focus to reduce medication-related harm. This retrospective quantitative analysis examined medication errors reported by staff using an electronic Clinical Incident Management System (CIMS) during a 3-year period from April 2014 to April 2017 at a metropolitan mental health ward in Western Australia. The aim of the project was to identify types of medication errors and the context in which they occur and to consider recourse so that medication errors can be reduced. Data were retrieved from the Clinical Incident Management System database and concerned medication incidents from categorized tiers within the system. Areas requiring improvement were identified, and the quality of the documented data captured in the database was reviewed for themes pertaining to medication errors. Content analysis provided insight into the following issues: (i) frequency of problem, (ii) when the problem was detected, and (iii) characteristics of the error (classification of drug/s, where the error occurred, what time the error occurred, what day of the week it occurred, and patient outcome). Data were compared to the state-wide results published in the Your Safety in Our Hands (2016) report. Results indicated several areas upon which quality improvement activities could be focused. These include the following: structural changes; changes to policy and practice; changes to individual responsibilities; improving workplace culture to counteract underreporting of medication errors; and improvement in safety and quality administration of medications within a mental health setting. © 2018 Australian College of Mental Health Nurses Inc.
Alastruey, Jordi; Hunt, Anthony A E; Weinberg, Peter D
2014-01-01
We present a novel analysis of arterial pulse wave propagation that combines traditional wave intensity analysis with identification of Windkessel pressures to account for the effect on the pressure waveform of peripheral wave reflections. Using haemodynamic data measured in vivo in the rabbit or generated numerically in models of human compliant vessels, we show that traditional wave intensity analysis identifies the timing, direction and magnitude of the predominant waves that shape aortic pressure and flow waveforms in systole, but fails to identify the effect of peripheral reflections. These reflections persist for several cardiac cycles and make up most of the pressure waveform, especially in diastole and early systole. Ignoring peripheral reflections leads to an erroneous indication of a reflection-free period in early systole and additional error in the estimates of (i) pulse wave velocity at the ascending aorta given by the PU–loop method (9.5% error) and (ii) transit time to a dominant reflection site calculated from the wave intensity profile (27% error). These errors decreased to 1.3% and 10%, respectively, when accounting for peripheral reflections. Using our new analysis, we investigate the effect of vessel compliance and peripheral resistance on wave intensity, peripheral reflections and reflections originating in previous cardiac cycles. PMID:24132888
Overview of ATMT and Analysis of Subphase IIB
1977-12-01
... relationships among maximum gunner error and target and vehicle motion. ... line-of-sight interruptions; significant difficulties were encountered early in the effort to digitize the analog gunner error data from this ... and is classified SECRET. The purpose of the Phase I effort was to identify an array of candidate maneuvers to be used ...
Objective Analysis of Oceanic Data for Coast Guard Trajectory Models Phase II
1997-12-01
... as outliers depends on the desired probability of false alarm (Pfa), which is the probability of marking a valid point as an outlier. Table 2-2 ... constructed to minimize the mean-squared prediction error of the grid point estimate under the constraint that the estimate is unbiased. The ... prediction error, e = Z1(s) − Σi c1i Z1(si) + Σi c2i Z2(si) (2.44), subject to the unbiasedness constraints Σi c1i = 1 (2.45) and Σi c2i = 0 (2.46). Denoting ...
"Bed Side" Human Milk Analysis in the Neonatal Intensive Care Unit: A Systematic Review.
Fusch, Gerhard; Kwan, Celia; Kotrri, Gynter; Fusch, Christoph
2017-03-01
Human milk analyzers can measure macronutrient content in native breast milk to tailor adequate supplementation with fortifiers. This article reviews all studies using milk analyzers, including (i) evaluation of devices, (ii) the impact of different conditions on the macronutrient analysis of human milk, and (iii) clinical trials to improve growth. Results lack consistency, potentially due to systematic errors in the validation of the device, or pre-analytical sample preparation errors like homogenization. It is crucial to introduce good laboratory and clinical practice when using these devices; otherwise a non-validated clinical usage can severely affect growth outcomes of infants. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysis of estimation algorithms for CDTI and CAS applications
NASA Technical Reports Server (NTRS)
Goka, T.
1985-01-01
Estimation algorithms for Cockpit Display of Traffic Information (CDTI) and Collision Avoidance System (CAS) applications were analyzed and/or developed. The algorithms are based on actual or projected operational and performance characteristics of an Enhanced TCAS II traffic sensor developed by Bendix and the Federal Aviation Administration. Three algorithm areas are examined and discussed: horizontal (x and y) position, range, and altitude estimation. Raw estimation errors are quantified using Monte Carlo simulations developed for each application; the raw errors are then used to infer impacts on the CDTI and CAS applications. Applications of smoothing algorithms to CDTI problems are also discussed briefly. Technical conclusions are summarized based on the analysis of simulation results.
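To make the Monte Carlo error-quantification step above concrete, the following minimal sketch (not the Bendix/FAA sensor model; the noise level, filter gains, and closing geometry are invented for illustration) simulates noisy range measurements of a closing target, runs a simple alpha-beta range estimator, and summarizes the raw estimation error over many trials:

    import numpy as np

    rng = np.random.default_rng(0)

    def run_trial(n_steps=60, dt=1.0, sigma_range=30.0, alpha=0.5, beta=0.1):
        """One trial: estimate the range of a target closing at constant rate
        with an alpha-beta filter fed by noisy range measurements (values hypothetical)."""
        true_range, true_rate = 10_000.0, -50.0      # m, m/s
        est_range, est_rate = 10_000.0, 0.0          # filter initial state
        errors = []
        for _ in range(n_steps):
            true_range += true_rate * dt
            z = true_range + rng.normal(0.0, sigma_range)   # noisy measurement
            pred = est_range + est_rate * dt                # predict
            resid = z - pred                                # innovation
            est_range = pred + alpha * resid                # correct position
            est_rate = est_rate + beta * resid / dt         # correct rate
            errors.append(est_range - true_range)
        return np.array(errors)

    # Quantify the raw estimation error over many independent trials
    trials = np.stack([run_trial() for _ in range(1000)])
    rms = np.sqrt(np.mean(trials[:, -20:] ** 2))            # steady-state RMS error
    print(f"steady-state RMS range error: {rms:.1f} m")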
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are highly general and widely applicable. As an example of this generality and applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use model-matching extended-back-EMF estimation methods.
The NASA F-15 Intelligent Flight Control Systems: Generation II
NASA Technical Reports Server (NTRS)
Buschbacher, Mark; Bosworth, John
2006-01-01
The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.
A two-point diagnostic for the H II galaxy Hubble diagram
NASA Astrophysics Data System (ADS)
Leaf, Kyle; Melia, Fulvio
2018-03-01
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
Low power and type II errors in recent ophthalmology research.
Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P
2016-10-01
To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out calculations. Finally, the proportion of articles with type II errors was calculated. β = 0.3 was set as the largest acceptable value for the probability of type II errors. In total, 280 articles were screened. Final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5 for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small differences between means varies across journals. It is, therefore, worthwhile for authors to mention the minimum clinically important difference for individual studies. Journals can consider publishing statistical guidelines for authors to use. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as robust studies and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time, and researchers and journals are being more attentive to statistical methodologies incorporated by studies. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
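The power calculation described above can be reproduced in outline with the noncentral t distribution; a minimal sketch follows, in which the group size, control mean, and standard deviation are invented placeholders rather than values from the reviewed trials:

    import numpy as np
    from scipy.stats import nct, t as t_dist

    def unpaired_t_power(n1, n2, diff, sd, alpha=0.05):
        """Two-sided power of an unpaired t test for a true mean difference `diff`."""
        df = n1 + n2 - 2
        ncp = diff / (sd * np.sqrt(1.0 / n1 + 1.0 / n2))    # noncentrality parameter
        t_crit = t_dist.ppf(1.0 - alpha / 2.0, df)
        return (1.0 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)

    control_mean, sd, n_per_group = 100.0, 40.0, 25         # hypothetical trial values
    for fraction in (0.20, 0.50):
        power = unpaired_t_power(n_per_group, n_per_group, fraction * control_mean, sd)
        print(f"{fraction:.0%} difference: power = {power:.2f}, "
              f"type II error rate = {1 - power:.2f}")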
Efficient error correction for next-generation sequencing of viral amplicons
2012-01-01
Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
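As a rough illustration of the k-mer idea behind an error-correction step (this is a toy sketch, not the published KEC implementation, and the read set, k, and frequency threshold are arbitrary), reads containing k-mers rarer than a threshold can be flagged as likely to carry sequencing errors:

    from collections import Counter

    def kmer_counts(reads, k=8):
        """Count every k-mer across the read set."""
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts

    def flag_error_reads(reads, k=8, min_count=3):
        """Flag reads containing any k-mer rarer than `min_count`; a real corrector
        would go on to replace such k-mers with a close, frequent neighbour."""
        counts = kmer_counts(reads, k)
        flagged = []
        for read in reads:
            kmers = (read[i:i + k] for i in range(len(read) - k + 1))
            if any(counts[km] < min_count for km in kmers):
                flagged.append(read)
        return flagged

    reads = ["ACGTACGTACGTACGT"] * 20 + ["ACGTACGTACGAACGT"]  # one read with a likely error
    print(flag_error_reads(reads))                            # only the variant read is flagged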
Christiansen, Mark P; Klaff, Leslie J; Brazg, Ronald; Chang, Anna R; Levy, Carol J; Lam, David; Denham, Douglas S; Atiee, George; Bode, Bruce W; Walters, Steven J; Kelley, Lynne; Bailey, Timothy S
2018-03-01
Persistent use of real-time continuous glucose monitoring (CGM) improves diabetes control in individuals with type 1 diabetes (T1D) and type 2 diabetes (T2D). PRECISE II was a nonrandomized, blinded, prospective, single-arm, multicenter study that evaluated the accuracy and safety of the implantable Eversense CGM system among adult participants with T1D and T2D (NCT02647905). The primary endpoint was the mean absolute relative difference (MARD) between paired Eversense and Yellow Springs Instrument (YSI) reference measurements through 90 days postinsertion for reference glucose values from 40 to 400 mg/dL. Additional endpoints included Clarke Error Grid analysis and sensor longevity. The primary safety endpoint was the incidence of device-related or sensor insertion/removal procedure-related serious adverse events (SAEs) through 90 days postinsertion. Ninety participants received the CGM system. The overall MARD value against reference glucose values was 8.8% (95% confidence interval: 8.1%-9.3%), which was significantly lower than the prespecified 20% performance goal for accuracy (P < 0.0001). Ninety-three percent of CGM values were within 20/20% of reference values over the total glucose range of 40-400 mg/dL. Clarke Error Grid analysis showed 99.3% of samples in the clinically acceptable error zones A (92.8%) and B (6.5%). Ninety-one percent of sensors were functional through day 90. One related SAE (1.1%) occurred during the study for removal of a sensor. The PRECISE II trial demonstrated that the Eversense CGM system provided accurate glucose readings through the intended 90-day sensor life with a favorable safety profile.
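The accuracy metrics reported above can be illustrated with a short sketch; the paired sensor and reference values below are invented, and the 100 mg/dL cutoff used for the 20 mg/dL / 20% agreement rule is an assumption for the example rather than the study's exact definition:

    import numpy as np

    def mard(cgm, ref):
        """Mean absolute relative difference (%) between paired CGM and reference values."""
        cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
        return 100.0 * np.mean(np.abs(cgm - ref) / ref)

    def within_20_20(cgm, ref, cutoff=100.0):
        """Fraction of readings within 20 mg/dL (ref < cutoff) or 20% (ref >= cutoff)
        of the reference; the cutoff is an assumption for this example."""
        cgm, ref = np.asarray(cgm, float), np.asarray(ref, float)
        tol = np.where(ref < cutoff, 20.0, 0.20 * ref)
        return np.mean(np.abs(cgm - ref) <= tol)

    ref = np.array([60, 90, 120, 180, 250, 320], float)   # hypothetical YSI values (mg/dL)
    cgm = np.array([66, 84, 131, 171, 262, 300], float)   # hypothetical sensor readings
    print(f"MARD = {mard(cgm, ref):.1f}%, within 20/20%: {within_20_20(cgm, ref):.0%}")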
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. Because the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements, which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
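A minimal sketch of the bias-correction idea described above (toy arrays, not the GFS or its assimilation output) estimates a correction tendency from the time-mean analysis increment over a 6-hr window and adds it as a forcing term:

    import numpy as np

    def bias_correction_forcing(analysis_increments):
        """Time-mean analysis increment divided by the 6-hr assimilation window:
        a constant correction tendency (per hour) that nudges the model toward the analyses."""
        return np.mean(analysis_increments, axis=0) / 6.0

    def corrected_tendency(model_tendency, forcing):
        """Online correction: add the estimated forcing to the model tendency equation."""
        return model_tendency + forcing

    # Hypothetical example: 120 assimilation cycles of temperature increments on a 2x3 grid (K)
    rng = np.random.default_rng(1)
    increments = 0.4 + 0.1 * rng.standard_normal((120, 2, 3))   # model runs ~0.4 K too cold
    forcing = bias_correction_forcing(increments)               # K per hour
    print(corrected_tendency(np.zeros((2, 3)), forcing))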
Near Misses in Financial Trading: Skills for Capturing and Averting Error.
Leaver, Meghan; Griffiths, Alex; Reader, Tom
2018-05-01
The aims of this study were (a) to determine whether near-miss incidents in financial trading contain information on the operator skills and systems that detect and prevent near misses and the patterns and trends revealed by these data and (b) to explore if particular operator skills and systems are found as important for avoiding particular types of error on the trading floor. In this study, we examine a cohort of near-miss incidents collected from a financial trading organization using the Financial Incident Analysis System and report on the nontechnical skills and systems that are used to detect and prevent error in this domain. One thousand near-miss incidents are analyzed using distribution, mean, chi-square, and associative analysis to describe the data; reliability is provided. Slips/lapses (52%) and human-computer interface problems (21%) often occur alone and are the main contributors to error causation, whereas the prevention of error is largely a result of teamwork (65%) and situation awareness (46%) skills. No matter the cause of error, situation awareness and teamwork skills are used most often to detect and prevent the error. Situation awareness and teamwork skills appear universally important as a "last line" of defense for capturing error, and data from incident-monitoring systems can be analyzed in a fashion more consistent with a "Safety-II" approach. This research provides data for ameliorating risk within financial trading organizations, with implications for future risk management programs and regulation.
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
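The multiple-rule bias can be demonstrated with a small simulation in the spirit of the analysis (this is not the authors' code; the sample sizes, rules, and data model are arbitrary): on each synthetic dataset, the rule with the minimum cross-validated error estimate is selected, and its estimate is compared with the error measured on a large hold-out set.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    rules = {"lda": LinearDiscriminantAnalysis(),
             "3nn": KNeighborsClassifier(3),
             "tree": DecisionTreeClassifier(max_depth=3, random_state=0)}

    gaps = []
    for rep in range(50):                                   # 50 synthetic "studies"
        X_all, y_all = make_classification(n_samples=5060, n_features=10,
                                           n_informative=3, random_state=rep)
        X, y = X_all[:60], y_all[:60]                       # small sample seen by the analyst
        X_hold, y_hold = X_all[60:], y_all[60:]             # large hold-out ~ true error
        est = {name: 1.0 - cross_val_score(clf, X, y, cv=5).mean()
               for name, clf in rules.items()}
        best = min(est, key=est.get)                        # rule with minimum estimated error
        true_err = 1.0 - rules[best].fit(X, y).score(X_hold, y_hold)
        gaps.append(true_err - est[best])                   # positive gap = optimistic report
    print(f"mean optimism of the selected rule's error estimate: {np.mean(gaps):+.3f}")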
Analysis for nickel (3 and 4) in positive plates from nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Lewis, Harlan L.
1994-01-01
The NASA-Goddard procedure for destructive physical analysis (DPA) of nickel-cadmium cells contains a method for analysis of residual charged nickel as NiOOH in the positive plates at complete cell discharge, also known as nickel precharge. In the method, the Ni(III) is treated with an excess of an Fe(II) reducing agent and then back titrated with permanganate. The Ni(III) content is the difference between Fe(II) equivalents and permanganate equivalents. Problems have arisen in analysis at NAVSURFWARCENDIV, Crane because for many types of cells, particularly AA-size and some 'space-qualified' cells, zero or negative Ni(III) contents are recorded for which the manufacturer claims 3-5 percent precharge. Our approach to this problem was to reexamine the procedure for the source of error, and correct it or develop an alternative method.
RF transient analysis and stabilization of the phase and energy of the proposed PIP-II LINAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of RF transients and their compensation in an H- linear accelerator. Existing tools in this area either focus on electron LINACs or lack fundamental details about the LLRF system that are necessary to provide realistic performance estimates. In our paper we begin with a discussion of our computational models followed by benchmarking with existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the PIP-II LINAC, followed by an analysis of calibration errors and how a Newton’s Method based feedback scheme can be used to regulate the beam energy to within the specified limits.
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2010 CFR
2010-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing ... more errors on plates 1-15. Multifunction vision tester criteria: Keystone Orthoscope, any error; OPTEC 2000, any error; Titmus Vision Tester, any error; Titmus II Vision Tester, any error. (3) In administering any of ...
49 CFR Appendix F to Part 240 - Medical Standards Guidelines
Code of Federal Regulations, 2011 CFR
2011-10-01
... greater guidance on the procedures that should be employed in administering the vision and hearing ... more errors on plates 1-15. Multifunction vision tester criteria: Keystone Orthoscope, any error; OPTEC 2000, any error; Titmus Vision Tester, any error; Titmus II Vision Tester, any error. (3) In administering any of ...
NASA Astrophysics Data System (ADS)
Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.
2016-06-01
This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel 𝒩 as the optimal Type II error exponent when discriminating between a large number of independent instances of 𝒩 and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator
Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B.; van Dieën, Jaap H.
2016-01-01
Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars—related to the user condition—and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by the 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation. PMID:27834911
On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.
Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H
2016-11-10
Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars-related to the user condition-and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by the 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.
The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...
NASA Astrophysics Data System (ADS)
Yang, Jing; Reichert, Peter; Abbaspour, Karim C.; Yang, Hong
2007-07-01
Calibration of hydrologic models is very difficult because of measurement errors in input and response, errors in model structure, and the large number of non-identifiable parameters of distributed models. The difficulties even increase in arid regions with high seasonal variation of precipitation, where the modelled residuals often exhibit high heteroscedasticity and autocorrelation. On the other hand, support of water management by hydrologic models is important in arid regions, particularly if there is increasing water demand due to urbanization. The use and assessment of model results for this purpose require a careful calibration and uncertainty analysis. Extending earlier work in this field, we developed a procedure to overcome (i) the problem of non-identifiability of distributed parameters by introducing aggregate parameters and using Bayesian inference, (ii) the problem of heteroscedasticity of errors by combining a Box-Cox transformation of results and data with seasonally dependent error variances, (iii) the problems of autocorrelated errors, missing data and outlier omission with a continuous-time autoregressive error model, and (iv) the problem of the seasonal variation of error correlations with seasonally dependent characteristic correlation times. The technique was tested with the calibration of the hydrologic sub-model of the Soil and Water Assessment Tool (SWAT) in the Chaohe Basin in North China. The results demonstrated the good performance of this approach to uncertainty analysis, particularly with respect to the fulfilment of statistical assumptions of the error model. A comparison with an independent error model and with error models that only considered a subset of the suggested techniques clearly showed the superiority of the approach based on all the features (i)-(iv) mentioned above.
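A minimal sketch of the error-model ingredients listed in (ii) and (iii) above, assuming fixed (not inferred) parameter values and omitting the seasonal dependence for brevity, evaluates the log-likelihood of Box-Cox transformed residuals under a continuous-time AR(1) model that tolerates irregular observation times:

    import numpy as np

    def box_cox(y, lam):
        """Box-Cox transformation used to reduce heteroscedasticity of the residuals."""
        return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

    def ar1_loglik(obs, sim, t, lam=0.3, sigma=1.0, tau=5.0):
        """Log-likelihood of Box-Cox transformed residuals under a continuous-time
        AR(1) error model; irregular times `t` (days) accommodate missing data.
        lam, sigma (marginal std. dev.) and tau (correlation time, days) are fixed here
        but would be inferred jointly with the hydrologic parameters in a real calibration."""
        e = box_cox(np.asarray(obs, float), lam) - box_cox(np.asarray(sim, float), lam)
        ll = -0.5 * (np.log(2.0 * np.pi * sigma**2) + (e[0] / sigma) ** 2)
        for i in range(1, len(e)):
            rho = np.exp(-(t[i] - t[i - 1]) / tau)          # correlation across the gap
            var = sigma**2 * (1.0 - rho**2)
            innov = e[i] - rho * e[i - 1]
            ll += -0.5 * (np.log(2.0 * np.pi * var) + innov**2 / var)
        return ll

    t = np.array([1, 2, 3, 6, 7, 8], float)                 # a gap between days 3 and 6
    obs = np.array([12.0, 10.5, 9.8, 30.2, 22.1, 18.0])     # hypothetical observed flows
    sim = np.array([11.0, 11.2, 10.0, 25.0, 24.0, 19.0])    # hypothetical simulated flows
    print(ar1_loglik(obs, sim, t))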
Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M
2017-04-01
A carboxy methyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to the Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested that the sorption data fitted best to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the fit of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
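The linear-versus-non-linear fitting issue can be illustrated with a short sketch (the equilibrium data, initial guesses, and error functions below are hypothetical, and only two of the several error parameters are shown):

    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(Ce, Q0, b):
        """Langmuir isotherm: qe = Q0*b*Ce / (1 + b*Ce)."""
        return Q0 * b * Ce / (1.0 + b * Ce)

    def freundlich(Ce, Kf, n):
        """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
        return Kf * Ce ** (1.0 / n)

    # Hypothetical equilibrium data: Ce (ug/mL) and qe (ug/g)
    Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    qe = np.array([310.0, 520.0, 790.0, 1150.0, 1370.0, 1480.0])

    for name, model, p0 in [("Langmuir", langmuir, (1500.0, 0.1)),
                            ("Freundlich", freundlich, (400.0, 2.0))]:
        popt, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10_000)
        pred = model(Ce, *popt)
        sse = np.sum((qe - pred) ** 2)                      # sum of squared errors
        are = 100.0 * np.mean(np.abs(qe - pred) / qe)       # average relative error (%)
        print(f"{name}: params = {np.round(popt, 3)}, SSE = {sse:.0f}, ARE = {are:.1f}%")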
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
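A hedged sketch of the general LASSO-then-classifier pipeline follows, using synthetic data in place of the TEJ financial ratios; note that, unlike a fully rigorous evaluation, the variable selection here is done once on the full sample rather than nested inside each cross-validation fold, and the Type I/II error definitions (GCD firm missed versus non-GCD firm flagged) are assumptions for the example.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import StratifiedKFold
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for the financial ratios: y = 1 marks a GCD firm
    X, y = make_classification(n_samples=172, n_features=30, n_informative=6,
                               weights=[0.72, 0.28], random_state=0)

    # Step 1: LASSO-based variable selection (done once here for brevity)
    lasso = LassoCV(cv=5, random_state=0).fit(StandardScaler().fit_transform(X), y)
    selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

    # Step 2: SVM on the selected variables, assessed by fivefold cross validation
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    acc, type1, type2 = [], [], []
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
        pred = clf.fit(X[tr][:, selected], y[tr]).predict(X[te][:, selected])
        acc.append(np.mean(pred == y[te]))
        type1.append(np.mean(pred[y[te] == 1] == 0))   # GCD firm classified as non-GCD
        type2.append(np.mean(pred[y[te] == 0] == 1))   # non-GCD firm classified as GCD
    print(f"accuracy {np.mean(acc):.2%}, type I {np.mean(type1):.2%}, type II {np.mean(type2):.2%}")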
Bae, Hyoung Won; Lee, Yun Ha; Kim, Do Wook; Lee, Taekjune; Hong, Samin; Seong, Gong Je; Kim, Chan Yun
2016-08-01
The objective of the study is to examine the effect of trabeculectomy on intraocular lens power calculations in patients with open-angle glaucoma (OAG) undergoing cataract surgery. The design is retrospective data analysis. There are a total of 55 eyes of 55 patients with OAG who had a cataract surgery alone or in combination with trabeculectomy. We classified OAG subjects into the following groups based on surgical history: only cataract surgery (OC group), cataract surgery after prior trabeculectomy (CAT group), and cataract surgery performed in combination with trabeculectomy (CCT group). Differences between actual and predicted postoperative refractive error. Mean error (ME, difference between postoperative and predicted SE) in the CCT group was significantly lower (towards myopia) than that of the OC group (P = 0.008). Additionally, mean absolute error (MAE, absolute value of ME) in the CAT group was significantly greater than in the OC group (P = 0.006). Using linear mixed models, the ME calculated with the SRK II formula was more accurate than the ME predicted by the SRK T formula in the CAT (P = 0.032) and CCT (P = 0.035) groups. The intraocular lens power prediction accuracy was lower in the CAT and CCT groups than in the OC group. The prediction error was greater in the CAT group than in the OC group, and the direction of the prediction error tended to be towards myopia in the CCT group. The SRK II formula may be more accurate in predicting residual refractive error in the CAT and CCT groups. © 2016 Royal Australian and New Zealand College of Ophthalmologists.
Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.
Yau, Joanna Oi-Yue; McNally, Gavan P
2015-01-07
Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning is instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received AAV-hSYN-eYFP vector that was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions. Copyright © 2015 the authors 0270-6474/15/350074-10$15.00/0.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
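A minimal sketch of OLP regression with bootstrap confidence intervals is given below (this is not the smatr package; the paired data are invented). The OLP slope is the geometric-mean slope, sign(r)·SDy/SDx, and the intercept follows from the means:

    import numpy as np

    def olp_fit(x, y):
        """Ordinary least products (geometric-mean) regression coefficients."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        r = np.corrcoef(x, y)[0, 1]
        slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        intercept = y.mean() - slope * x.mean()
        return slope, intercept

    def olp_bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence intervals for the OLP slope and intercept."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, float), np.asarray(y, float)
        draws = np.array([olp_fit(x[idx], y[idx])
                          for idx in rng.integers(0, len(x), size=(n_boot, len(x)))])
        lo, hi = 100.0 * alpha / 2.0, 100.0 * (1.0 - alpha / 2.0)
        return np.percentile(draws, [lo, hi], axis=0)   # rows: lower/upper; cols: slope/intercept

    # Hypothetical paired measurements from two methods, both subject to error (Model II)
    x = np.array([3.1, 4.0, 5.2, 6.1, 7.3, 8.0, 9.4, 10.2])
    y = np.array([3.4, 4.4, 5.0, 6.6, 7.1, 8.5, 9.1, 10.9])
    print(olp_fit(x, y))
    print(olp_bootstrap_ci(x, y))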
Goodman, Stephen I; Binard, Robert J; Woontner, Michael R; Frerman, Frank E
2002-01-01
Glutaric acidemia type II is a human inborn error of metabolism which can be due to defects in either subunit of electron transfer flavoprotein (ETF) or in ETF:ubiquinone oxidoreductase (ETF:QO), but few disease-causing mutations have been described. The ETF:QO gene is located on 4q33, and contains 13 exons. Primers to amplify these exons are presented, together with mutations identified by molecular analysis of 20 ETF:QO-deficient patients. Twenty-one different disease-causing mutations were identified on 36 of the 40 chromosomes.
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
CTEQ-TEA parton distribution functions and HERA Run I and II combined data
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Dulat, Sayipjamal; Gao, Jun; Guzzi, Marco; Huston, Joey; Nadolsky, Pavel; Pumplin, Jon; Schmidt, Carl; Stump, Daniel; Yuan, C.-P.
2017-02-01
We analyze the impact of the recent HERA Run I+II combination of inclusive deep inelastic scattering cross-section data on the CT14 global analysis of parton distribution functions (PDFs). New PDFs at next-to-leading order and next-to-next-to-leading order, called CT14HERA2, are obtained by a refit of the CT14 data ensembles, in which the HERA Run I combined measurements are replaced by the new HERA Run I+II combination. The CT14 functional parametrization of PDFs is flexible enough to allow good descriptions of different flavor combinations, so we use the same parametrization for CT14HERA2 but with an additional shape parameter for describing the strange quark PDF. We find that the HERA I+II data can be fit reasonably well, and both CT14 and CT14HERA2 PDFs can describe equally well the non-HERA data included in our global analysis. Because the CT14 and CT14HERA2 PDFs agree well within the PDF errors, we continue to recommend CT14 PDFs for the analysis of LHC Run 2 experiments.
Living systematic reviews: 3. Statistical methods for updating meta-analyses.
Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian
2017-11-01
A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods (the law of the iterated logarithm and the Shuster method) control primarily for inflation of the type I error, and two other methods (trial sequential analysis and sequential meta-analysis) control for type I and II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wang, Lin
The literature is reviewed regarding the difference between planned contrasts, ANOVA, and unplanned contrasts. The relationship between the statistical power of a test method and the Type I and Type II error rates is first explored to provide a framework for the discussion. The concepts and formulation of contrasts, and of orthogonal and non-orthogonal contrasts, are…
Ying, Gui-shuang; Maguire, Maureen; Quinn, Graham; Kulp, Marjean Taylor; Cyert, Lynn
2011-12-28
To evaluate, by receiver operating characteristic (ROC) analysis, the accuracy of three refractive error screening instruments in detecting eye conditions among 3- to 5-year-old Head Start preschoolers and to evaluate differences in accuracy between instruments and screeners and by age of the child. Children participating in the Vision In Preschoolers (VIP) Study (n = 4040) had screening tests administered by pediatric eye care providers (phase I) or by both nurse and lay screeners (phase II). Noncycloplegic retinoscopy (NCR), the Retinomax Autorefractor (Nikon, Tokyo, Japan), and the SureSight Vision Screener (SureSight, Alpharetta, GA) were used in phase I, and Retinomax and SureSight were used in phase II. Pediatric eye care providers performed a standardized eye examination to identify amblyopia, strabismus, significant refractive error, and reduced visual acuity. The accuracy of the screening tests was summarized by the area under the ROC curve (AUC) and compared between instruments and screeners and by age group. The three screening tests had a high AUC for all categories of screening personnel. The AUC for detecting any VIP-targeted condition was 0.83 for NCR, 0.83 (phase I) to 0.88 (phase II) for Retinomax, and 0.86 (phase I) to 0.87 (phase II) for SureSight. The AUC was 0.93 to 0.95 for detecting group 1 (most severe) conditions and did not differ between instruments or screeners or by age of the child. NCR, Retinomax, and SureSight had similar and high accuracy in detecting vision disorders in preschoolers across all types of screeners and ages of children, consistent with previously reported results at specificity levels of 90% and 94%.
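For readers unfamiliar with the AUC summary used above, the following sketch computes an ROC curve and AUC on synthetic screening scores (the score distributions and the 90% specificity operating point are invented for illustration, not VIP Study data):

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    # Hypothetical screening data: 1 = child has a VIP-targeted condition
    has_condition = rng.integers(0, 2, size=400)
    # Toy instrument readings, shifted upward for affected children
    reading = np.where(has_condition == 1,
                       rng.normal(2.0, 1.0, size=400),
                       rng.normal(0.5, 1.0, size=400))

    auc = roc_auc_score(has_condition, reading)
    fpr, tpr, thresholds = roc_curve(has_condition, reading)
    sens_at_90spec = tpr[fpr <= 0.10].max()      # sensitivity at ~90% specificity
    print(f"AUC = {auc:.2f}, sensitivity at 90% specificity = {sens_at_90spec:.2f}")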
NASA Astrophysics Data System (ADS)
Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian
2017-03-01
Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I + II + III (FI + II + III) supernatant in human albumin (HA) separation to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with 6 of 8 normal batches samples, and evaluated by the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of the total protein (TP) and HA separately by using partial least squares regression (PLS-R) strategy, and the models were validated by 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA. The results of this study indicated that NIRS is an effective tool and could be successfully used for qualitative and quantitative monitoring the ethanol precipitation process of FI + II + III supernatant simultaneously. This research has significant reference value for assuring the quality and improving the recovery ratio of HA in industrialization scale by using NIRS.
Li, Can; Wang, Fei; Zang, Lixuan; Zang, Hengchang; Alcalà, Manel; Nie, Lei; Wang, Mingyu; Li, Lian
2017-03-15
Nowadays, as a powerful process analytical tool, near infrared spectroscopy (NIRS) has been widely applied in process monitoring. In present work, NIRS combined with multivariate analysis was used to monitor the ethanol precipitation process of fraction I+II+III (FI+II+III) supernatant in human albumin (HA) separation to achieve qualitative and quantitative monitoring at the same time and assure the product's quality. First, a qualitative model was established by using principal component analysis (PCA) with 6 of 8 normal batches samples, and evaluated by the remaining 2 normal batches and 3 abnormal batches. The results showed that the first principal component (PC1) score chart could be successfully used for fault detection and diagnosis. Then, two quantitative models were built with 6 of 8 normal batches to determine the content of the total protein (TP) and HA separately by using partial least squares regression (PLS-R) strategy, and the models were validated by 2 remaining normal batches. The determination coefficient of validation (Rp2), root mean square error of cross validation (RMSECV), root mean square error of prediction (RMSEP) and ratio of performance deviation (RPD) were 0.975, 0.501 g/L, 0.465 g/L and 5.57 for TP, and 0.969, 0.530 g/L, 0.341 g/L and 5.47 for HA, respectively. The results showed that the established models could give a rapid and accurate measurement of the content of TP and HA. The results of this study indicated that NIRS is an effective tool and could be successfully used for qualitative and quantitative monitoring the ethanol precipitation process of FI+II+III supernatant simultaneously. This research has significant reference value for assuring the quality and improving the recovery ratio of HA in industrialization scale by using NIRS. Copyright © 2016 Elsevier B.V. All rights reserved.
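A minimal sketch of the PLS-R calibration statistics reported above (RMSECV, RMSEP, and RPD) on synthetic spectra is shown below; the latent-variable data model, component count, and fold count are assumptions for the example, not the NIRS models described in the paper:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_cal, n_val, n_wave = 60, 20, 200

    # Synthetic "spectra": two latent components plus noise, and a reference analyte value
    scores = rng.normal(size=(n_cal + n_val, 2))
    loadings = rng.normal(size=(2, n_wave))
    X = scores @ loadings + 0.05 * rng.normal(size=(n_cal + n_val, n_wave))
    y = 5.0 + 2.0 * scores[:, 0] - 1.0 * scores[:, 1] + 0.1 * rng.normal(size=n_cal + n_val)
    X_cal, y_cal, X_val, y_val = X[:n_cal], y[:n_cal], X[n_cal:], y[n_cal:]

    pls = PLSRegression(n_components=2)

    # RMSECV from cross validation on the calibration batches
    y_cv = cross_val_predict(pls, X_cal, y_cal, cv=6).ravel()
    rmsecv = np.sqrt(np.mean((y_cal - y_cv) ** 2))

    # RMSEP and RPD on the independent validation batches
    y_pred = pls.fit(X_cal, y_cal).predict(X_val).ravel()
    rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
    rpd = np.std(y_val, ddof=1) / rmsep
    print(f"RMSECV = {rmsecv:.3f}, RMSEP = {rmsep:.3f}, RPD = {rpd:.1f}")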
ERIC Educational Resources Information Center
Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.
2002-01-01
Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…
Errors in nonword repetition: bridging short- and long-term memory.
Santos, F H; Bueno, O F A; Gathercole, S E
2006-03-01
According to the working memory model, the phonological loop is the component of working memory specialized in processing and manipulating limited amounts of speech-based information. The Children's Test of Nonword Repetition (CNRep) is a suitable measure of phonological short-term memory for English-speaking children, which was validated by the Brazilian Children's Test of Pseudoword Repetition (BCPR) as a Portuguese-language version. The objectives of the present study were: i) to investigate developmental aspects of the phonological memory processing by error analysis in the nonword repetition task, and ii) to examine phoneme (substitution, omission and addition) and order (migration) errors made in the BCPR by 180 normal Brazilian children of both sexes aged 4-10, from preschool to 4th grade. The dominant error was substitution [F(3,525) = 180.47; P < 0.0001]. The performance was age-related [F(4,175) = 14.53; P < 0.0001]. The length effect, i.e., more errors in long than in short items, was observed [F(3,519) = 108.36; P < 0.0001]. In 5-syllable pseudowords, errors occurred mainly in the middle of the stimuli, before the syllabic stress [F(4,16) = 6.03; P = 0.003]; substitutions appeared more at the end of the stimuli, after the stress [F(12,48) = 2.27; P = 0.02]. In conclusion, the BCPR error analysis supports the idea that phonological loop capacity is relatively constant during development, although school learning increases the efficiency of this system. Moreover, there are indications that long-term memory contributes to holding memory trace. The findings were discussed in terms of distinctiveness, clustering and redintegration hypotheses.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-04-01
The work proposed an optimization approach for structural sensor placement to improve the performance of vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each of the modal virtual sensing error system was contributed by both the modal observability levels for the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using the approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN) which is considered as Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of marine system and ship structure (substructure of floating offshore wind turbine considering hydrodynamics performances and bulk carrier bottom stiffened panels considering structure performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
An introduction to multiplicity issues in clinical trials: the what, why, when and how.
Li, Guowei; Taljaard, Monica; Van den Heuvel, Edwin R; Levine, Mitchell Ah; Cook, Deborah J; Wells, George A; Devereaux, Philip J; Thabane, Lehana
2017-04-01
In clinical trials it is not uncommon to face a multiple testing problem which can have an impact on both type I and type II error rates, leading to inappropriate interpretation of trial results. Multiplicity issues may need to be considered at the design, analysis and interpretation stages of a trial. The proportion of trial reports not adequately correcting for multiple testing remains substantial. The purpose of this article is to provide an introduction to multiple testing issues in clinical trials, and to reduce confusion around the need for multiplicity adjustments. We use a tutorial, question-and-answer approach to address the key issues of why, when and how to consider multiplicity adjustments in trials. We summarize the relevant circumstances under which multiplicity adjustments ought to be considered, as well as options for carrying out multiplicity adjustments in terms of trial design factors including Population, Intervention/Comparison, Outcome, Time frame and Analysis (PICOTA). Results are presented in an easy-to-use table and flow diagrams. Confusion about multiplicity issues can be reduced or avoided by considering the potential impact of multiplicity on type I and II errors and, if necessary pre-specifying statistical approaches to either avoid or adjust for multiplicity in the trial protocol or analysis plan. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
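As an illustration of the "how" of multiplicity adjustment, the sketch below applies Bonferroni and Holm corrections to a set of hypothetical endpoint p-values using statsmodels; the p-values and the choice of methods are assumptions for demonstration only.

```python
# Illustrative multiplicity adjustment for a set of hypothetical trial p-values.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.049, 0.001, 0.210, 0.031]   # hypothetical endpoints

for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adj], reject.tolist())
```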
[Practical aspects regarding sample size in clinical research].
Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S
1996-01-01
Knowledge of the right sample size lets us judge whether results published in medical papers come from a suitable design and support a proper conclusion according to the statistical analysis. To estimate the sample size we must consider the type I error, type II error, variance, the size of the effect, and the significance level and power of the test. To decide which mathematical formula will be used, we must define what kind of study we have, that is, whether it is a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and we describe four simple examples of sample size estimation.
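A minimal sketch of the sample size calculation described above, for comparing two means under the normal approximation; the effect size, standard deviation, type I error, and power used below are illustrative assumptions.

```python
# Sample size per group for comparing two means (normal approximation):
#   n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided type I error
    z_beta = norm.ppf(power)            # power = 1 - type II error
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g. detect a 5 mmHg difference with SD 12 mmHg:
print(n_per_group(delta=5.0, sigma=12.0))   # about 91 per group
```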
Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.
Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N
2016-01-01
Hoijtink, Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factor that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances, and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.
Sustained attention deficits among HIV-positive individuals with comorbid bipolar disorder.
Posada, Carolina; Moore, David J; Deutsch, Reena; Rooney, Alexandra; Gouaux, Ben; Letendre, Scott; Grant, Igor; Atkinson, J Hampton
2012-01-01
Difficulties with sustained attention have been found among both persons with HIV infection (HIV+) and bipolar disorder (BD). The authors examined sustained attention among 39 HIV+ individuals with BD (HIV+/BD+) and 33 HIV-infected individuals without BD (HIV+/BD-), using the Conners' Continuous Performance Test-II (CPT-II). A Global Assessment of Functioning (GAF) score was also assigned to each participant as an overall indicator of daily functioning abilities. HIV+/BD+ participants had significantly worse performance on CPT-II omission errors, hit reaction time SE (Hit RT SE), variability of SE, and perseverations than HIV+/BD- participants. When examining CPT-II performance over the six study blocks, both HIV+/BD+ and HIV+/BD- participants evidenced worse performance on scores of commission errors and reaction times as the test progressed. The authors also examined the effect of current mood state (i.e., manic, depressive, euthymic) on CPT-II performance, but no significant differences were observed across the various mood states. HIV+/BD+ participants had significantly worse GAF scores than HIV+/BD- participants, which indicates poorer overall functioning in the dually-affected group; among HIV+/BD+ persons, significant negative correlations were found between GAF scores and CPT-II omission and commission errors, detectability, and perseverations, indicating a possible relationship between decrements in sustained attention and worse daily-functioning outcomes.
Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.
2015-01-01
Purpose Despite a robust clinical trial enterprise and encouraging phase II results, the vast minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
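The dependence of the positive predictive value on prior probability, type I error, and type II error can be sketched as follows; the prior probabilities and error rates below are illustrative, not the values estimated in the study.

```python
# Positive predictive value of a "positive" phase II trial, as a function of the
# prior probability that the drug is truly effective, the type I error (alpha)
# and the type II error (beta). Numbers below are illustrative, not the paper's.
def phase2_ppv(prior, alpha=0.05, beta=0.10):
    power = 1.0 - beta
    true_pos = prior * power
    false_pos = (1.0 - prior) * alpha
    return true_pos / (true_pos + false_pos)

for prior in (0.01, 0.10, 0.35):
    print(f"prior={prior:.2f}  PPV={phase2_ppv(prior):.2f}")
# prior=0.01 -> PPV ~0.15;  prior=0.35 -> PPV ~0.91
```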
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1986-01-01
Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
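A minimal sketch of the idea for a straight-line model: with two-sided exponential (Laplace) errors, maximizing the likelihood is equivalent to minimizing the sum of absolute residuals rather than the sum of squares. The toy data and model below are assumptions for illustration and do not reproduce the paper's oscillator-strength analysis.

```python
# Maximum-likelihood straight-line fit under two-sided exponential (Laplace)
# errors: the log-likelihood is maximized by minimizing sum(|residuals|)
# instead of sum(residuals**2) as in the Gaussian case. Toy data only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)
y = 2.0 + 0.5 * x + rng.laplace(scale=0.3, size=x.size)

def neg_loglike(params):
    a, b = params
    return np.abs(y - (a + b * x)).sum()   # proportional to -log L for Laplace errors

fit = minimize(neg_loglike, x0=[0.0, 0.0], method="Nelder-Mead")
print("intercept, slope:", fit.x)
```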
ERIC Educational Resources Information Center
Breaux, Kristina C.; Avitia, Maria; Koriakin, Taylor; Bray, Melissa A.; DeBiase, Emily; Courville, Troy; Pan, Xingyu; Witholt, Thomas; Grossman, Sandy
2017-01-01
This study investigated the relationship between specific cognitive patterns of strengths and weaknesses and the errors children make on oral language, reading, writing, spelling, and math subtests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants with scores from the KTEA-3 and either the Wechsler Intelligence…
Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D
2009-07-01
The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series excluding central incisor views were taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimator were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III respectively. Type I is currently used for teaching and patient care at our institution and was considered as the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical difference (P
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs into the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4 cycles, so they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve better error-correction performance and exhibit no error floor over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
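The building blocks named above can be sketched as small binary circulants; the code below assembles a toy block parity-check matrix from weight-0, weight-1 (CPM), and weight-2 (W2CM) circulants. The block size and shift values are arbitrary placeholders, not values derived from a perfect cyclic difference set.

```python
# Building blocks of a type-II QC-LDPC parity-check matrix: L x L circulants
# of column weight 0, 1 (CPM) and 2 (W2CM). In the actual construction the
# shift values would come from a perfect cyclic difference set (not shown).
import numpy as np

def circulant(L, shifts):
    """Sum (mod 2) of cyclic shifts of the L x L identity; shifts=[] gives the zero block."""
    M = np.zeros((L, L), dtype=np.uint8)
    for s in shifts:
        M = (M + np.roll(np.eye(L, dtype=np.uint8), s, axis=1)) % 2
    return M

L = 7
Z    = circulant(L, [])        # zero block, weight 0
CPM  = circulant(L, [3])       # circulant permutation matrix, weight 1
W2CM = circulant(L, [1, 5])    # weight-2 circulant

# Assemble a small 2 x 3 block parity-check matrix from these blocks.
H = np.block([[CPM, W2CM, Z],
              [Z,   CPM,  W2CM]])
print(H.shape, "column weights:", H.sum(axis=0))
```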
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Jafree, Sara Rizvi; Zakar, Rubeena; Zakar, Muhammad Zakria; Fischer, Florian
2016-01-05
There is an absence of formal error tracking systems in public sector hospitals of Pakistan and also a lack of literature concerning error reporting culture in the health care sector. Nurse practitioners have front-line knowledge and rich exposure about both the organizational culture and error sharing in hospital settings. The aim of this paper was to investigate the association between organizational culture and the culture of error reporting, as perceived by nurses. The authors used the "Practice Environment Scale-Nurse Work Index Revised" to measure the six dimensions of organizational culture. Seven questions were used from the "Survey to Solicit Information about the Culture of Reporting" to measure error reporting culture in the region. Overall, 309 nurses participated in the survey, including female nurses from all designations such as supervisors, instructors, ward-heads, staff nurses and student nurses. We used SPSS 17.0 to perform a factor analysis. Furthermore, descriptive statistics, mean scores and multivariable logistic regression were used for the analysis. Three areas were ranked unfavorably by nurse respondents, including: (i) the error reporting culture, (ii) staffing and resource adequacy, and (iii) nurse foundations for quality of care. Multivariable regression results revealed that all six categories of organizational culture, including: (1) nurse manager ability, leadership and support, (2) nurse participation in hospital affairs, (3) nurse participation in governance, (4) nurse foundations of quality care, (5) nurse-coworkers relations, and (6) nurse staffing and resource adequacy, were positively associated with higher odds of error reporting culture. In addition, it was found that married nurses and nurses on permanent contract were more likely to report errors at the workplace. Public healthcare services of Pakistan can be improved through the promotion of an error reporting culture, reducing staffing and resource shortages and the development of nursing care plans.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
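A minimal sketch of class (1) above: bootstrap resampling of (x, y) pairs to attach an error to an unweighted regression slope. The data are synthetic stand-ins for a distance-scale relation.

```python
# Bootstrap standard error of an unweighted regression slope (class 1 above).
# Toy data standing in for a distance-scale relation such as Tully-Fisher.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1.8, 2.6, 50)                    # e.g. log line width
y = -7.0 + 7.5 * x + rng.normal(0.0, 0.35, 50)   # e.g. absolute magnitude

slopes = []
for _ in range(2000):
    idx = rng.integers(0, len(x), len(x))        # resample pairs with replacement
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    slopes.append(slope)

print("slope =", np.polyfit(x, y, 1)[0], "+/-", np.std(slopes))
```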
NASA Technical Reports Server (NTRS)
Whiteman, David N.
2003-01-01
In a companion paper, the temperature dependence of Raman scattering and its influence on the Raman and Rayleigh-Mie lidar equations was examined. New forms of the lidar equation were developed to account for this temperature sensitivity. Here those results are used to derive the temperature dependent forms of the equations for the water vapor mixing ratio, aerosol scattering ratio, aerosol backscatter coefficient, and extinction to backscatter ratio (Sa). The error equations are developed, the influence of differential transmission is studied and different laser sources are considered in the analysis. The results indicate that the temperature functions become significant when using narrowband detection. Errors of 5% and more can be introduced in the water vapor mixing ratio calculation at high altitudes and errors larger than 10% are possible for calculations of aerosol scattering ratio and thus aerosol backscatter coefficient and extinction to backscatter ratio.
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in GC model was performed, considering two different criteria: sum of square error, and maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of an initialization scheme (specialized) for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, specialized initialization resulted in significant reduction to the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The obtained reduction happened between one and two orders of magnitude, compared with the simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
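A minimal sketch of the two criteria discussed above, assuming an illustrative linear-in-parameters retention model ln k = a + b/T: the least-squares fit is closed-form, and the minimax (maximum absolute error) fit is then initialized at the least-squares solution, loosely mirroring the idea of a specialized initialization.

```python
# Fit a schematic retention-factor model ln k = a + b/T under the two criteria
# discussed above: sum of squared errors (closed-form, since the model is
# linear in a and b) and maximum absolute error (minimax), with the minimax
# search started from the least-squares solution. Data are hypothetical.
import numpy as np
from scipy.optimize import minimize

T = np.array([320.0, 340.0, 360.0, 380.0, 400.0])     # column temperature, K
lnk = np.array([2.95, 2.41, 1.93, 1.52, 1.16])        # hypothetical ln(k) data

A = np.column_stack([np.ones_like(T), 1.0 / T])
ls_params, *_ = np.linalg.lstsq(A, lnk, rcond=None)   # sum-of-squares criterion

def max_abs_error(p):
    return np.max(np.abs(lnk - A @ p))

minimax = minimize(max_abs_error, x0=ls_params, method="Nelder-Mead")
print("SSE fit:     a, b =", ls_params)
print("minimax fit: a, b =", minimax.x, " max |error| =", minimax.fun)
```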
The advanced receiver 2: Telemetry test results in CTA 21
NASA Technical Reports Server (NTRS)
Hinedi, S.; Bevan, R.; Marina, M.
1991-01-01
Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block-III Receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III Receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal to noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX-II telemetry performance is comparable and sometimes superior to the BLK-III/BPA and BLK-III/SDA/SSA combinations.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Until recently, the medical research literature exhibited a substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of type I error, over the Neyman-Pearson hypothesis test, which considers the probabilities of both type I and type II error. Fisher's approach dichotomises results into significant or non-significant on the basis of a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Built on the same theory, these two approaches address the same objective and reach conclusions in their own way. Advances in computing techniques and the availability of statistical software have resulted in increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
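A small sketch of the power calculations mentioned above, for a two-sided two-sample comparison under the normal approximation; effect size, sample sizes, and alpha are illustrative.

```python
# Power of a two-sided, two-sample z-test as a function of effect size d and n
# per group: power ~= Phi(d*sqrt(n/2) - z_{1-alpha/2}) (normal approximation).
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    z_alpha = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * (n_per_group / 2) ** 0.5 - z_alpha)

for n in (20, 50, 100):
    print(n, round(power_two_sample(d=0.5, n_per_group=n), 2))
# power rises from about 0.35 (n=20) to about 0.94 (n=100) for d = 0.5
```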
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
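A minimal sketch of the central assumption (a time-invariant quiet-Sun centre-to-limb variation): estimate the CLV from annular medians over mu = cos(theta) and divide it out to obtain a contrast image. This is only a schematic stand-in for the paper's full processing and photometric calibration, and the disk centre and radius are assumed known.

```python
# Sketch of the core assumption above: estimate the quiet-Sun centre-to-limb
# variation (CLV) from annular medians and divide it out, leaving a contrast
# image in which plage/network stand out. The annular median is only a rough
# quiet-Sun proxy; disk centre and radius are assumed known.
import numpy as np

def remove_clv(image, xc, yc, radius, nbins=50):
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - xc, yy - yc) / radius
    on_disk = r < 1.0
    mu = np.sqrt(np.clip(1.0 - r ** 2, 0.0, 1.0))        # cos(heliocentric angle)

    bins = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.clip(np.digitize(mu, bins) - 1, 0, nbins - 1)
    clv = np.full(image.shape, np.nan)
    for b in range(nbins):
        sel = on_disk & (idx == b)
        if sel.any():
            clv[sel] = np.median(image[sel])              # per-annulus quiet-Sun estimate

    return np.where(on_disk, image / clv - 1.0, np.nan)   # calibrated contrast image
```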
USDA-ARS?s Scientific Manuscript database
The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are highly expressed in the testis, suggesting an important role in testis biology. Gene coding errors prevent the production of GnRH-II and GnRHR-II in many species, but both genes are functional in swine. We have demo...
Why Does a Method That Fails Continue To Be Used: The Answer
Templeton, Alan R.
2009-01-01
It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is a not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness of fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, a shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global method when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
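A minimal at-site sketch of the distribution fitting described above; scipy's maximum-likelihood fit is used purely as a stand-in for the regional L-moment procedure, and the synthetic record uses scipy's shape convention c = -0.15, which corresponds to the xi = 0.15 Fréchet (GEV Type II) shape mentioned above.

```python
# At-site GEV fit and return levels for an annual-maximum rainfall series
# (illustrative). The paper uses regional L-moments; scipy's MLE fit stands in.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annual_max_mm = genextreme.rvs(c=-0.15, loc=30.0, scale=8.0, size=30,
                               random_state=rng)   # synthetic 30-yr record

c, loc, scale = genextreme.fit(annual_max_mm)
for T in (10, 50, 100):                            # return periods in years
    depth = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)
    print(f"{T:3d}-yr depth: {depth:.1f} mm")
```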
Optimizing X-ray mirror thermal performance using matched profile cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Lin; Cocco, Daniele; Kelez, Nicholas
2015-08-07
To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11 below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.
Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim
2017-01-01
Background: The process of cleaning and shaping the canal is not an easy goal to obtain, as canal curvature played a significant role during the instrumentation of the curved canals. Aim: The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Materials and Methods: Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using K3XF rotary system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and put to statistical analysis. Statistical Analysis: Chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Results: In the present study, both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation. Although ledge formation and instrument separation by rotary K3XF file system was less as compared to hand Nitiflex. No perforation was seen in both the instrument groups. Conclusion: Canal curvature played a significant role during the instrumentation of the curved canals. Procedural errors such as ledge formation and instrument separation by rotary K3XF file system were less as compared to hand Nitiflex. PMID:29042727
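The group comparison described in the statistical analysis can be sketched with a chi-square test on a contingency table of procedural-error counts; the counts below are hypothetical, not the study's data.

```python
# Chi-square comparison of procedural-error counts between the two groups
# (counts below are hypothetical, not the study's data).
from scipy.stats import chi2_contingency

#                 ledge  separation  none
table = [[6, 3, 21],     # Group I: hand Nitiflex
         [2, 1, 27]]     # Group II: rotary K3XF

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```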
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
[27] J. Hagenauer, "Rate Compatible Punctured Convolutional Codes," in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987. ... achieved by using a low rate (r = 0.5), high constraint length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable rate code. ... investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error
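A minimal sketch of the puncturing idea referred to in the excerpt: bits of a rate-1/2 coded stream are deleted according to a periodic pattern to raise the rate to 2/3. The "encoder" is a placeholder repetition, and the pattern is a common textbook choice, not one taken from the report.

```python
# Puncturing a rate-1/2 coded bit stream to rate 2/3 by deleting bits according
# to a periodic pattern (pattern shown is illustrative).
import numpy as np

def puncture(coded_bits, pattern):
    """Keep coded bits where the tiled pattern is 1."""
    mask = np.resize(np.asarray(pattern, dtype=bool), coded_bits.size)
    return coded_bits[mask]

rng = np.random.default_rng(4)
info_bits = rng.integers(0, 2, 12)
coded = np.repeat(info_bits, 2)          # placeholder for a real rate-1/2 encoder output

# Rate 1/2 -> 2/3: of every 4 coded bits, transmit 3.
punctured = puncture(coded, pattern=[1, 1, 1, 0])
print(len(coded), "->", len(punctured), "coded bits  (rate 1/2 -> 2/3)")
```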
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Heavy flavor decay of Zγ at CDF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timothy M. Harrington-Taber
2013-01-01
Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected $p\bar{p} \to Z\gamma \to b\bar{b}$ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.
The Red Edge Problem in asteroid band parameter analysis
NASA Astrophysics Data System (ADS)
Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.
2016-04-01
Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
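A small sketch of how the band area ratio shifts with the Band II red-edge definition, using a synthetic continuum-removed spectrum; the spectrum, band boundaries, and cutoffs are illustrative assumptions.

```python
# Band area ratio (BAR = Band II area / Band I area) computed for different
# Band II red-edge cutoffs, using a synthetic continuum-removed spectrum.
import numpy as np

wl = np.linspace(0.75, 2.50, 351)                          # wavelength, microns
# Synthetic continuum-removed reflectance with absorptions near 1 and 2 microns.
spec = 1.0 - 0.25 * np.exp(-0.5 * ((wl - 0.95) / 0.10) ** 2) \
           - 0.15 * np.exp(-0.5 * ((wl - 1.95) / 0.25) ** 2)

def band_area(lo, hi):
    sel = (wl >= lo) & (wl <= hi)
    return np.trapz(1.0 - spec[sel], wl[sel])              # area below the continuum

band1 = band_area(0.75, 1.35)
for red_edge in (2.40, 2.45, 2.50):
    bar = band_area(1.35, red_edge) / band1
    print(f"red edge {red_edge:.2f} um  ->  BAR = {bar:.3f}")
```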
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
..., Medicare--Hospital Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program.... SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal... typographical error that is identified and corrected in the Correction of Errors section below. II. Summary of...
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
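The cluster-bootstrap idea can be sketched generically: resample whole clusters with replacement, refit, and take the standard deviation of the refitted coefficients as the SE. In the sketch below, ordinary least squares stands in for a Cox model fit, and the data frame and column names are hypothetical.

```python
# Generic cluster bootstrap for a regression coefficient: resample whole
# clusters with replacement, refit, and take the SD of the refitted estimates
# as the standard error. `fit_coef` is any user-supplied fitting routine
# (e.g. a Cox model fit); everything here is a schematic sketch.
import numpy as np
import pandas as pd

def cluster_bootstrap_se(df, cluster_col, fit_coef, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    estimates = []
    for _ in range(n_boot):
        sampled = rng.choice(clusters, size=len(clusters), replace=True)
        boot_df = pd.concat([df[df[cluster_col] == c] for c in sampled],
                            ignore_index=True)
        estimates.append(fit_coef(boot_df))
    return float(np.std(estimates, ddof=1))

def fit_coef(d):
    # Ordinary least squares slope, standing in for a Cox regression coefficient.
    X = np.column_stack([np.ones(len(d)), d["x"].to_numpy()])
    beta, *_ = np.linalg.lstsq(X, d["y"].to_numpy(), rcond=None)
    return beta[1]

# Usage with a hypothetical clustered data frame `df`:
# se = cluster_bootstrap_se(df, "clinic_id", fit_coef)
```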
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.
2017-01-01
In this paper, we develop an integrated inventory model considering imperfect quality items, inspection error, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and are detected during the screening process. However, two types of inspection error are possible: type I inspection error (a non-defective item is classified as defective) and type II inspection error (a defective item is classified as non-defective). The demand during the lead time is unknown, and it follows the normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are: to modify the integrated vendor and buyer inventory model, to establish the optimal solution using Kuhn-Tucker's conditions, and to apply the models. Based on the results of the application and the sensitivity analysis, the integrated model attains a lower total inventory cost than the separated inventory approach.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the angle calculated for the same wheel from the Ackerman principle. For a given linear rack displacement, the corresponding steering-arm angular displacements are determined while simultaneously ensuring the best transmission angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
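A minimal sketch of the steering-error definition against the Ackermann condition cot(delta_outer) - cot(delta_inner) = track/wheelbase; the vehicle geometry and the assumed linkage output are illustrative, not the paper's linkage model.

```python
# Steering error of the outer wheel relative to the Ackermann condition
#   cot(delta_outer) - cot(delta_inner) = track / wheelbase.
# Vehicle geometry and the linkage's actual outer angles are illustrative.
import numpy as np

track, wheelbase = 1.5, 2.6                       # metres (assumed)

def ackermann_outer(delta_inner):
    """Ideal outer-wheel angle for a given inner-wheel angle (radians)."""
    return np.arctan(1.0 / (1.0 / np.tan(delta_inner) + track / wheelbase))

delta_inner = np.radians(np.arange(5.0, 40.0, 5.0))
delta_outer_actual = delta_inner * 0.88           # stand-in for the linkage output

error_deg = np.degrees(delta_outer_actual - ackermann_outer(delta_inner))
for di, e in zip(np.degrees(delta_inner), error_deg):
    print(f"inner {di:4.1f} deg  ->  steering error {e:+.2f} deg")
```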
NASA Technical Reports Server (NTRS)
Westphal, Douglas L.; Russell, Philip (Technical Monitor)
1994-01-01
A set of 2,600 6-second, National Weather Service soundings from NASA's FIRE-II Cirrus field experiment is used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before being converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.
NASA Astrophysics Data System (ADS)
Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek
2017-11-01
Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow, and it is then benchmarked against M5 Tree model. To develop hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs comprised of statistically significant lagged combination of streamflow water level, are supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as the potential model inputs. To establish robust forecasting models, iterative input selection (IIS) algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT) applied on the IIS-selected variables. This resolved the frequencies contained in predictor data while constructing a wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) model. Forecasting ability of IIS-W-ANN is evaluated via correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square-error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While ANN models are seen to outperform M5 Tree executed for all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as stipulated by the better performance of the IIS coupled (ANN and M5 Tree) models relative to the models without IIS. When IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree are seen to attain significantly accurate performance relative to their standalone counterparts. Importantly, IIS-W-ANN model accuracy outweighs IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors, RRMSE = (15.65-21.00) % and MAPE = (14.79-20.78) %. Distinct geographic signature is evident where the most and least accurately forecasted streamflow data is attained for the Gwydir and Darling River, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and subsequently, its integration with MODWT resulting in enhanced performance of the models applied in streamflow forecasting.
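A schematic sketch of the wavelet-hybrid idea: decompose a lagged predictor series with a stationary wavelet transform (used here as a stand-in for MODWT) and feed the sub-band values to an MLP. The synthetic series, lag choice, and network settings are assumptions, and the IIS screening step is not reproduced.

```python
# Sketch of a wavelet-hybrid ANN forecast: decompose a lagged predictor series
# with a stationary wavelet transform (stand-in for MODWT) and use the sub-band
# values at each time step as features for an MLP. Data are synthetic.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n = 256                                            # multiple of 2**level for pywt.swt
flow = np.sin(np.arange(n) * 2 * np.pi / 12) + 0.3 * rng.standard_normal(n)

# Predict flow(t) from the wavelet sub-bands of flow(t-1).
lagged = np.roll(flow, 1)
coeffs = pywt.swt(lagged, "db4", level=3)          # [(cA3, cD3), (cA2, cD2), (cA1, cD1)]
features = np.column_stack([c for pair in coeffs for c in pair])

X_train, y_train = features[1:200], flow[1:200]
X_test, y_test = features[200:], flow[200:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                     random_state=0).fit(X_train, y_train)
rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"test RMSE: {rmse:.3f}")
```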
Open Label Extension of ISIS 301012 (Mipomersen) to Treat Familial Hypercholesterolemia
2016-08-01
Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders
Line profile variations in M giants - Clues to mass-loss and chromospheric heating mechanisms
NASA Technical Reports Server (NTRS)
Judge, P. G.; Luttermoser, D. G.; Neff, D. H.; Cuntz, M.; Stencel, R. E.
1993-01-01
Analysis is presented of time-series, high dispersion spectra of the Mg II h and k, Ca II H, and K lines of the semiregular giants Rho Per (M4 II-III, periodicity of about 50 days), R Lyr (M5 III, period of about 46 days), and g Her (M6 III, period of about 90 days). The fine error sensor on the IUE satellite and ground-based UBV photometry were used to relate line profile variations to photospheric variations. The above-mentioned stars were selected to study the relative importance of convective motions and global stellar pulsations in determining the structure of the outer atmospheres. Small amplitude changes, but substantial changes in the profiles of the Mg II and Ca II lines, were detected. It is contended that the observed variability is due to changes in chromospheric conditions and not variations within the circumstellar shell. The picture of a steady state chromosphere, which is modulated on long time scales, is corroborated by these observations. Localized heating is found in g Her.
NASA Astrophysics Data System (ADS)
Pickett, Brian K.; Cassen, Patrick; Durisen, Richard H.; Link, Robert
2000-02-01
In the paper ``The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions'' by Brian K. Pickett, Patrick Cassen, Richard H. Durisen, and Robert Link (ApJ, 529, 1034 [2000]), the wrong version of Figure 10 was published as a result of an error at the Press. The correct version of Figure 10 appears below. The Press sincerely regrets this error.
NASA Technical Reports Server (NTRS)
Massa, D.
1980-01-01
This paper discusses systematic errors which arise from exclusive use of the MK system to determine reddening. It is found that implementation of uvby, beta photometry to refine the qualitative MK grid substantially reduces stellar mismatch error. A working definition of 'identical' uvby, beta types is investigated and the relationship of uvby to B-V color excesses is determined. A comparison is also made of the hydrogen-based uvby, beta types with the MK system based on He and metal lines. A small core correlated effective temperature luminosity error in the MK system for the early B stars is observed along with a breakdown of the MK luminosity criteria for the late B stars. The second part investigates the wavelength dependence of interstellar extinction in the ultraviolet wavelength range observed with the TD-1 satellite. In this study the sets of identical stars employed to find reddening are determined more precisely than in previous studies and consist only of normal, nonsupergiant stars. Multivariate analysis of variance techniques in an unbiased coordinate system are used for determining the wavelength dependence of reddening.
Code of Federal Regulations, 2010 CFR
2010-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... errors or omissions that occurred before the publication of these regulations. Any reasonable method used... February 24, 1994, will be considered proper, provided that the method is consistent with the rules of...
Single trial detection of hand poses in human ECoG using CSP based feature extraction.
Kapeller, C; Schneider, C; Kamada, K; Ogawa, H; Kunii, N; Ortner, R; Pruckl, R; Guger, C
2014-01-01
Decoding brain activity of corresponding high-level tasks may lead to an independent and intuitively controlled Brain-Computer Interface (BCI). Most of today's BCI research focuses on analyzing the electroencephalogram (EEG), which provides only limited spatial and temporal resolution. Derived electrocorticographic (ECoG) signals allow the investigation of spatially highly focused task-related activation within the high-gamma frequency band, making the discrimination of individual finger movements or complex grasping tasks possible. Common spatial patterns (CSP) are commonly used for BCI systems and provide a powerful tool for feature optimization and dimensionality reduction. This work focused on the discrimination of (i) three complex hand movements, as well as (ii) hand movement and idle state. Two subjects S1 and S2 performed single `open', `peace' and `fist' hand poses in multiple trials. Signals in the high-gamma frequency range between 100 and 500 Hz were spatially filtered based on a CSP algorithm for (i) and (ii). Additionally, a manual feature selection approach was tested for (i). A multi-class linear discriminant analysis (LDA) showed for (i) an error rate of 13.89 % / 7.22 % and 18.42 % / 1.17 % for S1 and S2 using manually / CSP selected features, whereas for (ii) a two-class LDA led to a classification error of 13.39 % and 2.33 % for S1 and S2, respectively.
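A minimal two-class CSP sketch: spatial filters from the generalized eigendecomposition of class covariance matrices, log-variance features, then LDA. The synthetic trials below merely stand in for band-passed high-gamma ECoG and do not reproduce the study's recordings or its multi-class setup.

```python
# Minimal two-class CSP + LDA sketch: spatial filters from the generalized
# eigendecomposition of the class covariance matrices, log-variance features,
# then LDA. Synthetic trials stand in for band-passed high-gamma ECoG.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
n_trials, n_ch, n_samp = 40, 8, 200

def make_trials(gain):
    X = rng.standard_normal((n_trials, n_ch, n_samp))
    X[:, 0, :] *= gain                               # class-dependent power on channel 0
    return X

X = np.concatenate([make_trials(1.0), make_trials(2.5)])
y = np.array([0] * n_trials + [1] * n_trials)

def class_cov(trials):
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

C0, C1 = class_cov(X[y == 0]), class_cov(X[y == 1])
evals, evecs = eigh(C0, C0 + C1)                     # generalized eigenproblem
W = np.column_stack([evecs[:, :2], evecs[:, -2:]])   # extreme (most discriminative) filters

def features(trials):
    Z = np.einsum("cf,tfs->tcs", W.T, trials)        # spatially filtered trials
    return np.log(Z.var(axis=2))                     # log-variance per CSP component

clf = LinearDiscriminantAnalysis().fit(features(X), y)
print("training accuracy:", clf.score(features(X), y))
```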
First-principles energetics of water clusters and ice: A many-body analysis
NASA Astrophysics Data System (ADS)
Gillan, M. J.; Alfè, D.; Bartók, A. P.; Csányi, G.
2013-12-01
Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.
Huang, Sheau-Ling; Hsieh, Ching-Lin; Wu, Ruey-Meei
2017-01-01
Background The Beck Depression Inventory II (BDI-II) and the Taiwan Geriatric Depression Scale (TGDS) are self-report scales used for assessing depression in patients with Parkinson’s disease (PD) and geriatric people. The minimal detectable change (MDC) represents the least amount of change that indicates real difference (i.e., beyond random measurement error) for a single subject. Our aim was to investigate the test-retest reliability and MDC of the BDI-II and the TGDS in people with PD. Methods Seventy patients were recruited from special clinics for movement disorders at a medical center. The patients’ mean age was 67.7 years, and 63.0% of the patients were male. All patients were assessed with the BDI-II and the TGDS twice, 2 weeks apart. We used the intraclass correlation coefficient (ICC) to determine the reliability between test and retest. We calculated the MDC based on standard error of measurement. The MDC% was calculated (i.e., by dividing the MDC by the possible maximal score of the measure). Results The test-retest reliabilities of the BDI-II/TGDS were high (ICC = 0.86/0.89). The MDCs (MDC%s) of the BDI-II and TGDS were 8.7 (13.8%) and 5.4 points (18.0%), respectively. Both measures had acceptable to nearly excellent random measurement errors. Conclusions The test-retest reliabilities of the BDI-II and the TGDS are high. The MDCs of both measures are acceptable to nearly excellent in people with PD. These findings imply that the BDI-II and the TGDS are suitable for use in a research context and in clinical settings to detect real change in a single subject. PMID:28945776
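The MDC computation reported above follows the usual SEM-based formulas; in the sketch below the ICC and scale maximum are taken from the text, while the standard deviation is an assumed placeholder, so the output only roughly reproduces the reported values.

```python
# Minimal detectable change (95% confidence) from test-retest data:
#   SEM   = SD * sqrt(1 - ICC)
#   MDC95 = 1.96 * sqrt(2) * SEM
#   MDC%  = MDC95 / maximal scale score
# ICC = 0.86 and the BDI-II maximum of 63 come from the text; SD = 8.4 is an
# assumed placeholder.
import math

def mdc95(sd, icc, scale_max):
    sem = sd * math.sqrt(1.0 - icc)
    mdc = 1.96 * math.sqrt(2.0) * sem
    return mdc, 100.0 * mdc / scale_max

mdc, mdc_pct = mdc95(sd=8.4, icc=0.86, scale_max=63)
print(f"MDC95 = {mdc:.1f} points ({mdc_pct:.1f}% of scale)")
```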
Synergistic Effects of Phase Folding and Wavelet Denoising with Applications in Light Curve Analysis
2016-09-15
future research. II. Astrostatistics. Historically, astronomy has been a data-driven science. Larger and more precise data sets have led to the ... forthcoming Large Synoptic Survey Telescope (LSST), the human-centric approach to astronomy is becoming strained [13, 24, 25, 63]. More than ever ... process. One use of the filtering process is to remove artifacts from the data set. In the context of time-domain astronomy, an artifact is an error in
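Since the excerpt concerns phase folding of light curves, a minimal phase-folding sketch is given below; the observation times, period, and epoch are arbitrary.

```python
# Phase-fold a light curve on a trial period: phase = ((t - t0) / P) mod 1.
# Times, magnitudes, period and epoch below are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0.0, 30.0, 300))                    # observation times (days)
period, t0 = 1.7, 0.4
mag = 12.0 + 0.3 * np.sin(2 * np.pi * (t - t0) / period) \
      + 0.02 * rng.standard_normal(t.size)

phase = ((t - t0) / period) % 1.0
order = np.argsort(phase)
folded_phase, folded_mag = phase[order], mag[order]          # ready for binning/denoising
print(folded_phase[:5], folded_mag[:5])
```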
A Test Strategy for High Resolution Image Scanners.
1983-10-01
for multivariate analysis. Holt, Rinehart and Winston, Inc., New York. Graybill, F.A., 1961: An Introduction to Linear Statistical Models, Volume I. ... The linear estimation model for the polynomial coefficients can be set up as y = Xβ + ε (8), with X the design matrix. ... Resolution image scanner: MTF; geometrical and radiometric performance; dynamic range, linearity, noise; dynamic scanning errors; response uniformity; skewness of
NASA Technical Reports Server (NTRS)
Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert
2004-01-01
The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single metrology gauge level is developed and linearly propagated to errors in interferometer delay. In this manner delay error sensitivity to various error parameters or their combination can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. The residual error results are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat. The non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
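A generic sketch of removing low-order field-dependent terms from an error map by least squares and reporting the residual RMS; plain 2-D polynomial terms stand in for the Zernike-component formulation, and the error map is synthetic.

```python
# Remove low-order field-dependent terms from a delay-error map by least squares
# and report the residual RMS. Generic 2-D polynomial terms stand in for the
# Zernike-component formulation; the error map itself is synthetic.
import numpy as np

rng = np.random.default_rng(8)
x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
xf, yf = x.ravel(), y.ravel()

# Synthetic field-dependent delay error: smooth low-order part + small residual.
delay_err = 3.0 * xf + 1.5 * yf - 2.0 * xf * yf + 0.8 * (xf**2 - yf**2) \
            + 0.05 * rng.standard_normal(xf.size)

def rms_after_removal(order):
    terms = [xf**i * yf**j for i in range(order + 1)
                           for j in range(order + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, delay_err, rcond=None)
    return np.sqrt(np.mean((delay_err - A @ coeffs) ** 2))

for order in range(4):
    print(f"terms up to order {order}: residual RMS = {rms_after_removal(order):.3f}")
```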
Dawdy, M R; Munter, D W; Gilmore, R A
1997-03-01
This study was designed to examine the relationship between patient entry rates (a measure of physician work load) and documentation errors/omissions in both handwritten and dictated emergency treatment records. The study was carried out in two phases. Phase I examined handwritten records and Phase II examined dictated and transcribed records. A total of 838 charts for three common chief complaints (chest pain, abdominal pain, asthma/chronic obstructive pulmonary disease) were retrospectively reviewed and scored for the presence or absence of 11 predetermined criteria. Patient entry rates were determined by reviewing the emergency department patient registration logs. The data were analyzed using simple correlation and linear regression analysis. A positive correlation was found between patient entry rates and documentation errors in handwritten charts. No such correlation was found in the dictated charts. We conclude that work load may negatively affect documentation accuracy when charts are handwritten. However, the use of dictation services may minimize or eliminate this effect.
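The correlation and linear regression analysis described above can be sketched as follows; the entry rates and error counts are hypothetical.

```python
# Correlation and simple linear regression between patient entry rate and the
# number of documentation errors per chart (hypothetical values).
from scipy.stats import linregress

entries_per_hour = [2.1, 2.8, 3.4, 4.0, 4.6, 5.2, 5.9]
errors_per_chart = [1.1, 1.3, 1.2, 1.8, 2.0, 2.4, 2.5]

fit = linregress(entries_per_hour, errors_per_chart)
print(f"r = {fit.rvalue:.2f}, slope = {fit.slope:.2f}, p = {fit.pvalue:.4f}")
```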
Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Angel; Kohler, Johannes
2014-01-01
Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design: A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. A total sample of 364 children aged 3-11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research.
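A minimal sketch of how specificity and positive predictive value are obtained from screening results cross-tabulated against the clinical reference exam; the 2x2 counts below are hypothetical, not the study's data:

```python
# Sketch: specificity and positive predictive value of a screening test against
# a clinical reference exam. The counts are illustrative placeholders.
def screening_metrics(tp, fp, fn, tn):
    specificity = tn / (tn + fp)   # true negatives among all reference-negatives
    ppv = tp / (tp + fp)           # true refractive errors among screen-positives
    sensitivity = tp / (tp + fn)
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_metrics(tp=13, fp=9, fn=2, tn=210)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} PPV={ppv:.3f}")
```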
Drought Persistence Errors in Global Climate Models
NASA Astrophysics Data System (ADS)
Moon, H.; Gudmundsson, L.; Seneviratne, S. I.
2018-04-01
The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed the degree to which (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
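A minimal sketch of the persistence estimate used above, assuming a monthly precipitation series and defining a dry month as a negative anomaly relative to its calendar-month climatology; the demo series is synthetic:

```python
# Sketch: drought persistence estimated as the dry-to-dry transition probability.
import numpy as np

def dry_to_dry_probability(precip_monthly):
    """precip_monthly: 1-D array of monthly precipitation, length a multiple of 12."""
    x = np.asarray(precip_monthly, dtype=float)
    months = np.arange(x.size) % 12
    climatology = np.array([x[months == m].mean() for m in range(12)])
    dry = x < climatology[months]                  # negative anomaly -> dry month
    pairs = dry[:-1] & dry[1:]                     # dry month followed by dry month
    return pairs.sum() / max(dry[:-1].sum(), 1)    # P(dry at t+1 | dry at t)

rng = np.random.default_rng(1)
demo = rng.gamma(shape=2.0, scale=50.0, size=12 * 40)   # synthetic 40-year record
print("dry-to-dry transition probability:", round(dry_to_dry_probability(demo), 3))
```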
Contrast sensitivity and its determinants in people with diabetes: SN-DREAMS-II, Report No 6
Gella, L; Raman, R; Pal, S S; Ganesan, S; Sharma, T
2017-01-01
Purpose To assess contrast sensitivity (CS) and to elucidate the factors associated with CS among subjects with type 2 diabetes in a cross-sectional population-based study. Patients and methods Subjects were recruited from a follow-up cohort, Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular genetics Study (SN-DREAMS II). Of 958 subjects who were followed up in SN-DREAMS II, a subset of 653 subjects was included in the analysis. All subjects underwent a comprehensive eye examination, which included CS assessment using the Pelli–Robson chart. The cross-sectional association between CS and independent variables was assessed using stepwise linear regression analysis. A P-value of <0.05 was considered statistically significant. Results The mean age of the study sample was 58.7±9.41 (44–87) years. Mean CS of the study sample was 1.32±0.20 (range: 0–1.65) log units. CS was negatively and significantly correlated with age, duration of diabetes, hemoglobin level, vibration perception threshold (VPT) value, albuminuria, best corrected visual acuity (BCVA), refractive error, total error score (TEM) of FM 100 hue test, and mean retinal sensitivity. In multiple regression analysis, after adjusting for all the related factors, CS was significantly associated with BCVA (β=−0.575; P<0.001), VPT (β=−0.003; P=0.010), severity of cataract (β=−0.018; P=0.032), diabetic retinopathy (β=−0.016; P=0.019), and age (β=−0.002; P=0.029). These factors explained about 29.3% of the variation in CS. Conclusion Among the factors evaluated, differences in BCVA were associated with the largest predicted differences in CS. This association of CS with visual acuity highlights the important role of visual assessment in type 2 diabetes. PMID:27858934
Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.
1987-07-01
detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.
NASA Technical Reports Server (NTRS)
Bartman, R. K.; Youmans, B. R.; Nerheim, N. M.
1987-01-01
The Jet Propulsion Laboratory is developing a fiber optic rotation sensor (FORS) for use on the Mariner Mark II series of planetary explorer craft and in other space applications. FORS is a closed-loop phase-nulling device and embodies a number of interesting innovations. Chief among these are the incorporation of the device's couplers, phase modulators, and polarizer on a single lithium niobate (LiNbO3) integrated optics chip and a novel means of reading out angular position and rotation rate based on optical beat detection. Various aspects of the FORS design and operation are described and discussed. Particular attention is paid to analyzing errors attributable to polarizer imperfection and the so-called residual Michelson effect.
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
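The following sketch illustrates the kind of Boolean-conjunction call rule described above (fold-induction AND raw fluorescence); it is not the PRIM algorithm itself, and the 2-fold and 200-unit thresholds are illustrative stand-ins for values PRIM would infer by hold-out cross-validation:

```python
# Sketch: a transcript is flagged as up-regulated only if it satisfies BOTH a
# fold-induction threshold and a minimum raw-fluorescence threshold.
import numpy as np

def conjunctive_calls(expr_treated, expr_control, min_fold=2.0, min_signal=200.0):
    expr_treated = np.asarray(expr_treated, float)
    expr_control = np.asarray(expr_control, float)
    fold = expr_treated / np.maximum(expr_control, 1e-9)
    return (fold >= min_fold) & (expr_treated >= min_signal)

calls = conjunctive_calls([150.0, 800.0, 450.0], [100.0, 300.0, 400.0])
print(calls)   # [False  True False]
```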
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, D. K.; Taylor, A. S.; Edwards, T.B.
2005-06-26
The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could allow the transfer of a Slurry Mix Evaporator (SME) batch to the melter that is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable.
This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only three oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned such that no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of the 1428 "acceptable" glasses defining the AGCR includes glass systems of interest to support the accelerated mission.
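A minimal sketch of applying the partition-derived screen quoted above and tallying Type I and Type II misclassifications against measured PCT responses; the glass records are invented placeholders, while the oxide cutoffs and the 10 g/L limit are those stated in the abstract:

```python
# Sketch: classify glasses with the stated oxide cutoffs and count Type I
# (durable rejected) and Type II (nondurable accepted) errors.
glasses = [
    {"Al2O3": 4.1, "CaO": 0.80, "MgO": 1.2, "NL_B": 0.9},    # placeholder records
    {"Al2O3": 3.2, "CaO": 0.70, "MgO": 0.8, "NL_B": 2.5},
    {"Al2O3": 4.5, "CaO": 0.65, "MgO": 4.0, "NL_B": 14.0},
]

def classified_acceptable(g):
    return g["Al2O3"] > 3.75 and g["CaO"] >= 0.616 and g["MgO"] < 3.521

type_i = sum(1 for g in glasses if g["NL_B"] < 10 and not classified_acceptable(g))
type_ii = sum(1 for g in glasses if g["NL_B"] >= 10 and classified_acceptable(g))
print(f"Type I (durable rejected): {type_i}, Type II (nondurable accepted): {type_ii}")
```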
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
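For orientation, the sketch below implements the fixed-sample Holm (1979) step-down procedure that inspires the sequential version described above; it is not the sequential Holm procedure itself:

```python
# Sketch: fixed-sample Holm step-down procedure, controlling the familywise
# Type I error rate at level alpha.
def holm_stepdown(p_values, alpha=0.05):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break                      # stop at the first non-rejection
    return rejected

print(holm_stepdown([0.001, 0.04, 0.03, 0.20]))   # [True, False, False, False]
```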
Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution.
1980-05-27
Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution, by Gary M. Hieftje and Gilbert R. Haugen. Prepared for publication in Analytical and Clinical Chemistry, vol. 3, D. M. Hercules, G. M. Hieftje, L. R. Snyder, and M. A. Evenson, eds., Plenum Press, N.Y., 1978.
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features involved with cerebral aneurysm pathology. ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf
2012-01-01
Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.
Archie to SAM: A Short Operational History of Ground-Based Air Defense
2005-08-01
World War II era. Notes: 1. Edward Westermann, Flak: German Anti-Aircraft Defenses 1914–1945 (Lawrence, Kans.: University Press, 2001), 9. 2. Ibid., 10–16. ... (December 1945), 255–56; and Hogg, Anti-Aircraft, 115. 74. Thomas Edwards and Murray Gelster, “The Causes of Bombing Errors as Determined from Analysis of ... Tactics over North Vietnam” (thesis, Air Command and Staff College, Maxwell AFB, Ala., 1975), 13, AUL. 21. Giles Van Nederveen, “Sparks over Vietnam: ...”
A Regional Analysis of Non-Methane Hydrocarbons And Meteorology of The Rural Southeast United States
1996-01-01
Zt is an ARIMA time series. This is a typical regression model, except that it allows for autocorrelation in the error term Zt. In this work, an ARMA ... [SAS ARIMA procedure output (maximum likelihood estimation) of the 1992 regression model applied to 1993 ozone data] ... at each of the sites, and to show the effect of synoptic meteorology on high ozone by examining NOAA daily weather maps and climatic data
Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett
2014-09-01
Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.
Use of genetically engineered swine to elucidate testis function in the boar
USDA-ARS?s Scientific Manuscript database
The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are abundant within the testis, suggesting a critical role. Gene coding errors prevent their production in many species, but both genes are functional in swine. We have demonstrated that GnRHR-II localizes to porcine Le...
Code of Federal Regulations, 2011 CFR
2011-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2013 CFR
2013-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2014 CFR
2014-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Code of Federal Regulations, 2012 CFR
2012-04-01
... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...
Shape analysis of H II regions - I. Statistical clustering
NASA Astrophysics Data System (ADS)
Campbell-White, Justyn; Froebrich, Dirk; Kume, Alfred
2018-07-01
We present here our shape analysis method for a sample of 76 Galactic H II regions from MAGPIS 1.4 GHz data. The main goal is to determine whether physical properties and initial conditions of massive star cluster formation are linked to the shape of the regions. We outline a systematic procedure for extracting region shapes and perform hierarchical clustering on the shape data. We identified six groups that categorize H II regions by common morphologies. We confirmed the validity of these groupings by bootstrap re-sampling and the ordination technique of multidimensional scaling. We then investigated associations between physical parameters and the assigned groups. Location is mostly independent of group, with a small preference for regions of similar longitudes to share common morphologies. The shapes are homogeneously distributed across Galactocentric distance and latitude. One group contains regions that are all younger than 0.5 Myr and ionized by low- to intermediate-mass sources. Those in another group are all driven by intermediate- to high-mass sources. One group was distinctly separated from the other five and contained regions at the surface brightness detection limit for the survey. We find that our hierarchical procedure is most sensitive to the spatial sampling resolution used, which is determined for each region from its distance. We discuss how these errors can be further quantified and reduced in future work by utilizing synthetic observations from numerical simulations of H II regions. We also outline how this shape analysis has further applications to other diffuse astronomical objects.
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1982-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth for geometry I gears, it is more desirable to shim the gear axially, whereas for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting accuracy and manufacture are most crucial for the gear, and less so for the pinion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, M. P.; Lawler, J. E.; Sneden, C.
2013-10-01
Atomic transition probability measurements for 364 lines of Ti II in the UV through near-IR are reported. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a new echelle spectrometer are combined with published radiative lifetimes to determine these transition probabilities. The new results are in generally good agreement with previously reported FTS measurements. Use of the new echelle spectrometer, independent radiometric calibration methods, and independent data analysis routines enables a reduction of systematic errors and overall improvement in transition probability accuracy over previous measurements. The new Ti II data are applied to high-resolution visible and UV spectra of the Sun and the metal-poor star HD 84937 to derive new, more accurate Ti abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. The Ti abundances derived using Ti II for these two stars match those derived using Ti I and support the relative Ti/Fe abundance ratio versus metallicity seen in previous studies.
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Yahyaoui, Raquel; Espinosa, María Gracia; Gómez, Celia; Dayaldasani, Anita; Rueda, Inmaculada; Roldán, Ana; Ugarte, Magdalena; Lastra, Gonzalo; Pérez, Vidal
2011-11-01
Neonatal onset of carnitine palmitoyltransferase II (CPT II) deficiency is an autosomal recessive, often lethal disorder of the mitochondrial beta-oxidation of long-chain fatty acids. It is a rare multiorgan disease which includes hypoketotic hypoglycemia, severe hepatomuscular symptoms, cardiac abnormalities, seizures and lethargy, as well as dysmorphic features. Until now, only 22 affected families have been described in the literature. An increasing number of mutations are being identified in the CPT2 gene, with a distinct genotype-phenotype correlation in most cases. Herein we report a new case of neonatal CPT II deficiency associated with Dandy-Walker syndrome and sudden death at 13 days of life. CPT II deficiency was suggested by acylcarnitine analysis of dried-blood on filter paper in the expanded newborn screening. Genetic analysis of the CPT2 gene identified the presence of a previously described mutation in homozygosity (c.534_558del25bpinsT). All lethal neonatal CPT II deficiency patients previously described presented severe symptoms during the first week of life, although this was not the case in our patient, who remained stable and without apparent vital risk during the first 11 days of life. The introduction of tandem mass spectrometry to newborn screening has substantially improved our ability to detect metabolic diseases in the newborn period. This case illustrates the value of expanded newborn screening in a neonate with an unusual clinical presentation, combining hydrocephalus and sudden death, that might not commonly lead to the suspicion of an inborn error of metabolism. Copyright © 2011 Elsevier Inc. All rights reserved.
Super-global distortion correction for a rotational C-arm x-ray image intensifier.
Liu, R R; Rudin, S; Bednarek, D R
1999-09-01
Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C-arm. Calibration bead images were acquired with a standard C-arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images with a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the accuracy dependence of the SG model on various factors, such as the single-plane global fitting order, SG order, and angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10 degrees sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated, and found comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of calculational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
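A minimal sketch of the single-plane global correction step described above: a bivariate polynomial in the distorted image coordinates is least-squares fitted to known bead positions. The super-global step, in which the coefficients vary with C-arm angle, is omitted, and the bead data are synthetic:

```python
# Sketch: polynomial distortion correction mapping distorted coordinates (u, v)
# back to true bead positions (x, y) by least squares.
import numpy as np

def monomials(u, v, order):
    return np.column_stack([u**i * v**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_correction(u_dist, v_dist, x_true, y_true, order=5):
    A = monomials(u_dist, v_dist, order)
    cx, *_ = np.linalg.lstsq(A, x_true, rcond=None)
    cy, *_ = np.linalg.lstsq(A, y_true, rcond=None)
    return cx, cy                       # apply with monomials(u, v, order) @ cx, @ cy

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 400); y = rng.uniform(-1, 1, 400)   # true bead positions
r2 = x**2 + y**2
u = x + 0.05 * x * r2                                      # toy pincushion-like distortion
v = y + 0.05 * y * r2
cx, cy = fit_correction(u, v, x, y)
resid = monomials(u, v, 5) @ cx - x
print("rms residual (x):", np.sqrt(np.mean(resid**2)))
```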
Uncertainties of predictions from parton distributions II: theoretical errors
NASA Astrophysics Data System (ADS)
Martin, A. D.; Roberts, R. G.; Stirling, W. J.; Thorne, R. S.
2004-06-01
We study the uncertainties in parton distributions, determined in global fits to deep inelastic and related hard scattering data, due to so-called theoretical errors. Amongst these, we include potential errors due to the change of perturbative order (NLO to NNLO), ln(1/x) and ln(1-x) effects, absorptive corrections and higher-twist contributions. We investigate these uncertainties both by including explicit corrections to our standard global analysis and by examining the sensitivity to changes of the x, Q 2, W 2 cuts on the data that are fitted. In this way we expose those kinematic regions where the conventional DGLAP description is inadequate. As a consequence we obtain a set of NLO, and of NNLO, conservative partons where the data are fully consistent with DGLAP evolution, but over a restricted kinematic domain. We also examine the potential effects of such issues as the choice of input parametrisation, heavy target corrections, assumptions about the strange quark sea and isospin violation. Hence we are able to compare the theoretical errors with those uncertainties due to errors on the experimental measurements, which we studied previously. We use W and Higgs boson production at the Tevatron and the LHC as explicit examples of the uncertainties arising from parton distributions. For many observables the theoretical error is dominant, but for the cross section for W production at the Tevatron both the theoretical and experimental uncertainties are small, and hence the NNLO prediction may serve as a valuable luminosity monitor.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many of such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer simulated experiments were used to examine performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. The sample size 9 and the t-test method with p = 5% ensured error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
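The kind of computer-simulated experiment described above can be sketched as follows; the effect size, normal distribution, and two-sample t-test choice here are illustrative assumptions, not the authors' exact protocol:

```python
# Sketch: Monte Carlo estimate of Type I and Type II error rates of a
# two-sample t-test (p = 5%) as a function of sample size.
import numpy as np
from scipy import stats

def error_rates(n, effect=0.8, reps=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    type_i = type_ii = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)
        b_null = rng.normal(0.0, 1.0, n)       # no true effect
        b_alt = rng.normal(effect, 1.0, n)     # true (weak) effect present
        if stats.ttest_ind(a, b_null).pvalue < alpha:
            type_i += 1
        if stats.ttest_ind(a, b_alt).pvalue >= alpha:
            type_ii += 1
    return type_i / reps, type_ii / reps

for n in (3, 6, 9):
    t1, t2 = error_rates(n)
    print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
```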
Kataoka, Takeshi; Tsutahara, Michihisa
2010-11-01
The accuracy of the lattice Boltzmann method (LBM) for describing the behavior of a gas in the continuum limit is systematically investigated. The asymptotic analysis for small Knudsen numbers is carried out to derive the corresponding fluid-dynamics-type equations, and the errors of the LBM are estimated by comparing them with the correct fluid-dynamics-type equations. We discuss the following three important cases: (I) the Mach number of the flow is much smaller than the Knudsen number, (II) the Mach number is of the same order as the Knudsen number, and (III) the Mach number is finite. From the von Karman relation, the above three cases correspond to the flows of (I) small Reynolds number, (II) finite Reynolds number, and (III) large Reynolds number, respectively. The analysis is made with the information only of the fundamental properties of the lattice Boltzmann models without stepping into their detailed form. The results are therefore applicable to various lattice Boltzmann models that satisfy the fundamental properties used in the analysis.
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone of two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields to which the two models are most sensitive. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; (v) the responses of the models to variations of anthropogenic emissions and boundary conditions show a pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked. Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
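A minimal sketch of the error decomposition referred to above, splitting the mean square error of a modelled series against observations into bias, variance, and covariance components; the series below are synthetic:

```python
# Sketch: MSE = (mbar - obar)^2 + (sm - so)^2 + 2*sm*so*(1 - r), where mbar/obar
# are means, sm/so standard deviations, and r the correlation coefficient.
import numpy as np

def mse_components(model, obs):
    m, o = np.asarray(model, float), np.asarray(obs, float)
    bias2 = (m.mean() - o.mean()) ** 2
    var = (m.std() - o.std()) ** 2
    cov = 2.0 * m.std() * o.std() * (1.0 - np.corrcoef(m, o)[0, 1])
    return bias2, var, cov

rng = np.random.default_rng(3)
obs = 30 + 10 * np.sin(np.linspace(0, 20 * np.pi, 1000)) + rng.normal(0, 3, 1000)
mod = 0.9 * obs + 5 + rng.normal(0, 4, 1000)
b, v, c = mse_components(mod, obs)
print(f"bias^2={b:.2f} variance={v:.2f} covariance={c:.2f} sum={b + v + c:.2f}",
      f"MSE={np.mean((mod - obs) ** 2):.2f}")
```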
Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Ángel; Kohler, Johannes
2014-01-01
Background Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results A total sample of 364 children aged 3–11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research. PMID:24560253
Neuzil, C.E.; Cooley, C.; Silliman, Stephen E.; Bredehoeft, J.D.; Hsieh, P.A.
1981-01-01
In Part I a general analytical solution for the transient pulse test was presented. Part II presents a graphical method for analyzing data from a test to obtain the hydraulic properties of the sample. The general solution depends on both hydraulic conductivity and specific storage and, in theory, analysis of the data can provide values for both of these hydraulic properties. However, in practice, one of two limiting cases may apply in which case it is possible to calculate only hydraulic conductivity or the product of hydraulic conductivity times specific storage. In this paper we examine the conditions when both hydraulic parameters can be calculated. The analyses of data from two tests are presented. In Appendix I the general solution presented in Part I is compared with an earlier analysis, in which compressive storage in the sample is assumed negligible, and the error in calculated hydraulic conductivity due to this simplifying assumption is examined. ?? 1981.
Opioid receptors regulate blocking and overexpectation of fear learning in conditioned suppression.
Arico, Carolyn; McNally, Gavan P
2014-04-01
Endogenous opioids play an important role in prediction error during fear learning. However, the evidence for this role has been obtained almost exclusively using the species-specific defense response of freezing as the measure of learned fear. It is unknown whether opioid receptors regulate predictive fear learning when other measures of learned fear are used. Here, we used conditioned suppression as the measure of learned fear to assess the role of opioid receptors in fear learning. Experiment 1a studied associative blocking of fear learning. Rats in an experimental group received conditioned stimulus A (CSA) + training in Stage I and conditioned stimulus A and B (CSAB) + training in Stage II, whereas rats in a control group received only CSAB + training in Stage II. The prior fear conditioning of CSA blocked fear learning to conditioned stimulus B (CSB) in the experimental group. In Experiment 1b, naloxone (4 mg/kg) administered before Stage II prevented this blocking, thereby enabling normal fear learning to CSB. Experiment 2a studied overexpectation of fear. Rats received CSA + training and CSB + training in Stage I, and then rats in the experimental group received CSAB + training in Stage II whereas control rats did not. The Stage II compound training of CSAB reduced fear to CSA and CSB on test. In Experiment 2b, naloxone (4 mg/kg) administered before Stage II prevented this overexpectation. These results show that opioid receptors regulate Pavlovian fear learning, augmenting learning in response to positive prediction error and impairing learning in response to negative prediction error, when fear is assessed via conditioned suppression. These effects are identical to those observed when freezing is used as the measure of learned fear. These findings show that the role for opioid receptors in regulating fear learning extends across multiple measures of learned fear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix B provides a description of Browns Ferry, Unit 1, plant systems and the failure evaluation of those systems as they apply to accidents at Browns Ferry. Information is presented concerning front-line system fault analysis; support system fault analysis; human error models and probabilities; and generic control circuit analyses.
Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors
Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.
2013-01-01
The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
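A sketch of the kind of Monte Carlo sampling simulation described above: samples are drawn at random from a synthetic lead field containing a hot spot, and the chance of overlooking the hot spot (Type II) or of a composited average overstating the site mean (Type I) is tallied. The field values are invented; only the 400 ppm level matches the abstract:

```python
# Sketch: simulated random sampling of a lead-concentration field with a hot spot.
import numpy as np

rng = np.random.default_rng(4)
field = rng.lognormal(mean=np.log(150), sigma=0.4, size=(30, 15))        # ppm, background
field[:8, :5] = rng.lognormal(mean=np.log(600), sigma=0.3, size=(8, 5))  # hot spot > 400 ppm

def simulate(n_samples, reps=10000):
    miss_hotspot = exceed_mean = 0
    for _ in range(reps):
        rows = rng.integers(0, field.shape[0], n_samples)
        cols = rng.integers(0, field.shape[1], n_samples)
        values = field[rows, cols]
        if values.max() < 400:
            miss_hotspot += 1      # hot spot overlooked (Type II-like error)
        if values.mean() > 400:
            exceed_mean += 1       # composite average flags the whole site (Type I-like error)
    return miss_hotspot / reps, exceed_mean / reps

for n in (4, 8, 16):
    p_miss, p_flag = simulate(n)
    print(f"n={n}: P(miss hot spot)={p_miss:.2f}  P(composite mean > 400)={p_flag:.2f}")
```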
Youth Attitude Tracking Study II Wave 17 -- Fall 1986.
1987-06-01
decision, unless so designated by other official documentation. [Table of contents fragments: Preface; Segmentation Analyses; Methodology of YATS II; Sampling Design Overview; Sampling Design, Estimation Procedures and Estimated Sampling Errors (Appendix A); Data Collection Procedures (Appendix B)]
Code of Federal Regulations, 2013 CFR
2013-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2012 CFR
2012-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2011 CFR
2011-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Code of Federal Regulations, 2010 CFR
2010-07-01
... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...
Zhang, L; Price, R; Aweeka, F; Bellibas, S E; Sheiner, L B
2001-02-01
A small-scale clinical investigation was done to quantify the penetration of stavudine (D4T) into cerebrospinal fluid (CSF). A model-based analysis estimates the steady-state ratio of AUCs of CSF and plasma concentrations (R(AUC)) to be 0.270, and the mean residence time of drug in the CSF to be 7.04 h. The analysis illustrates the advantages of a causal (scientific, predictive) model-based approach to analysis over a noncausal (empirical, descriptive) approach when the data, as here, demonstrate certain problematic features commonly encountered in clinical data, namely (i) few subjects, (ii) sparse sampling, (iii) repeated measures, (iv) imbalance, and (v) individual design variation. These features generally require special attention in data analysis. The causal-model-based analysis deals with features (i) and (ii), both of which reduce efficiency, by combining data from different studies and adding subject-matter prior information. It deals with features (iii)--(v), all of which prevent 'averaging' individual data points directly, first, by adjusting in the model for interindividual data differences due to design differences, secondly, by explicitly differentiating between interpatient, interoccasion, and measurement error variation, and lastly, by defining a scientifically meaningful estimand (R(AUC)) that is independent of design.
Quantum error-correcting code for ternary logic
NASA Astrophysics Data System (ADS)
Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita
2018-05-01
Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2 ×2 ) -dimensional and (3 ×3 ) -dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 ×2 ) -dimensional as well as (3 ×3 ) -dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
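For concreteness, the sketch below writes out the (3 × 3) shift and phase operators on a qutrit and checks numerically that a pairwise bit-swap error is a linear combination of the nine products X^a Z^b; this uses products of shift and phase operators, which is one reading of point (i) above, and the decomposition shown is an illustration rather than the paper's construction:

```python
# Sketch: generalized Pauli operators on a qutrit and decomposition of a
# pairwise bit-swap error (|1> <-> |2>) in the X^a Z^b operator basis.
import numpy as np

w = np.exp(2j * np.pi / 3)
X = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)       # shift error: |j> -> |j+1 mod 3>
Z = np.diag([1, w, w**2]).astype(complex)      # phase error: |j> -> w^j |j>

basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(3) for b in range(3)]

swap_12 = np.array([[1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]], dtype=complex)  # swaps basis states |1> and |2>

# Solve for coefficients c_k with sum_k c_k * basis_k == swap_12.
A = np.column_stack([B.reshape(-1) for B in basis])
coeffs = np.linalg.solve(A, swap_12.reshape(-1))
recon = sum(c * B for c, B in zip(coeffs, basis))
print("max reconstruction error:", np.abs(recon - swap_12).max())
```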
Opioid receptors mediate direct predictive fear learning: evidence from one-trial blocking.
Cole, Sindy; McNally, Gavan P
2007-04-01
Pavlovian fear learning depends on predictive error, so that fear learning occurs when the actual outcome of a conditioning trial exceeds the expected outcome. Previous research has shown that opioid receptors, including mu-opioid receptors in the ventrolateral quadrant of the midbrain periaqueductal gray (vlPAG), mediate such predictive fear learning. Four experiments reported here used a within-subject one-trial blocking design to study whether opioid receptors mediate a direct or indirect action of predictive error on Pavlovian association formation. In Stage I, rats were trained to fear conditioned stimulus (CS) A by pairing it with shock. In Stage II, CSA and CSB were co-presented once and co-terminated with shock. Two novel stimuli, CSC and CSD, were also co-presented once and co-terminated with shock in Stage II. The results showed one-trial blocking of fear learning (Experiment 1) as well as one-trial unblocking of fear learning when Stage II training employed a higher intensity footshock than was used in Stage I (Experiment 2). Systemic administrations of the opioid receptor antagonist naloxone (Experiment 3) or intra-vlPAG administrations of the selective mu-opioid receptor antagonist CTAP (Experiment 4) prior to Stage II training prevented one-trial blocking. These results show that opioid receptors mediate the direct actions of predictive error on Pavlovian association formation.
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
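One simple way to reason about the observation counts quoted above is a plain binomial model of med-pass observations; the sketch below is an illustrative re-derivation, not the authors' exact computation:

```python
# Sketch: probability that the observed med-pass error rate exceeds the 5%
# citation threshold, for a given true error rate p and n observations.
from scipy import stats

def prob_observed_rate_exceeds(p_true, n, threshold=0.05):
    k_threshold = int(threshold * n)               # citation triggered above this count
    return stats.binom.sf(k_threshold, n, p_true)  # P(errors > threshold * n)

for n in (50, 200, 2000):
    print(f"n={n}: P(cite | true rate 5%) = {prob_observed_rate_exceeds(0.05, n):.2f},",
          f"P(cite | true rate 10%) = {prob_observed_rate_exceeds(0.10, n):.2f}")
```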
Microcomputer package for statistical analysis of microbial populations.
Lacroix, J M; Lavoie, M C
1987-11-01
We have developed a Pascal system to compare microbial populations from different ecological sites using microcomputers. The values calculated are: the coverage value and its standard error, the minimum similarity and the geometric similarity between two biological samples, and the Lambda test, which is the ratio of the mean similarity between two subsets to the mean similarity within subsets. This system is written for Apple II, IBM, or compatible computers, but it can run on any computer that uses CP/M if the programs are recompiled for that system.
Bayesian operational modal analysis with asynchronous data, Part II: Posterior uncertainty
NASA Astrophysics Data System (ADS)
Zhu, Yi-Chen; Au, Siu-Kui
2018-01-01
A Bayesian modal identification method has been proposed in the companion paper that allows the most probable values of modal parameters to be determined using asynchronous ambient vibration data. This paper investigates the identification uncertainty of modal parameters in terms of their posterior covariance matrix. Computational issues are addressed. Analytical expressions are derived to allow the posterior covariance matrix to be evaluated accurately and efficiently. Synthetic, laboratory and field data examples are presented to verify the consistency, investigate potential modelling error and demonstrate practical applications.
Kanda, Junya
2016-01-01
The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and have changed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.
The efficacy of three objective systems for identifying beef cuts that can be guaranteed tender.
Wheeler, T L; Vote, D; Leheska, J M; Shackelford, S D; Belk, K E; Wulf, D M; Gwartney, B L; Koohmaraie, M
2002-12-01
The objective of this study was to determine the accuracy of three objective systems (prototype BeefCam, colorimeter, and slice shear force) for identifying guaranteed tender beef. In Phase I, 308 carcasses (105 Top Choice, 101 Low Choice, and 102 Select) from two commercial plants were tested. In Phase II, 400 carcasses (200 rolled USDA Select and 200 rolled USDA Choice) from one commercial plant were tested. The three systems were evaluated based on progressive certification of the longissimus as "tender" in 10% increments (the best 10, 20, 30%, etc., certified as "tender" by each technology; 100% certification would mean no sorting for tenderness). In Phase I, the error (percentage of carcasses certified as tender that had Warner-Bratzler shear force of > or = 5 kg at 14 d postmortem) for 100% certification using all carcasses was 14.1%. All certification levels up to 80% (slice shear force) and up to 70% (colorimeter) had less error (P < 0.05) than 100% certification. Errors in all levels of certification by prototype BeefCam (13.8 to 9.7%) were not different (P > 0.05) from 100% certification. In Phase I, the error for 100% certification for USDA Select carcasses was 30.7%. For Select carcasses, all slice shear force certification levels up to 60% (0 to 14.8%) had less error (P < 0.05) than 100% certification. For Select carcasses, errors in all levels of certification by colorimeter (20.0 to 29.6%) and by BeefCam (27.5 to 31.4%) were not different (P > 0.05) from 100% certification. In Phase II, the error for 100% certification for all carcasses was 9.3%. For all levels of slice shear force certification less than 90% (for all carcasses) or less than 80% (Select carcasses), errors in tenderness certification were less than (P < 0.05) for 100% certification. In Phase II, for all carcasses or Select carcasses, colorimeter and prototype BeefCam certifications did not significantly reduce errors (P > 0.05) compared to 100% certification. Thus, the direct measure of tenderness provided by slice shear force results in more accurate identification of "tender" beef carcasses than either of the indirect technologies, prototype BeefCam, or colorimeter, particularly for USDA Select carcasses. As tested in this study, slice shear force, but not the prototype BeefCam or colorimeter systems, accurately identified "tender" beef.
Beam pointing angle optimization and experiments for vehicle laser Doppler velocimetry
NASA Astrophysics Data System (ADS)
Fan, Zhe; Hu, Shuling; Zhang, Chunxi; Nie, Yanju; Li, Jun
2015-10-01
Beam pointing angle (BPA) is one of the key parameters that affect the operating performance of a laser Doppler velocimetry (LDV) system. By considering both velocity sensitivity and echo power, the optimized BPA of a vehicle LDV is analyzed for the first time. Assuming a mounting error within ±1.0 deg and reflectivity and roughness that vary across scenarios, the optimized BPA lies in the range of 29 to 43 deg. The corresponding velocity sensitivity is in the range of 1.25 to 1.76 MHz/(m/s), and the normalized echo power at the optimized BPA is greater than 53.49% of that at 0 deg. Laboratory experiments on a rotating table were carried out at BPAs of 10, 35, and 66 deg, and the results agree with the theoretical analysis. Further, a vehicle experiment with the optimized BPA of 35 deg was conducted with a microwave radar (accuracy of ±0.5% full-scale output) as reference. The root-mean-square error of the LDV results (0.0202 m/s) is smaller than that of the Microstar II (0.1495 m/s), and the mean velocity discrepancy is 0.032 m/s. This confirms that, at the optimized BPA, both high velocity sensitivity and acceptable echo power can be guaranteed simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.
1992-03-01
The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of the fracture-toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT_NDT values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Precision of spiral-bevel gears
NASA Technical Reports Server (NTRS)
Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.
1983-01-01
The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, it is more desirable to shim the gear axially for geometry I gears and to shim the pinion axially for geometry II gears; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion; the precision of mounting and manufacture is therefore most crucial for the gear and less so for the pinion. Previously announced in STAR as N82-30552.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de
We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density, and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. Our pixel-by-pixel analysis of the measured dust surface density and dust temperature reveals a worrisome error spread, especially close to star formation sites and in low-density regions, where for those "contaminated" pixels the surface densities can be under- or overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background, typical of the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, as χ² values can no longer be used as a quality indicator for a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers of this series (Paper III; Paper IV) we test the reliability of measured star formation rates with direct and indirect techniques.
Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E
2012-12-01
Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages to consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
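As an illustration of the optimal-α idea, the following sketch finds the α that minimizes the average of the Type I and Type II error rates for a two-sample z-test. The standardized effect size of 0.5 and the 30 observations per group are values chosen for the example, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def type_ii_error(alpha, effect_size, n_per_group):
    """Approximate beta for a two-sided, two-sample z-test with a
    standardized effect size and equal group sizes."""
    se = np.sqrt(2.0 / n_per_group)           # SE of the standardized mean difference
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    # Probability of falling below the critical value when the effect is real
    return norm.cdf(z_crit - effect_size / se)

def optimal_alpha(effect_size, n_per_group, w1=1.0, w2=1.0):
    """Alpha minimizing the (weighted) average of Type I and Type II error rates."""
    objective = lambda a: (w1 * a + w2 * type_ii_error(a, effect_size, n_per_group)) / (w1 + w2)
    res = minimize_scalar(objective, bounds=(1e-6, 0.5), method="bounded")
    return res.x, objective(res.x)

alpha_opt, avg_error = optimal_alpha(effect_size=0.5, n_per_group=30)
print(f"optimal alpha = {alpha_opt:.4f}, average error rate = {avg_error:.4f}")
```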
Statistical methods for astronomical data with upper limits. II - Correlation and regression
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Nelson, P. I.
1986-01-01
Statistical methods for calculating correlations and regressions in bivariate censored data where the dependent variable can have upper or lower limits are presented. Cox's regression and the generalization of Kendall's rank correlation coefficient provide significance levels of correlations, and the EM algorithm, under the assumption of normally distributed errors, and its nonparametric analog using the Kaplan-Meier estimator, give estimates for the slope of a regression line. Monte Carlo simulations demonstrate that survival analysis is reliable in determining correlations between luminosities at different bands. Survival analysis is applied to CO emission in infrared galaxies, X-ray emission in radio galaxies, H-alpha emission in cooling cluster cores, and radio emission in Seyfert galaxies.
Lachinova, Svetlana L; Vorontsov, Mikhail A
2008-08-01
We analyze the potential efficiency of laser beam projection onto a remote object in atmosphere with incoherent and coherent phase-locked conformal-beam director systems composed of an adaptive array of fiber collimators. Adaptive optics compensation of turbulence-induced phase aberrations in these systems is performed at each fiber collimator. Our analysis is based on a derived expression for the atmospheric-averaged value of the mean square residual phase error as well as direct numerical simulations. Operation of both conformal-beam projection systems is compared for various adaptive system configurations characterized by the number of fiber collimators, the adaptive compensation resolution, and atmospheric turbulence conditions.
NASA Technical Reports Server (NTRS)
Pastor, P. Rick; Bishop, Robert H.; Striepe, Scott A.
2000-01-01
A first order simulation analysis of the navigation accuracy expected from various Navigation Quick-Look data sets is performed. Here, quick-look navigation data are observations obtained from hypothetical telemetered data transmitted on the fly during a Mars probe's atmospheric entry. In this simulation study, the navigation data consist of 3-axis accelerometer sensor and attitude information data. Three entry vehicle guidance types are studied: I. a maneuvering entry vehicle (as with Mars 01 guidance, where angle of attack and bank angle are controlled); II. a zero angle-of-attack controlled entry vehicle (as with Mars 98); and III. a ballistic, or spin-stabilized, entry vehicle (as with Mars Pathfinder). For each type, sensitivity to progressively undersampled navigation data and to the inclusion of sensor errors is characterized. Attempts to mitigate the reconstructed trajectory errors, including smoothing, interpolation, and changing integrator characteristics, are also studied.
Entangled quantum key distribution over two free-space optical links.
Erven, C; Couteau, C; Laflamme, R; Weihs, G
2008-10-13
We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover a distance of 435 m and 1,325 m respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom written software to perform the complete QKD protocol including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.
Multiple imputation of missing fMRI data in whole brain analysis
Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.
2012-01-01
Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limit the extent of whole brain coverage and interpretations of results. PMID:22500925
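The following is a minimal sketch of the general idea (multiple imputation of missing voxel values across subjects, followed by pooling with Rubin's rules), not the authors' actual pipeline; the toy data, scikit-learn's IterativeImputer, and the choice of 10 imputations are assumptions made for illustration.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy group-level data: rows = subjects, columns = voxels, NaN = missing voxel.
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=(20, 60))
data[rng.random(data.shape) < 0.10] = np.nan        # ~10% missing, arbitrary

m = 10                                              # number of imputations
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, n_nearest_features=10,
                               max_iter=5, random_state=i)
    completed = imputer.fit_transform(data)         # other voxels act as predictors
    estimates.append(completed.mean(axis=0))        # voxelwise group mean
    variances.append(completed.var(axis=0, ddof=1) / completed.shape[0])

estimates, variances = np.array(estimates), np.array(variances)
pooled_mean = estimates.mean(axis=0)
within = variances.mean(axis=0)                     # average within-imputation variance
between = estimates.var(axis=0, ddof=1)             # between-imputation variance
total_var = within + (1 + 1 / m) * between          # Rubin's rules
print((pooled_mean / np.sqrt(total_var))[:5])       # pooled voxelwise t-like statistics
```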
[Modeling of sugar content based on NIRS during cider-making fermentation].
Peng, Bang-Zhu; Yue, Tian-Li; Yuan, Ya-Hong; Gao, Zhen-Peng
2009-03-01
The sugar content and the sample matrix change continuously during cider-making fermentation. In order to measure and monitor sugar content accurately and rapidly, the spectra need to be sorted by fermentation stage. Calibration models were therefore established at different fermentation stages based on near-infrared spectroscopy with an artificial neural network. NIR spectral data were collected in the spectral region of 12 000-4 000 cm(-1) for subsequent analysis. After the different conditions for modeling sugar content were analyzed and discussed, the results indicated that the calibration models developed with straight-line subtraction (SLS) pretreatment in the characteristic absorption ranges of 7 502-6 472.1 cm(-1) at stage I and 6 102-5 446.2 cm(-1) at stage II were the best for sugar content. Comparison of different data pretreatment methods for establishing the calibration model showed that the correlation coefficients of the models (R2) for stages I and II were 98.93% and 99.34%, respectively, and the root mean square errors of cross validation (RMSECV) for stages I and II were 4.42 and 1.21 g x L(-1), respectively. The models were then tested, and the root mean square errors of prediction (RMSEP) were 4.07 g x L(-1) and 1.13 g x L(-1), respectively. These results demonstrate that the established models perform well and can be applied to rapid determination and monitoring of sugar content during cider-making fermentation.
Teaching Statistics with Minitab II.
ERIC Educational Resources Information Center
Ryan, T. A., Jr.; And Others
Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…
Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju
2015-01-01
Abstract. Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
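One common way to compute an effective number of tests from the channel-by-channel correlation matrix is Nyholt's eigenvalue-variance formula; whether this matches the exact Meff definition used in the paper is not certain, so treat the sketch below as a generic illustration (the 44-channel toy data and the Sidak-style corrected threshold are assumptions).

```python
import numpy as np

def effective_multiplicity(data):
    """Nyholt-style effective number of independent tests (Meff) from the
    eigenvalues of the channel correlation matrix.
    `data` has shape (n_timepoints, n_channels)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    m = corr.shape[0]
    return 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)

def sidak_threshold(alpha, meff):
    """Per-channel threshold controlling FWE at `alpha` across Meff effective tests."""
    return 1 - (1 - alpha) ** (1.0 / meff)

# Toy example: 44 correlated channels built from 12 latent sources.
rng = np.random.default_rng(1)
sources = rng.normal(size=(1000, 12))
mixing = rng.normal(size=(12, 44))
channels = sources @ mixing + 0.5 * rng.normal(size=(1000, 44))

meff = effective_multiplicity(channels)
print(f"Meff = {meff:.1f} (vs 44 channels)")
print(f"corrected per-channel alpha = {sidak_threshold(0.05, meff):.5f} "
      f"(Bonferroni would be {0.05 / 44:.5f})")
```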
Sengupta, Auntora; McNally, Gavan P
2014-01-01
Fear learning occurs in response to positive prediction error, when the expected outcome of a conditioning trial exceeds that predicted by the conditioned stimuli present. This role for error in Pavlovian association formation is best exemplified by the phenomenon of associative blocking, whereby prior fear conditioning of conditioned stimulus (CS) A is able to prevent learning to CSB when they are conditioned in compound. The midline and intralaminar thalamic nuclei (MIT) are well-placed to contribute to fear prediction error because they receive extensive projections from the midbrain periaqueductal gray-which has a key role in fear prediction error-and project extensively to prefrontal cortex and amygdala. Here we used an associative blocking design to study the role of MIT in fear learning. In Stage I rats were trained to fear CSA via pairings with shock. In Stage II rats received compound fear conditioning of CSAB paired with shock. On test, rats that received Stage I training expressed less fear to CSB relative to control rats that did not receive this training. Microinjection of bupivacaine into MIT prior to Stage II training had no effect on the expression of fear during Stage II and had no effect on fear learning in controls, but prevented associative blocking and so enabled fear learning to CSB. These results show an important role for MIT in predictive fear learning and are discussed with reference to previous findings implicating the midline and posterior intralaminar thalamus in fear learning and fear responding.
Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data
George, Brandon; Aban, Inmaculada
2014-01-01
Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patients body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
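A separable spatiotemporal covariance is the Kronecker product of a temporal and a spatial correlation matrix, scaled by a variance. The sketch below builds one from an exponential spatial and an AR(1) temporal structure, two of the components named in the abstract; the coordinates, range, and rho values are arbitrary illustrations.

```python
import numpy as np

def exponential_corr(coords, range_param):
    """Spatial correlation: exp(-d / range) for pairwise distances d."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-d / range_param)

def ar1_corr(n_times, rho):
    """Temporal AR(1) correlation: rho^|i-j|."""
    idx = np.arange(n_times)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Separable spatiotemporal covariance: sigma^2 * (temporal (x) spatial).
coords = np.random.default_rng(2).uniform(0, 10, size=(6, 2))   # 6 spatial locations
R_space = exponential_corr(coords, range_param=3.0)
R_time = ar1_corr(4, rho=0.7)                                   # 4 repeated scans
sigma2 = 2.0
cov = sigma2 * np.kron(R_time, R_space)   # (6*4) x (6*4) covariance of the stacked data
print(cov.shape)
```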
A spline-based approach for computing spatial impulse responses.
Ellis, Michael A; Guenther, Drake; Walker, William F
2007-05-01
Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.
Research on Spectroscopy, Opacity, and Atmospheres
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1996-01-01
I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration.
Bulk Fermi Surfaces of the Dirac Type-II Semimetallic Candidates MAl3 (Where M = V, Nb, and Ta)
NASA Astrophysics Data System (ADS)
Chen, K.-W.; Lian, X.; Lai, Y.; Aryal, N.; Chiu, Y.-C.; Lan, W.; Graf, D.; Manousakis, E.; Baumbach, R. E.; Balicas, L.
2018-05-01
We report a de Haas-van Alphen (dHvA) effect study on the Dirac type-II semimetallic candidates MAl3 (where M = V, Nb, and Ta). The angular dependence of their Fermi surface (FS) cross-sectional areas reveals remarkably good agreement with our first-principles calculations. Therefore, dHvA supports the existence of tilted Dirac cones with Dirac type-II nodes located at 100, 230, and 250 meV above the Fermi level εF for VAl3, NbAl3, and TaAl3, respectively, in agreement with the prediction of broken Lorentz invariance in these compounds. However, for all three compounds we find that the cyclotron orbits on their FSs, including an orbit nearly enclosing the Dirac type-II node, yield trivial Berry phases. We explain this via an analysis of the Berry phase in which the position of this orbit, relative to the Dirac node, is adjusted within the error implied by the small disagreement between our calculations and the experiments. We suggest that a very small amount of doping could displace εF to produce topologically nontrivial orbits encircling the Dirac node(s).
46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...
46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...
46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...
46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...
46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
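The general principle described here, choosing the number of observations so that both error probabilities sit at predetermined levels, is the standard power calculation. As a generic, non-Rasch-specific illustration (the one-sample z-test and the effect size are assumptions), a minimal sketch:

```python
import math
from scipy.stats import norm

def sample_size(delta, alpha=0.05, power=0.80, two_sided=True):
    """Observations needed so that a one-sample z-test with standardized effect
    `delta` has Type I probability `alpha` and Type II probability 1 - `power`."""
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) / delta) ** 2)

print(sample_size(delta=0.3))   # about 88 observations for a standardized effect of 0.3
```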
Validation, Edits, and Application Processing Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
The impact of quality assurance procedures on the correct award of Basic Educational Opportunity Grants (BEOGs) for 1979-1980 was assessed, and a model for detecting error-prone applications early in processing was developed. The Bureau of Student Financial Aid introduced new comments into the edit system in 1979 and expanded the pre-established…
Two-stage phase II oncology designs using short-term endpoints for early stopping.
Kunz, Cornelia U; Wason, James Ms; Kieser, Meinhard
2017-08-01
Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with a binary endpoint. As for all designs with an interim analysis, its efficiency strongly depends on the relation between the recruitment rate and the follow-up time required to measure the patients' outcomes. Usually, recruitment is paused once the first-stage sample size has been reached, until the outcomes of all first-stage patients are available. This may lead to a considerable increase in trial length and, with it, to a delay in the drug development process. We propose a design in which an intermediate endpoint is used in the interim analysis to decide whether or not the study is continued with a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size, as well as trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
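For reference, the operating characteristics of the classical two-stage single-arm binary design that this proposal extends can be computed directly from binomial probabilities. The design parameters below (n1 = 10, r1 = 1, n = 29, r = 5 for p0 = 0.10 versus p1 = 0.30) are a textbook Simon-type example, not the angiosarcoma trial's design.

```python
from scipy.stats import binom

def two_stage_characteristics(n1, r1, n, r, p):
    """Operating characteristics of a two-stage single-arm binary design:
    stop after stage 1 if responses <= r1; otherwise enrol up to n patients
    and declare the treatment promising if total responses > r."""
    pet = binom.cdf(r1, n1, p)                      # probability of early termination
    # P(declare promising) = sum over stage-1 outcomes x1 > r1 of
    #   P(X1 = x1) * P(X2 > r - x1), with X2 ~ Bin(n - n1, p)
    reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
                 for x1 in range(r1 + 1, n1 + 1))
    expected_n = n1 + (1 - pet) * (n - n1)
    return pet, reject, expected_n

for p, label in [(0.10, "type I error"), (0.30, "power")]:
    pet, prob, en = two_stage_characteristics(n1=10, r1=1, n=29, r=5, p=p)
    print(f"p = {p}: PET = {pet:.3f}, P(declare promising) = {prob:.3f} ({label}), E[N] = {en:.1f}")
```

The design proposed in the abstract replaces the stage-1 decision endpoint with an intermediate (earlier-available) endpoint, which this sketch does not model.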
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies to date validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image or generating isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image, some local segmentation methods such as Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires extensive postprocessing to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
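Point (i), that denoising before histogram-based threshold detection stabilizes the threshold, can be illustrated with a small scikit-image sketch on a synthetic two-phase image; the image, noise level, and total-variation weight are assumptions for the example, not the authors' data or settings.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.restoration import denoise_tv_chambolle

# Synthetic two-phase "tomogram" slice: a disc of solid in a pore phase, plus noise.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[:256, :256]
phase = ((xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2).astype(float)   # 1 = solid, 0 = pore
image = 0.3 + 0.4 * phase + rng.normal(0, 0.15, phase.shape)          # noisy gray values

# Without denoising, Otsu can be pulled around by noise; total-variation
# denoising first makes the histogram cleanly bimodal.
t_raw = threshold_otsu(image)
denoised = denoise_tv_chambolle(image, weight=0.1)
t_denoised = threshold_otsu(denoised)

seg = denoised > t_denoised
error_fraction = np.mean(seg != phase.astype(bool))
print(f"Otsu threshold: raw = {t_raw:.3f}, denoised = {t_denoised:.3f}")
print(f"misclassified pixels after denoising: {100 * error_fraction:.2f}%")
```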
Khanna, Rajesh; Handa, Aashish; Virk, Rupam Kaur; Ghai, Deepika; Handa, Rajni Sharma; Goel, Asim
2017-01-01
Cleaning and shaping the canal is not an easy goal to achieve, as canal curvature plays a significant role during the instrumentation of curved canals. The present in vivo study was conducted to evaluate procedural errors during the preparation of curved root canals using hand Nitiflex and rotary K3XF instruments. Procedural errors such as ledge formation, instrument separation, and perforation (apical, furcal, strip) were determined in sixty patients, divided into two groups. In Group I, thirty teeth in thirty patients were prepared using the hand Nitiflex system, and in Group II, thirty teeth in thirty patients were prepared using the rotary K3XF system. The evaluation was done clinically as well as radiographically. The results recorded from both groups were compiled and subjected to statistical analysis; the chi-square test was used to compare the procedural errors (instrument separation, ledge formation, and perforation). Both hand Nitiflex and rotary K3XF showed ledge formation and instrument separation, although both errors were less frequent with the rotary K3XF file system than with hand Nitiflex. No perforation was seen in either instrument group. Canal curvature played a significant role during the instrumentation of the curved canals, and procedural errors such as ledge formation and instrument separation were less frequent with the rotary K3XF file system than with hand Nitiflex.
Measures of rowing performance.
Smith, T Brett; Hopkins, Will G
2012-04-01
Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance. Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.
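The typical (standard) error of measurement quoted throughout this review is usually computed from repeated trials as the standard deviation of the between-trial differences divided by the square root of 2, and expressed as a coefficient of variation. A minimal sketch with hypothetical 2000 m ergometer times (the values are invented for illustration):

```python
import numpy as np

def typical_error(trial1, trial2):
    """Within-athlete typical (standard) error of measurement from two repeated
    trials, plus its expression as a coefficient of variation (percent)."""
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diffs = trial2 - trial1
    te = np.std(diffs, ddof=1) / np.sqrt(2)
    cv = 100 * te / np.mean(np.concatenate([trial1, trial2]))
    return te, cv

# Hypothetical 2000 m ergometer times (seconds) for 8 rowers, two trials each.
trial1 = [378.2, 391.5, 402.8, 385.0, 410.3, 395.7, 388.9, 399.4]
trial2 = [379.0, 389.9, 404.1, 384.2, 411.9, 396.5, 387.5, 400.6]
te, cv = typical_error(trial1, trial2)
print(f"typical error = {te:.2f} s ({cv:.2f}%)")
```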
PacRIM II: A review of AirSAR operations and system performance
NASA Technical Reports Server (NTRS)
Moller, D.; Chu, A.; Lou, Y.; Miller, T.; O'Leary, E.
2001-01-01
In this paper we briefly review the AirSAR system, its expected performance, and quality of data obtained during that mission. We discuss the system hardware calibration methodologies, and present quantitative performance values of radar backscatter and interferometric height errors (random and systematic) from PACRIM II calibration data.
Type-II generalized family-wise error rate formulas with application to sample size determination.
Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie
2016-07-20
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
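The quantities involved, the q-generalized family-wise error rate and the r-power, can also be estimated by simple Monte Carlo, which is the comparator strategy mentioned in the abstract. The sketch below uses correlated z-statistics with a plain Bonferroni-type single-step threshold; the effect sizes, correlation, and alpha split are assumptions, and it does not reproduce the paper's exact formulas or the rPowerSampleSize package.

```python
import numpy as np
from scipy.stats import norm

def simulate_rejections(n_sim, n_per_arm, effects, alpha_per_test, rho=0.3, seed=0):
    """Monte Carlo rejection indicators for m correlated z-statistics
    (equicorrelation rho), one per endpoint, single-step threshold."""
    rng = np.random.default_rng(seed)
    m = len(effects)
    cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)
    ncp = np.array(effects) * np.sqrt(n_per_arm / 2.0)     # noncentrality of each z-stat
    z = rng.multivariate_normal(ncp, cov, size=n_sim)
    return z > norm.ppf(1 - alpha_per_test)                # one-sided rejections

effects = [0.0, 0.0, 0.4, 0.4, 0.5]     # two true nulls, three real effects (assumed)
rej = simulate_rejections(50_000, n_per_arm=100, effects=effects, alpha_per_test=0.05 / 5)

false_rej = rej[:, :2].sum(axis=1)
true_rej = rej[:, 2:].sum(axis=1)
q, r = 1, 2
print(f"q-gFWER (>= {q} false rejections): {np.mean(false_rej >= q):.4f}")
print(f"r-power (>= {r} true rejections):  {np.mean(true_rej >= r):.4f}")
```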
Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U
2017-12-01
We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) the percentage of hospital stays that are overnight. These 2 end points are suitable when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. The unequal variances t test (Welch method) and the Fisher exact test both were conservative (ie, type I error rate less than the nominal level). The Wilcoxon rank sum test was included as a comparator; its type I error rates did not differ from the nominal levels of 0.05 or 0.01. The Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with the Fisher exact test versus the unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). The Fisher exact test and the Wilcoxon-Mann-Whitney test had comparable statistical power for differentiating LOS between hospitals. For studies with LOS used as a secondary end point of economic interest, there is currently considerable interest in planning the analysis around the percentage of patients suitable for ambulatory surgery (ie, hospital LOS of 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either the Welch method or the Wilcoxon rank sum test.
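The two competing analyses can be mimicked on simulated data with standard scipy tests: Welch's unequal-variances t test on mean LOS, and Fisher's exact test on the binary "LOS of 0 or 1 midnight" endpoint. The negative-binomial LOS distributions below are invented for illustration, not the resampled surgical data.

```python
import numpy as np
from scipy.stats import ttest_ind, fisher_exact

rng = np.random.default_rng(4)
# Hypothetical skewed LOS (days) at two hospitals; hospital B discharges more
# patients on the day of surgery or after one midnight.
los_a = rng.negative_binomial(2, 0.45, size=150)
los_b = rng.negative_binomial(2, 0.60, size=150)

# (1) Mean LOS: unequal-variances (Welch) t test.
t_stat, p_welch = ttest_ind(los_a, los_b, equal_var=False)

# (2) Binary endpoint: LOS of 0 or 1 midnight ("overnight or less") vs longer.
table = [[np.sum(los_a <= 1), np.sum(los_a > 1)],
         [np.sum(los_b <= 1), np.sum(los_b > 1)]]
odds_ratio, p_fisher = fisher_exact(table)

print(f"Welch t test on mean LOS: p = {p_welch:.4f}")
print(f"Fisher exact on LOS <= 1: p = {p_fisher:.4f}, OR = {odds_ratio:.2f}")
```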
Scan Line Difference Compression Algorithm Simulation Study.
1985-08-01
introduced during the signal transmission process. [Figure A-1: block diagram of the overall data compression process, showing the image source feeding an SLDC encoder (conditioner and error-control encoder), the transmission channel, and an SLDC decoder (error-control decoder and reconstruction).] ... of noise or an effective channel coding subsystem providing the necessary error control.
When Is a Failure to Replicate Not a Type II Error?
ERIC Educational Resources Information Center
Vasconcelos, Marco; Urcuioli, Peter J.; Lionello-DeNolf, Karen M.
2007-01-01
Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our…
Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…
NASA Astrophysics Data System (ADS)
Fellers, R. S.; Braly, L. B.; Saykally, R. J.; Leforestier, C.
1999-04-01
The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data.
NASA Technical Reports Server (NTRS)
Rind, D.; Chiou, E.-W.; Chu, W.; Oltmans, S.; Lerner, J.; Larsen, J.; Mccormick, M. P.; Mcmaster, L.
1993-01-01
Results are presented of water vapor observations in the troposphere and stratosphere performed by the Stratospheric Aerosol and Gas Experiment II solar occultation instrument, and the analysis procedure, the instrument errors, and data characteristics are discussed. The results are compared with correlative in situ measurements and other satellite data. The features of the data set collected between 1985 and 1989 include an increase in middle- and upper-tropospheric water vapor during northern hemisphere summer and autumn; minimum water vapor values of 2.5-3 ppmv in the tropical lower stratosphere; slowly increasing water vapor values with altitude in the stratosphere, reaching 5-6 ppmv or greater near the stratopause; extratropical values with minimum profile amounts occurring above the conventionally defined tropopause; and higher extratropical than tropical water vapor values throughout the stratosphere except in locations of possible polar stratospheric clouds.
NASA Technical Reports Server (NTRS)
Watson, Leela R.
2011-01-01
The 45th Weather Squadron Launch Weather Officers use the 12-km resolution North American Mesoscale model (MesoNAM) forecasts to support launch weather operations. In Phase I, the performance of the model at KSC/CCAFS was measured objectively by conducting a detailed statistical analysis of model output compared to observed values. The objective analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors in the KSC/CCAFS wind tower network. In Phase II, the AMU modified the current tool by adding an additional 15 months of model output to the database and recalculating the verification statistics. The bias, the standard deviation of the bias, the root mean square error (RMSE), and a hypothesis test for bias were calculated to verify the performance of the model. The results indicated that the accuracy decreased as the forecast progressed, there was a diurnal signal in temperature with a cool bias during the late night and a warm bias during the afternoon, and there was a diurnal signal in dew point temperature with a low bias during the afternoon and a high bias during the late night.
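The verification statistics named here (bias, standard deviation of the bias, RMSE, and a hypothesis test for bias) are straightforward to compute from paired forecast-observation samples; a minimal sketch with made-up temperature values, not AMU data:

```python
import numpy as np
from scipy.stats import ttest_1samp

def verification_stats(forecast, observed):
    """Point-verification statistics for one variable and forecast hour:
    bias, standard deviation of the bias, RMSE, and a t-test of zero bias."""
    err = np.asarray(forecast, float) - np.asarray(observed, float)
    bias = err.mean()
    sd_bias = err.std(ddof=1)
    rmse = np.sqrt(np.mean(err ** 2))
    t_stat, p_value = ttest_1samp(err, 0.0)
    return bias, sd_bias, rmse, p_value

# Hypothetical 2 m temperature (deg C): model forecasts vs tower observations.
forecast = [28.4, 29.1, 30.2, 27.8, 26.9, 31.0, 29.5, 28.1]
observed = [27.9, 28.3, 29.1, 27.5, 27.2, 30.1, 28.6, 27.8]
print(verification_stats(forecast, observed))
```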
Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L
2013-01-01
Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity, relying on the concept of Granger causality. Although its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator according to rows, and ii) a squared transformation. In the present paper we investigate the effect of PDC normalization on the performance of the statistical validation process applied to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. The results revealed an effect of PDC normalization only on the percentages of type I and type II errors obtained with the shuffling procedure for the assessment of connectivity patterns. The PDC formulation had no effect on the performance of the validation process carried out with the asymptotic statistics approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistics approach are always lower than those of the shuffling procedure for each type of normalization.
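To make the two normalizations concrete, the sketch below computes (optionally squared) PDC from given MVAR coefficients with either the column-wise normalization of the original definition or the row-wise variant discussed here; the model-fitting step is omitted and the 3-channel coefficients are invented for illustration.

```python
import numpy as np

def pdc(ar_coeffs, freqs, fs=1.0, normalize="column", squared=True):
    """Partial directed coherence from MVAR coefficients.
    ar_coeffs: array (p, n, n), A_r[i, j] = influence of channel j on channel i.
    normalize: 'column' (original definition) or 'row' (modified variant)."""
    p, n, _ = ar_coeffs.shape
    out = np.empty((len(freqs), n, n))
    for fi, f in enumerate(freqs):
        A_f = np.eye(n, dtype=complex)
        for r in range(p):
            A_f -= ar_coeffs[r] * np.exp(-2j * np.pi * f * (r + 1) / fs)
        mag2 = np.abs(A_f) ** 2
        if normalize == "column":
            denom = mag2.sum(axis=0, keepdims=True)   # sum over targets, per source column
        else:
            denom = mag2.sum(axis=1, keepdims=True)   # sum over sources, per target row
        val = mag2 / denom
        out[fi] = val if squared else np.sqrt(val)
    return out

# Toy 3-channel MVAR(1): channel 0 drives channel 1 (coefficients assumed for illustration).
A1 = np.array([[0.5, 0.0, 0.0],
               [0.4, 0.3, 0.0],
               [0.0, 0.0, 0.2]])
freqs = np.linspace(0.01, 0.5, 50)
print(pdc(A1[None, :, :], freqs, normalize="row")[0].round(2))
```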
From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E
NASA Astrophysics Data System (ADS)
Berthier, Denis
In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.
Xu, Yiling; Oh, Heesoo; Lagravère, Manuel O
2017-09-01
The purpose of this study was to locate traditionally used landmarks in two-dimensional (2D) images and newly suggested ones in three-dimensional (3D) images (cone-beam computed tomography scans [CBCTs]) and determine possible relationships between them to categorize patients with Class II-1 malocclusion. CBCTs from 30 patients diagnosed with Class II-1 malocclusion were obtained from the University of Alberta Graduate Orthodontic Program database. The reconstructed images were downloaded and visualized using the software platform AVIZO®. Forty-two landmarks were chosen and their coordinates were then obtained and analyzed using linear and angular measurements. Ten images were analyzed three times to determine the reliability and measurement error of each landmark using the intraclass correlation coefficient (ICC). Descriptive statistics were done using the SPSS statistical package to determine any relationships. ICC values were excellent for all landmarks in all axes, with the highest measurement error of 2 mm in the y-axis for the Gonion Left landmark. Linear and angular measurements were calculated using the coordinates of each landmark. Descriptive statistics showed that the linear and angular measurements used in the 2D images did not correlate well with the 3D images. The lowest standard deviation obtained was 0.6709 for S-GoR/N-Me, with a mean of 0.8016. The highest standard deviation was 20.20704 for ANS-InfraL, with a mean of 41.006. The traditional landmarks used for 2D malocclusion analysis show good reliability when transferred to 3D images. However, they did not reveal specific skeletal or dental patterns when used to analyze 3D images for malocclusion. Thus, another technique should be considered when classifying 3D CBCT images for Class II-1 malocclusion. Copyright © 2017 CEO. Published by Elsevier Masson SAS. All rights reserved.
AMPLISAS: a web server for multilocus genotyping using next-generation amplicon sequencing data.
Sebastian, Alvaro; Herdegen, Magdalena; Migalska, Magdalena; Radwan, Jacek
2016-03-01
Next-generation sequencing (NGS) technologies are revolutionizing the fields of biology and medicine as powerful tools for amplicon sequencing (AS). Using combinations of primers and barcodes, it is possible to sequence targeted genomic regions with deep coverage for hundreds, even thousands, of individuals in a single experiment. This is extremely valuable for the genotyping of gene families in which locus-specific primers are often difficult to design, such as the major histocompatibility complex (MHC). The utility of AS is, however, limited by the high intrinsic sequencing error rates of NGS technologies and other sources of error such as polymerase amplification or chimera formation. Correcting these errors requires extensive bioinformatic post-processing of NGS data. Amplicon Sequence Assignment (AMPLISAS) is a tool that performs analysis of AS results in a simple and efficient way, while offering customization options for advanced users. AMPLISAS is designed as a three-step pipeline consisting of (i) read demultiplexing, (ii) unique sequence clustering and (iii) erroneous sequence filtering. Allele sequences and frequencies are retrieved in Excel spreadsheet format, making them easy to interpret. AMPLISAS performance has been successfully benchmarked against previously published genotyped MHC data sets obtained with various NGS technologies. © 2015 John Wiley & Sons Ltd.
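The three-step pipeline described above can be caricatured in a few lines of self-contained Python. This toy sketch is ours, not the AMPLISAS implementation or its API: reads are demultiplexed by barcode prefix, identical sequences are collapsed, and low-frequency variants (putative sequencing errors) are dropped.

```python
from collections import Counter, defaultdict

def demultiplex(reads, barcodes):
    """Assign each read to the sample whose barcode it starts with, stripping the barcode."""
    by_sample = defaultdict(list)
    for read in reads:
        for sample, bc in barcodes.items():
            if read.startswith(bc):
                by_sample[sample].append(read[len(bc):])
                break
    return by_sample

def cluster_and_filter(seqs, min_freq=0.05):
    """Collapse identical sequences and drop low-frequency (putative erroneous) variants."""
    counts = Counter(seqs)
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items() if c / total >= min_freq}

# toy data: two samples with barcodes ACGT / TGCA, one dominant allele each plus one error read
reads = ["ACGT" + "AAAA", "ACGT" + "AAAA", "ACGT" + "AAAT",
         "TGCA" + "CCCC", "TGCA" + "CCCC", "TGCA" + "CCCC"]
barcodes = {"sample1": "ACGT", "sample2": "TGCA"}
for sample, seqs in demultiplex(reads, barcodes).items():
    print(sample, cluster_and_filter(seqs, min_freq=0.4))
```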
Morphological analysis of red blood cells by polychromatic interference microscopy of thin films
NASA Astrophysics Data System (ADS)
Dyachenko, A. A.; Malinova, L. I.; Ryabukho, V. P.
2016-11-01
Red blood cell (RBC) distribution width (RDW) is a promising hematological parameter with broad applications in clinical practice; in various studies RDW has been shown to be associated with increased risk of heart failure (HF) in the general population, and it predicts mortality and other major adverse events in HF patients. In this report a new method of RDW measurement is presented. It is based on interference color analysis of red blood cells in a blood smear and subsequent measurement of their optical thickness. Descriptive statistics of the RBC optical thickness distribution in a blood smear were used for RDW estimation in every studied sample. The proposed method is considered to avoid type II errors and to minimize the variability of the measured RDW.
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ^disk_*), and disk maximality (F^disk_{*,max} ≡ V^disk_{*,max}/V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
Han, Hyemin; Glenn, Andrea L
2018-06-01
In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections, which allow for more sensitivity, may be beneficial, but also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
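For readers unfamiliar with how these corrections differ in practice, here is a minimal, generic comparison of Bonferroni and Benjamini-Hochberg FDR thresholds applied to a synthetic vector of voxel p-values. It is an illustration of the two procedures only, not the SPM12 implementations evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic p-values: 950 null voxels plus 50 voxels carrying a true effect
p = np.concatenate([rng.uniform(size=950), rng.beta(1, 50, size=50)])
alpha, m = 0.05, p.size

bonferroni = p < alpha / m                      # familywise control: very stringent

order = np.argsort(p)                           # Benjamini-Hochberg step-up procedure
ranked = p[order]
below = ranked <= alpha * np.arange(1, m + 1) / m
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
bh = np.zeros(m, dtype=bool)
bh[order[:k]] = True                            # reject the k smallest p-values

print("Bonferroni rejections:", bonferroni.sum())
print("FDR (BH) rejections:  ", bh.sum())
```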
1980-02-01
... formula for predicting the number of errors during system testing. The equation he presents is B = V / E_CRIT, where B is the number of errors expected, V is the volume, and E_CRIT is "the mean number of elementary discriminations between potential errors in programming" (p. 85). E_CRIT can also be used for the prediction of delivered bugs: B = V / E_CRIT. 2.3 McCabe's Complexity Metric. Thomas McCabe (1976) defined complexity in relation to ...
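As a minimal illustration of the error-prediction formula above (this is our sketch, not code from the report, and the numbers are hypothetical), the expected number of errors is simply the program volume divided by the E_CRIT constant.

```python
def expected_errors(volume, e_crit):
    """Halstead-style estimate: B = V / E_CRIT."""
    return volume / e_crit

# hypothetical values, chosen only for illustration
print(expected_errors(volume=13824, e_crit=3000))  # about 4.6 expected errors
```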
Solid rocket booster performance evaluation model. Volume 2: Users manual
NASA Technical Reports Server (NTRS)
1974-01-01
This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
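A minimal sketch of the ANOVA-based estimation for a balanced one-factor random-effects model is given below; it is a generic illustration, not the authors' code. The between-patient mean square yields the systematic (inter-patient) component and the within-patient mean square the random (intra-patient) component of setup error.

```python
import numpy as np

def variance_components(errors):
    """errors: array of shape (patients, fractions) of setup errors, balanced design."""
    k, n = errors.shape
    grand = errors.mean()
    patient_means = errors.mean(axis=1)
    ms_between = n * ((patient_means - grand) ** 2).sum() / (k - 1)
    ms_within = ((errors - patient_means[:, None]) ** 2).sum() / (k * (n - 1))
    sigma2_random = ms_within                                   # intra-patient (random) variance
    sigma2_systematic = max((ms_between - ms_within) / n, 0.0)  # inter-patient (systematic) variance
    return np.sqrt(sigma2_systematic), np.sqrt(sigma2_random)

rng = np.random.default_rng(1)
offsets = rng.normal(0, 2.0, size=(20, 1))          # per-patient systematic offsets (SD 2 mm)
setup = offsets + rng.normal(0, 3.0, size=(20, 5))  # plus per-fraction random error (SD 3 mm)
print(variance_components(setup))                   # roughly (2.0, 3.0)
```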
Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao
2009-01-01
Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation). PMID:20233650
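To make the type I/type II error and sample-size terminology above concrete, here is a small, generic two-sample calculation (not taken from the article): the per-group sample size needed to detect a standardized effect size d at significance level alpha with power 1 − beta, using the usual normal-approximation formula.

```python
from scipy.stats import norm

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Two-sided two-sample comparison of means, normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

# e.g., a moderate effect (d = 0.5) at alpha = 0.05 and 80% power
print(round(sample_size_per_group(0.5)))  # about 63 subjects per group
```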
Kovilakam, Mahesh; Deshler, Terry
2015-08-26
In situ stratospheric aerosol measurements, from University of Wyoming optical particle counters (OPCs), are compared with Stratospheric Aerosol Gas Experiment (SAGE) II (versions 6.2 and 7.0) and Halogen Occultation Experiment (HALOE) satellite measurements to investigate differences between SAGE II/HALOE-measured extinction and derived surface area and OPC-derived extinction and surface area. Coincident OPC and SAGE II measurements are compared for a volcanic (1991-1996) and nonvolcanic (1997-2005) period. OPC calculated extinctions agree with SAGE II measurements, within instrumental uncertainty, during the volcanic period, but have been a factor of 2 low during the nonvolcanic period. Three systematic errors associated with the OPC measurements, anisokineticity, inlet particle evaporation, and counting efficiency, were investigated. An overestimation of the OPC counting efficiency is found to be the major source of systematic error. With this correction OPC calculated extinction increases by 15-30% (30-50%) for the volcanic (nonvolcanic) measurements. These changes significantly improve the comparison with SAGE II and HALOE extinctions in the nonvolcanic cases but slightly degrade the agreement in the volcanic period. These corrections have impacts on OPC-derived surface area density, exacerbating the poor agreement between OPC and SAGE II (version 6.2) surface areas. Furthermore, this disparity is reconciled with SAGE II version 7.0 surface areas. For both the volcanic and nonvolcanic cases these changes in OPC counting efficiency and in the operational SAGE II surface area algorithm leave the derived surface areas from both platforms in significantly better agreement and within the ± 40% precision of the OPC moment calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, M. P.; Lawler, J. E.; Den Hartog, E. A.
2014-10-01
New experimental absolute atomic transition probabilities are reported for 203 lines of V II. Branching fractions are measured from spectra recorded using a Fourier transform spectrometer and an echelle spectrometer. The branching fractions are normalized with radiative lifetime measurements to determine the new transition probabilities. Generally good agreement is found between this work and previously reported V II transition probabilities. Two spectrometers, independent radiometric calibration methods, and independent data analysis routines enable a reduction in systematic uncertainties, in particular those due to optical depth errors. In addition, new hyperfine structure constants are measured for selected levels by least squares fitting line profiles in the FTS spectra. The new V II data are applied to high resolution visible and UV spectra of the Sun and metal-poor star HD 84937 to determine new, more accurate V abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. Very good agreement is found between our new solar photospheric V abundance, log ε(V) = 3.95 from 15 V II lines, and the solar-system meteoritic value. In HD 84937, we derive [V/H] = –2.08 from 68 lines, leading to a value of [V/Fe] = 0.24.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-15
... Change To Correct a Typographical Error in Exchange Rule 1080 August 9, 2011. Pursuant to Section 19(b)(1... Rule 1080 (Phlx XL and XL II) to correct a typographical error. The text of the proposed rule change is... in subsection (m)(iii)(D) of Rule 1080. On July 13, 2011, the Exchange filed an immediately effective...
Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors
NASA Technical Reports Server (NTRS)
Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.
1991-01-01
The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.
Yoon, Je Moon; Shin, Dong Hoon; Kim, Sang Jin; Ham, Don-Il; Kang, Se Woong; Chang, Yun Sil; Park, Won Soon
2017-01-01
To investigate the anatomical and refractive outcomes in patients with Type 1 retinopathy of prematurity in Zone I. The medical records of 101 eyes of 51 consecutive infants with Type 1 retinopathy of prematurity in Zone I were analyzed. Infants were treated by conventional laser photocoagulation (Group I), combined intravitreal bevacizumab injection and Zone I sparing laser (Group II), or intravitreal bevacizumab with deferred laser treatment (Group III). The proportion of unfavorable anatomical outcomes including retinal fold, disc dragging, retrolental tissue obscuring the view of the posterior pole, retinal detachment, and early refractive errors were compared among the three groups. The mean gestational age at birth and the birth weight of all 51 infants were 24.3 ± 1.1 weeks and 646 ± 143 g, respectively. In Group I, an unfavorable anatomical outcome was observed in 10 of 44 eyes (22.7%). In contrast, in Groups II and III, all eyes showed favorable anatomical outcomes without reactivation or retreatment. The refractive error was less myopic in Group III than in Groups I and II (spherical equivalent of -4.62 ± 4.00 D in Group I, -5.53 ± 2.21 D in Group II, and -1.40 ± 2.19 D in Group III; P < 0.001). In Type 1 retinopathy of prematurity in Zone I, intravitreal bevacizumab with concomitant or deferred laser therapy yielded a better anatomical outcome than conventional laser therapy alone. Moreover, intravitreal bevacizumab with deferred laser treatment resulted in less myopic refractive error.
Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.
George, Brandon; Aban, Inmaculada
2015-01-15
Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
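As an illustration of the separable structure described above (a generic sketch, not the authors' implementation), the following builds a spatiotemporal covariance matrix as the Kronecker product of an exponential spatial correlation and an AR(1) temporal correlation.

```python
import numpy as np

def exponential_corr(dists, range_):
    """Exponential spatial correlation for a matrix of pairwise distances."""
    return np.exp(-dists / range_)

def ar1_corr(n_times, rho):
    """AR(1) temporal correlation for n equally spaced repeated measures."""
    t = np.arange(n_times)
    return rho ** np.abs(t[:, None] - t[None, :])

# 4 spatial locations on a line, 3 repeated measurements per location
coords = np.array([0.0, 1.0, 2.0, 3.0])
dists = np.abs(coords[:, None] - coords[None, :])
R_space = exponential_corr(dists, range_=2.0)
R_time = ar1_corr(3, rho=0.6)

sigma2 = 1.5
# separable model: Cov = sigma^2 * (R_space kron R_time), a (4*3) x (4*3) matrix
cov = sigma2 * np.kron(R_space, R_time)
print(cov.shape)  # (12, 12)
```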
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
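For readers unfamiliar with the temporal-difference formalism invoked here, a minimal, generic TD(0) update is sketched below; it is not a model of these specific conditioning experiments. The prediction error is delta = r + gamma*V(s') - V(s), and it shrinks toward zero once the conditioned stimulus comes to predict the outcome.

```python
def td_update(V, s, s_next, r, alpha=0.2, gamma=0.95):
    """One TD(0) step: return the prediction error delta and update V[s] in place."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

V = {"shock": 1.0}          # terminal aversive outcome with fixed value 1
errors = [td_update(V, "CSA", "shock", r=0.0) for _ in range(30)]
# error is large on early pairings and near zero once CSA predicts the shock
print(round(errors[0], 3), round(errors[-1], 3))
```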
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and rejection of the null hypothesis (H0). The measures (for example, probability of early trial termination, expected sample size, etc.) of the design properties under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with the predictive probability of trial success. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
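To illustrate the frequentist operating characteristics of a two-stage single-arm design with early acceptance and early rejection of H0 (a generic sketch with made-up boundaries, not the authors' Bayesian-frequentist design), the following enumerates the stage-I binomial outcomes and computes the overall type I and type II error rates.

```python
from scipy.stats import binom

def error_rates(n1, n2, a1, r1, r_final, p0, p1):
    """Two-stage single-arm design with early acceptance/rejection of H0.

    Stage I: n1 patients; accept H0 (futility stop) if responses <= a1,
             reject H0 (efficacy stop) if responses >= r1, otherwise continue.
    Stage II: n2 more patients; reject H0 if total responses >= r_final.
    Returns (type I error under p0, type II error under p1).
    """
    def reject_prob(p):
        prob = 1 - binom.cdf(r1 - 1, n1, p)          # early rejection of H0
        for x in range(a1 + 1, r1):                  # continuation region
            needed = r_final - x
            prob += binom.pmf(x, n1, p) * (1 - binom.cdf(needed - 1, n2, p))
        return prob

    return reject_prob(p0), 1 - reject_prob(p1)

# hypothetical design: 15 + 15 patients, null response rate p0 = 0.2, target p1 = 0.4
print(error_rates(n1=15, n2=15, a1=2, r1=9, r_final=10, p0=0.2, p1=0.4))
```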
Fan, Mingyi; Hu, Jiwei; Cao, Rensheng; Xiong, Kangning; Wei, Xionghui
2017-12-21
Reduced graphene oxide-supported nanoscale zero-valent iron (nZVI/rGO) magnetic nanocomposites were prepared and then applied to the removal of Cu(II) from aqueous solutions. Scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and a superconducting quantum interference device magnetometer were used to characterize the nZVI/rGO nanocomposites. In order to reduce the number of experiments and the economic cost, response surface methodology (RSM) combined with artificial intelligence (AI) techniques, such as artificial neural networks (ANN), genetic algorithms (GA) and particle swarm optimization (PSO), was utilized as a major tool for modeling and optimizing the removal process, given the tremendous recent advances in AI that may lead to extensive applications. Based on RSM, ANN-GA and ANN-PSO were employed to model the Cu(II) removal process and optimize the operating parameters, e.g., operating temperature, initial pH, initial concentration and contact time. The ANN-PSO model proved to be an effective tool for modeling and optimizing the Cu(II) removal, with a low absolute error and a high removal efficiency. Furthermore, isotherm, kinetic and thermodynamic studies and XPS analysis were performed to explore the mechanisms of the Cu(II) removal process.
Overview of the LARES Mission: orbit, error analysis and technological aspects
NASA Astrophysics Data System (ADS)
Ciufolini, Ignazio; Paolozzi, Antonio; Paris, Claudio
2012-03-01
LARES (LAser RElativity Satellite) is an Italian Space Agency (ASI) mission to be launched at the beginning of 2012 with the new European launch vehicle, VEGA; the launch opportunity was provided by the European Space Agency (ESA). LARES is a laser-ranged satellite; it will be launched into a nearly circular orbit, with an altitude of 1450 km and an inclination of 69.5 degrees. The goal of the mission is the measurement of the Lense-Thirring effect with an uncertainty of a few percent; such a small uncertainty will be achieved using LARES data together with data from the LAGEOS I (NASA) and LAGEOS II (NASA and ASI) satellites, and because the GRACE mission (NASA-CSR and DLR-GFZ) is improving Earth's gravity field models. This paper describes the LARES experiment along with the principal error sources affecting the measurement. Furthermore, some engineering aspects of the mission, in particular the structure and materials of the satellite (designed in order to minimize the non-gravitational perturbations), are described.
Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.
Liu, Siwei; Molenaar, Peter
2016-01-01
This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
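A minimal, generic phase-resampling (phase-randomization) surrogate for a univariate series is sketched below. The actual method in the article operates on multivariate data and frequency-domain causality measures, but the core idea is the same: keep the amplitude spectrum and scramble the phases, then build a null distribution from the surrogates.

```python
import numpy as np

def phase_surrogate(x, rng):
    """Return a surrogate series with the same amplitude spectrum but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=X.size)
    phases[0] = 0.0                      # keep the mean (zero-frequency term) real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist term real for even-length series
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 500)) + rng.normal(0, 0.3, 500)
surrogates = [phase_surrogate(x, rng) for _ in range(200)]
# a null distribution for any test statistic can now be built from the surrogates
```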
Improvements on the accuracy of beam bugs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Y.J.; Fessenden, T.
1998-08-17
At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position and flaw inspection were sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation and added error in estimating irradiated properties caused by the trend curve correlation error.
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Cardiel, N.; Gorgas, J.; Peletier, R. F.; Vazdekis, A.; Prada, F.
2001-09-01
A new stellar library at the near-IR spectral region, developed for the empirical calibration of the Ca II triplet and stellar population synthesis modelling, is presented. The library covers the range λλ8348-9020 at 1.5-Å (FWHM) spectral resolution, and consists of 706 stars spanning a wide range in atmospheric parameters. We have defined a new set of near-IR indices, CaT*, CaT and PaT, which mostly overcome the limitations of previous definitions, the former being specially suited for the measurement of the Ca II triplet strength corrected for the contamination from Paschen lines. We also present a comparative study of the new and the previous Ca indices, as well as the corresponding transformations between the different systems. A thorough analysis of the sources of index errors and the procedure to calculate them is given. Finally, index and error measurements for the whole stellar library are provided together with the final spectra.
Currency crisis indication by using ensembles of support vector machine classifiers
NASA Astrophysics Data System (ADS)
Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee
2014-07-01
Many methods have been used in the analysis of currency crises, but not all of them provide accurate indications. This paper introduces an ensemble of Support Vector Machine classifiers, which has not previously been applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using the percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. Our analyses show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis across a range of standard measures for comparing classifier performance.
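A minimal sketch of an SVM ensemble of the bagging type is shown below using scikit-learn. This is a generic illustration on synthetic data; the article's 27-country, 12-indicator data set and its exact ensembling scheme are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# synthetic stand-in for 12 macroeconomic indicators and a binary crisis label
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
ensemble = BaggingClassifier(SVC(kernel="rbf", probability=True),
                             n_estimators=25, random_state=0).fit(X_tr, y_tr)

for name, model in [("single SVM", single), ("SVM ensemble", ensemble)]:
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3),
          "AUC:", round(roc_auc_score(y_te, proba), 3))
```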
Oguz, Ensar; Ersoy, Muhammed
2014-01-01
The effects of inlet cobalt(II) concentration (20-60 ppm), feed flow rate (8-19 ml/min) and bed height (5-15 cm), initial solution pH (3-5) and particle size (0.25
Generalized Fourier analyses of the advection-diffusion equation - Part I: one-dimensional domains
NASA Astrophysics Data System (ADS)
Christon, Mark A.; Martinez, Mario J.; Voth, Thomas E.
2004-07-01
This paper presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speed, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis provides an automatic process for separating the discrete advective operator into its symmetric and skew-symmetric components and characterizing the spectral behaviour of each operator. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. It is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, the streamline upwind control-volume method, produce both an artificial diffusivity and a concomitant phase speed adjustment in addition to the usual semi-discrete artifacts observed in the phase speed, group speed and diffusivity. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behaviour in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behaviour. In Part II of this paper, we consider two-dimensional semi-discretizations of the advection-diffusion equation and also assess the effects of grid-induced anisotropy observed in the non-dimensional phase speed, and the discrete and artificial diffusivities. Although this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common analysis framework. Published in 2004 by John Wiley & Sons, Ltd.
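As a small illustration of the kind of Fourier (von Neumann) analysis described above (a generic sketch, not the paper's full multi-methods framework), the following computes the non-dimensional phase speed of the semi-discrete 1D advection equation with second-order central differences, for which the modified wavenumber is k* = sin(kh)/h.

```python
import numpy as np

h = 1.0
kh = np.linspace(1e-6, np.pi, 7)        # sample of resolved non-dimensional wavenumbers
k_star = np.sin(kh) / h                 # modified wavenumber of the central scheme
phase_speed_ratio = k_star * h / kh     # discrete phase speed normalized by the exact one
for a, r in zip(kh, phase_speed_ratio):
    print(f"kh = {a:5.3f}   c_disc/c_exact = {r:5.3f}")   # lag grows toward the grid Nyquist limit
```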
A new color vision test to differentiate congenital and acquired color vision defects.
Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak
2007-07-01
To investigate the efficacy of a novel computer-controlled color test for the differentiation of congenital and acquired color vision deficiency. Observational cross-sectional study. Thirty-one patients with congenital color vision deficiency and 134 patients with acquired color vision deficiency with a Snellen visual acuity better than 20/30 underwent an ophthalmologic examination including the Ishihara color test, Hardy-Rand-Rittler test, Nagel anomaloscopy, and the Seohan computerized hue test between June, 2003, and January, 2004. To investigate the type of color vision defect, a graph of the Seohan computerized hue test was divided into 4 quadrants and error scores in each quadrant were summated. The ratio between the sums of error scores of quadrants I and III (Q1+Q3) and those of quadrants II and IV (Q2+Q4) was calculated. Error scores and ratio in quadrant analysis of the Seohan computerized hue test. The Seohan computerized hue test showed that the sum of Q2+Q4 was significantly higher than the sum of Q1+Q3 in congenital color vision deficiency (P<0.01, paired t test) and that the sum of Q2+Q4 was significantly lower than the sum of Q1+Q3 in acquired color vision deficiency (P<0.01, paired t test). In terms of discriminating congenital and acquired color vision deficiency, the ratio in quadrant analysis had 93.3% sensitivity and 98.5% specificity with a reference value of 1.5 by the Seohan computerized hue test (95% confidence interval). The quadrant analysis and ratio of (Q2+Q4)/(Q1+Q3) using the Seohan computerized hue test effectively differentiated congenital and acquired color vision deficiency.
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II
Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.
2009-01-01
We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy
2017-08-31
Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency
European Scientific Notes, Volume 38, Number 9.
1984-09-01
... dropped automatically from the mailing list. ESN Invites Letters to the Editor: ESN publishes selected letters related to developments and policy in ... A selective summary can be extracted from the Idzikowski-Baddeley literature review; it appears in ... examine trait anxiety or state-trait interactions. ... Mutism and stupor are not seen in fliers as they are in ground soldiers. Reid (1945, WW II): navigation errors by bombers increased over the enemy coast.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2018-05-01
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.
Aliasing errors in measurements of beam position and ellipticity
NASA Astrophysics Data System (ADS)
Ekdahl, Carl
2005-09-01
Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
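The following sketch (ours, not from the paper) illustrates the kind of aliasing the simulations quantify: the wall-current density induced by a displaced beam filament inside a round pipe is sampled at N equally spaced detectors, and the standard first-harmonic position estimate is compared with the true offset for N = 4 and N = 8 detectors.

```python
import numpy as np

def wall_signal(theta, d, R=1.0):
    """Image-current density at azimuth theta for a beam filament displaced by d along x."""
    return (R**2 - d**2) / (R**2 + d**2 - 2 * R * d * np.cos(theta))

def estimated_offset(d, n_detectors, R=1.0):
    """First-harmonic position estimate from n equally spaced wall detectors."""
    theta = 2 * np.pi * np.arange(n_detectors) / n_detectors
    s = wall_signal(theta, d, R)
    return R * np.sum(s * np.cos(theta)) / np.sum(s)

# true offsets in units of the pipe radius: the 4-detector estimate aliases higher
# azimuthal harmonics into the position reading, the 8-detector estimate much less so
for d in (0.1, 0.3, 0.5):
    print(d, round(estimated_offset(d, 4), 4), round(estimated_offset(d, 8), 4))
```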
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students’ competence in solving mathematical problems is still considered weak. This is indicated by the results of international assessments such as TIMSS, and it may be caused by the various types of errors students make. Hence, this study aimed at identifying students’ errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, drawn from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that, in solving Applying-level problems, the students made operational errors. For Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fellers, R.S.; Braly, L.B.; Saykally, R.J.
The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data. © 1999 American Institute of Physics.
Method for transferring data from an unsecured computer to a secured computer
Nilsen, Curt A.
1997-01-01
A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
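A minimal sketch of the double-transmission check described above is given below; it is a generic illustration of the idea, not the patented method or its hardware interface, and the function and variable names are ours.

```python
def transfer_with_verification(receive_once, warn):
    """Receive a payload twice (transmission and retransmission) and warn on mismatch.

    receive_once: callable returning the bytes received from one transmission
    warn: callable invoked when an error is suspected
    """
    first = receive_once()
    second = receive_once()          # the unsecured side retransmits the same data
    if first != second:
        warn("mismatch between transmission and retransmission - possible error")
        return None
    return first

# toy demonstration with an in-memory "link"
payload = b"sensor log 42"
print(transfer_with_verification(lambda: payload, print))
```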
Zhang, Ji-Li; Liu, Bo-Fei; Di, Xue-Ying; Chu, Teng-Fei; Jin, Sen
2012-11-01
Taking fuel moisture content, fuel loading, and fuel bed depth as controlling factors, the fuel beds of Mongolian oak leaves in Maoershan region of Northeast China in field were simulated, and a total of one hundred experimental burnings under no-wind and zero-slope conditions were conducted in laboratory, with the effects of the fuel moisture content, fuel loading, and fuel bed depth on the flame length and its residence time analyzed and the multivariate linear prediction models constructed. The results indicated that fuel moisture content had a significant negative liner correlation with flame length, but less correlation with flame residence time. Both the fuel loading and the fuel bed depth were significantly positively correlated with flame length and its residence time. The interactions of fuel bed depth with fuel moisture content and fuel loading had significant effects on the flame length, while the interactions of fuel moisture content with fuel loading and fuel bed depth affected the flame residence time significantly. The prediction model of flame length had better prediction effect, which could explain 83.3% of variance, with a mean absolute error of 7.8 cm and a mean relative error of 16.2%, while the prediction model of flame residence time was not good enough, which could only explain 54% of variance, with a mean absolute error of 9.2 s and a mean relative error of 18.6%.
Meta-analysis identifies a MECOM gene as a novel predisposing factor of osteoporotic fracture
Hwang, Joo-Yeon; Lee, Seung Hun; Go, Min Jin; Kim, Beom-Jun; Kou, Ikuyo; Ikegawa, Shiro; Guo, Yan; Deng, Hong-Wen; Raychaudhuri, Soumya; Kim, Young Jin; Oh, Ji Hee; Kim, Youngdoe; Moon, Sanghoon; Kim, Dong-Joon; Koo, Heejo; Cha, My-Jung; Lee, Min Hye; Yun, Ji Young; Yoo, Hye-Sook; Kang, Young-Ah; Cho, Eun-Hee; Kim, Sang-Wook; Oh, Ki Won; Kang, Moo II; Son, Ho Young; Kim, Shin-Yoon; Kim, Ghi Su; Han, Bok-Ghee; Cho, Yoon Shin; Cho, Myeong-Chan; Lee, Jong-Young; Koh, Jung-Min
2014-01-01
Background Osteoporotic fracture (OF) as a clinical endpoint is a major complication of osteoporosis. To screen for OF susceptibility genes, we performed a genome-wide association study and carried out de novo replication analysis of an East Asian population. Methods Association was tested using a logistic regression analysis. A meta-analysis was performed on the combined results using effect size and standard errors estimated for each study. Results In a combined meta-analysis of a discovery cohort (288 cases and 1139 controls), three hospital based sets in replication stage I (462 cases and 1745 controls), and an independent ethnic group in replication stage II (369 cases and 560 for controls), we identified a new locus associated with OF (rs784288 in the MECOM gene) that showed genome-wide significance (p=3.59×10−8; OR 1.39). RNA interference revealed that a MECOM knockdown suppresses osteoclastogenesis. Conclusions Our findings provide new insights into the genetic architecture underlying OF in East Asians. PMID:23349225
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
Optimal Geoid Modelling to determine the Mean Ocean Circulation - Project Overview and early Results
NASA Astrophysics Data System (ADS)
Fecher, Thomas; Knudsen, Per; Bettadpur, Srinivas; Gruber, Thomas; Maximenko, Nikolai; Pie, Nadege; Siegismund, Frank; Stammer, Detlef
2017-04-01
The ESA project GOCE-OGMOC (Optimal Geoid Modelling based on GOCE and GRACE third-party mission data and merging with altimetric sea surface data to optimally determine Ocean Circulation) examines the influence of the satellite missions GRACE and, in particular, GOCE in ocean modelling applications. The project goal is an improved processing of satellite and ground data for the preparation and combination of gravity and altimetry data on the way to an optimal MDT solution. Explicitly, the two main objectives are (i) to enhance the GRACE error modelling and optimally combine GOCE and GRACE [and optionally terrestrial/altimetric data] and (ii) to integrate the optimal Earth gravity field model with MSS and drifter information to derive a state-of-the-art MDT including an error assessment. The main work packages referring to (i) are the characterization of geoid model errors, the identification of GRACE error sources, the revision of GRACE error models, the optimization of weighting schemes for the participating data sets and finally the estimation of an optimally combined gravity field model. In this context, the leakage of terrestrial data into coastal regions shall also be investigated, as leakage is not only a problem for the gravity field model itself but is also mirrored in a derived MDT solution. Related to (ii), the tasks are the revision of MSS error covariances, the assessment of the mean circulation using drifter data sets and the computation of an optimal geodetic MDT as well as a so-called state-of-the-art MDT, which combines the geodetic MDT with drifter mean circulation data. This paper presents an overview of the project results, with a focus on the geodetic part.
A Comprehensive Revision of the Logistics Planning Exercise (Log-Plan-X).
1981-06-01
teaching objectives. The difference between conventional teaching methods and simulation rests in the fact that most conventional techniques focus on ... Trial and error in real-life systems can be very costly. Simulations can be an efficient and effective alternative to such trial-and-error methods by allowing ...
Basic Studies on High Pressure Air Plasmas
2006-08-30
Part II Technical Report: Plasma-induced phase shift... two-wavelength heterodyne interferometry applied to atmospheric-pressure air plasma... Plasma-induced phase shift - electron density... a driver, since the error on the frequency leads to an error on the phase shift. Optical elements: protected mirrors must be used to stand
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
Abstract: Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... Figure captions describe distributions of errors (distance) for all five lines of control points; blue circles are errors of the DSM (original data) and red squares are the DTM (bare earth, processed by Intermap), e.g. the distribution of errors for line No. 729.
Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca
2017-06-01
Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely only on brute-force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
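A minimal sketch of the plain multilevel Monte Carlo estimator (the telescoping sum over levels), with a cheap analytical stand-in for the level-dependent model rather than the particle-laden CFD solver of the project; sample counts and the model function are arbitrary choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(u, level):
    """Hypothetical stand-in for a quantity of interest: higher levels
    mean finer resolution, i.e. a smaller discretisation-error term."""
    h = 2.0 ** (-level)                      # mesh size shrinks with level
    return np.sin(u) + h * np.cos(3 * u)     # 'numerical error' decays with level

def mlmc_estimate(samples_per_level):
    """Telescoping MLMC estimator: E[f_L] ~= sum_l mean(f_l - f_{l-1})."""
    total = 0.0
    for level, n in enumerate(samples_per_level):
        u = rng.normal(size=n)               # the uncertain input
        fine = model(u, level)
        coarse = model(u, level - 1) if level > 0 else 0.0
        total += np.mean(fine - coarse)      # correction contributed by this level
    return total

# many cheap coarse samples, few expensive fine ones
print(mlmc_estimate([20000, 2000, 200]))
```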
Zhang, Yingyu; Shao, Wei; Zhang, Mengjia; Li, Hejun; Yin, Shijiu; Xu, Yingjun
2016-07-01
Mining has been historically considered as a naturally high-risk industry worldwide. Deaths caused by coal mine accidents are more than the sum of all other accidents in China. Statistics of 320 coal mine accidents in Shandong province show that all accidents contain indicators of "unsafe conditions of the rules and regulations" with a frequency of 1590, accounting for 74.3% of the total frequency of 2140. "Unsafe behaviors of the operator" is another important contributory factor, which mainly includes "operator error" and "venturing into dangerous places." A systems analysis approach was applied by using structural equation modeling (SEM) to examine the interactions between the contributory factors of coal mine accidents. The analysis of results leads to three conclusions. (i) "Unsafe conditions of the rules and regulations," affect the "unsafe behaviors of the operator," "unsafe conditions of the equipment," and "unsafe conditions of the environment." (ii) The three influencing factors of coal mine accidents (with the frequency of effect relation in descending order) are "lack of safety education and training," "rules and regulations of safety production responsibility," and "rules and regulations of supervision and inspection." (iii) The three influenced factors (with the frequency in descending order) of coal mine accidents are "venturing into dangerous places," "poor workplace environment," and "operator error." Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
De Muer, D.; De Backer, H.; Zawodny, J. M.; Veiga, R. E.
1990-01-01
The ozone profiles obtained from 24 balloon soundings at Uccle (50 deg 48 min N, 4 deg 21 min E) made with electrochemical ozonesondes were used as correlative data for SAGE II ozone profiles retrieved within a distance of at most 600 km from Uccle. The agreement between the two data sets is in general quite good, especially for profiles nearly coincident in time and space, and during periods of little dynamic activity over the area considered. The percent difference between the ozone column density of the mean balloon and SAGE profile is 4.4 percent (-3.3) percent in the altitude region between 10 and 26 km. From a statistical analysis it appears that there is a small but meaningful difference between the mean profiles at the level of the ozone maximum and around the 30-km level. An error analysis of both data sets gives similar results, leading to the conclusion that these differences are instrumentally induced. However, differences between the mean profiles in the lower stratosphere are probably real and due to the high ozone variability in time and space in that altitude region.
Plazas-Nossa, Leonardo; Torres, Andrés
2014-01-01
The objective of this work is to introduce a forecasting method for UV-Vis spectrometry time series that combines principal component analysis (PCA) and discrete Fourier transform (DFT), and to compare the results obtained with those obtained by using DFT. Three time series for three different study sites were used: (i) Salitre wastewater treatment plant (WWTP) in Bogotá; (ii) Gibraltar pumping station in Bogotá; and (iii) San Fernando WWTP in Itagüí (in the south part of Medellín). Each of these time series had an equal number of samples (1051). In general terms, the results obtained are hardly generalizable, as they seem to be highly dependent on specific water system dynamics; however, some trends can be outlined: (i) for UV range, DFT and PCA/DFT forecasting accuracy were almost the same; (ii) for visible range, the PCA/DFT forecasting procedure proposed gives systematically lower forecasting errors and variability than those obtained with the DFT procedure; and (iii) for short forecasting times the PCA/DFT procedure proposed is more suitable than the DFT procedure, according to processing times obtained.
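A compact sketch of the combined PCA/DFT idea, under the assumption that the data form a (time × wavelength) absorbance matrix: project onto a few principal components, extrapolate each score series by keeping its dominant Fourier harmonics, and reconstruct. Function names, component count and harmonic count are choices of this sketch, not the paper's.

```python
import numpy as np

def dft_forecast(x, steps, n_harmonics=10):
    """Extrapolate a 1-D series by keeping its dominant Fourier harmonics."""
    n = len(x)
    t = np.arange(n)
    trend = np.polyfit(t, x, 1)               # remove a linear trend before the DFT
    spec = np.fft.rfft(x - np.polyval(trend, t))
    keep = np.argsort(np.abs(spec))[::-1][:n_harmonics]
    freqs = np.fft.rfftfreq(n)
    t_new = np.arange(n + steps)
    rec = np.zeros(n + steps)
    for k in keep:                             # sum of the retained sinusoids
        amp, phase = np.abs(spec[k]) / n, np.angle(spec[k])
        rec += 2 * amp * np.cos(2 * np.pi * freqs[k] * t_new + phase)
    return rec[n:] + np.polyval(trend, t_new[n:])

def pca_dft_forecast(X, steps, n_components=3):
    """Forecast a (time x wavelength) absorbance matrix via PCA scores + DFT."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    future_scores = np.column_stack(
        [dft_forecast(scores[:, j], steps) for j in range(n_components)])
    return future_scores @ Vt[:n_components] + mean

# demo on a synthetic (time x wavelength) matrix
X = np.sin(np.arange(200)[:, None] * 0.2 + np.linspace(0, 3, 50)[None, :])
print(pca_dft_forecast(X, steps=10).shape)     # -> (10, 50)
```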
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messiaen, A., E-mail: a.messiaen@fz-juelich.de; Ongena, J.; Vervier, M.
2015-12-10
The paper analyses how the phasing of the ITER ICRH 24 strap array evolves from the power sources up to the strap currents of the antenna. The study of the phasing control and coherence through the feeding circuits with prematching and automatic matching and decoupling network is made by modeling starting from the TOPICA matrix of the antenna array for a low coupling plasma profile and for current drive phasing (worst case for mutual coupling effects). The main results of the analysis are: (i) the strap current amplitude is well controlled by the antinode V{sub max} amplitude of the feeding lines, (ii) the best toroidal phasing control is done by the adjustment of the mean phase of V{sub max} of each poloidal strap column, (iii) with a well adjusted system the largest strap current phasing error is ±20°, (iv) the effect on load resilience remains well below the maximum affordable VSWR of the generators, (v) the effect on the radiated power spectrum versus k{sub //} computed by means of the coupling code ANTITER II remains small for the considered cases.
Conroy, M.J.; Samuel, M.D.; White, Joanne C.
1995-01-01
Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject Ho should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of Ho is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
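A back-of-the-envelope power calculation of the kind advocated here, using a normal approximation for a two-sample comparison; the effect size, standard deviation and group size are placeholders, not values from the commentary.

```python
from scipy.stats import norm

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test to detect a true
    mean difference `delta` with common standard deviation `sd`."""
    se = sd * (2.0 / n_per_group) ** 0.5        # SE of the difference in means
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = delta / se                            # standardised (noncentrality) effect
    power = norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)
    return power, 1 - power                     # (power, Type II error rate)

# e.g. detecting a 5-unit difference (SD = 10) with 20 animals per group
print(two_sample_power(delta=5, sd=10, n_per_group=20))
```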
Yetilmezsoy, Kaan; Demirel, Sevgi
2008-05-30
A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ions removal from aqueous solution by Antep pistachio (Pistacia Vera L.) shells based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of batch test results, optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 degrees C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency with a tangent sigmoid transfer function (tansig) at the hidden layer with 11 neurons and a linear transfer function (purelin) at the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of 11 BP algorithms, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was shown to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
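A rough analogue of the reported architecture, assuming scikit-learn is acceptable: a single tanh hidden layer of 11 neurons with a linear output approximates the tansig/purelin design, but scikit-learn has no Levenberg-Marquardt solver, so L-BFGS is used instead. The data below are synthetic placeholders for the 66 batch runs, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X columns: [dosage (g), initial Pb(II) conc. (ppm), pH, temperature (C), contact time (min)]
# y: removal efficiency (%) -- synthetic stand-ins for the 66 batch experiments
rng = np.random.default_rng(1)
X = rng.uniform([0.1, 5, 2, 20, 5], [2.0, 50, 7, 45, 120], size=(66, 5))
y = 60 + 10 * np.tanh(X[:, 0]) - 0.2 * X[:, 1] + rng.normal(0, 2, 66)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```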
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimal weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
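The decoding strategy named here (minimum-weight perfect matching over a syndrome graph) can be illustrated on a much simpler code. The sketch below is only a toy, not the Surface-17 density-matrix setup of the abstract: it decodes i.i.d. bit-flip errors on a ring-shaped repetition code by matching syndrome defects with networkx; the qubit count and error rate are arbitrary.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

def decode_repetition_ring(n=21, p=0.05):
    """Bit-flip errors on a ring of n data qubits; syndrome defects sit where
    neighbouring bits disagree and are paired by minimum-weight matching."""
    errors = rng.random(n) < p
    syndrome = errors ^ np.roll(errors, -1)          # parity checks on the ring
    defects = np.flatnonzero(syndrome)
    g = nx.Graph()
    for i, a in enumerate(defects):                  # complete graph on defects,
        for b in defects[i + 1:]:                    # edge weight = ring distance
            d = abs(int(a) - int(b))
            g.add_edge(int(a), int(b), weight=-min(d, n - d))
    # max-weight matching on negated weights == minimum-weight perfect matching
    matching = nx.max_weight_matching(g, maxcardinality=True)
    correction = np.zeros(n, dtype=bool)
    for a, b in matching:                            # flip qubits along the shorter arc
        lo, hi = sorted((a, b))
        if hi - lo <= n - (hi - lo):
            correction[lo + 1:hi + 1] ^= True
        else:
            correction[hi + 1:] ^= True
            correction[:lo + 1] ^= True
    residual = errors ^ correction
    return not residual.any()                        # True if no logical bit-flip remains

print("logical success rate:",
      np.mean([decode_repetition_ring() for _ in range(500)]))
```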
Homogeneous studies of transiting extrasolar planets - III. Additional planets and stellar models
NASA Astrophysics Data System (ADS)
Southworth, John
2010-11-01
I derive the physical properties of 30 transiting extrasolar planetary systems using a homogeneous analysis of published data. The light curves are modelled with the JKTEBOP code, with special attention paid to the treatment of limb darkening, orbital eccentricity and error analysis. The light from some systems is contaminated by faint nearby stars, which if ignored will systematically bias the results. I show that it is not realistically possible to account for this using only transit light curves: light-curve solutions must be constrained by measurements of the amount of contaminating light. A contamination of 5 per cent is enough to make the measurement of a planetary radius 2 per cent too low. The physical properties of the 30 transiting systems are obtained by interpolating in tabulated predictions from theoretical stellar models to find the best match to the light-curve parameters and the measured stellar velocity amplitude, temperature and metal abundance. Statistical errors are propagated by a perturbation analysis which constructs complete error budgets for each output parameter. These error budgets are used to compile a list of systems which would benefit from additional photometric or spectroscopic measurements. The systematic errors arising from the inclusion of stellar models are assessed by using five independent sets of theoretical predictions for low-mass stars. This model dependence sets a lower limit on the accuracy of measurements of the physical properties of the systems, ranging from 1 per cent for the stellar mass to 0.6 per cent for the mass of the planet and 0.3 per cent for other quantities. The stellar density and the planetary surface gravity and equilibrium temperature are not affected by this model dependence. An external test on these systematic errors is performed by comparing the two discovery papers of the WASP-11/HAT-P-10 system: these two studies differ in their assessment of the ratio of the radii of the components and the effective temperature of the star. I find that the correlations of planetary surface gravity and mass with orbital period have significance levels of only 3.1σ and 2.3σ, respectively. The significance of the latter has not increased with the addition of new data since Paper II. The division of planets into two classes based on Safronov number is increasingly blurred. Most of the objects studied here would benefit from improved photometric and spectroscopic observations, as well as improvements in our understanding of low-mass stars and their effective temperature scale.
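The perturbation-style error budget described here can be sketched generically: shift each measured input by its one-sigma uncertainty, recompute the derived quantity, and add the individual contributions in quadrature. The example below is a hypothetical stand-in (a stellar density from the scaled semimajor axis and the orbital period), not the JKTEBOP or stellar-model machinery of the paper, and the input values are invented.

```python
import numpy as np

def perturbation_error_budget(func, values, sigmas, names):
    """1-sigma error budget: perturb each input in turn, recompute the output,
    and report each contribution plus the quadrature sum."""
    central = func(*values)
    contributions = {}
    for i, (v, s, name) in enumerate(zip(values, sigmas, names)):
        shifted = list(values)
        shifted[i] = v + s
        contributions[name] = func(*shifted) - central
    total = np.sqrt(sum(c ** 2 for c in contributions.values()))
    return central, contributions, total

# hypothetical example: stellar density from the scaled semimajor axis and period,
# rho_star ~ (3*pi/G) * (a/R_star)^3 / P^2   (SI units)
G = 6.674e-11
rho = lambda a_over_r, period_s: 3 * np.pi * a_over_r ** 3 / (G * period_s ** 2)

central, budget, total = perturbation_error_budget(
    rho, values=(8.8, 3.52 * 86400), sigmas=(0.15, 20.0), names=("a/R*", "P [s]"))
print(f"rho* = {central:.1f} kg/m3, total 1-sigma error = {total:.1f}", budget)
```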
Impact of miscommunication in medical dispute cases in Japan.
Aoki, Noriaki; Uda, Kenji; Ohta, Sachiko; Kiuchi, Takahiro; Fukui, Tsuguya
2008-10-01
Medical disputes between physicians and patients can occur in non-negligent circumstances and may even result in compensation. We reviewed medical dispute cases to investigate the impact of miscommunication, especially in non-negligent situations. Systematic review of medical dispute records was done to identify the presence of the adverse events, the type of medical error, preventability, the perception of miscommunication by patients and the amount of compensation. The study was performed in Kyoto, Japan. We analyzed 155 medical dispute cases. We compared (i) frequency of miscommunication cases between negligent and non-negligent cases, and (ii) proportions of positive compensation between non-miscommunication and miscommunication cases stratified according to the existence of negligence. Multivariate logistic analysis was conducted to assess the independent factors related to positive compensation. Approximately 40% of the medical disputes (59/155) did not involve medical error (i.e. non-negligent). In the non-negligent cases, 64.4% (38/59) involved miscommunication, whereas in dispute cases with errors, 21.9% (21/96) involved miscommunications. (P
Uncertainty in predicting soil hydraulic properties at the hillslope scale with indirect methods
NASA Astrophysics Data System (ADS)
Chirico, G. B.; Medina, H.; Romano, N.
2007-02-01
Summary: Several hydrological applications require the characterisation of the soil hydraulic properties at large spatial scales. Pedotransfer functions (PTFs) are being developed as simplified methods to estimate soil hydraulic properties as an alternative to direct measurements, which are unfeasible for most practical circumstances. The objective of this study is to quantify the uncertainty in PTFs spatial predictions at the hillslope scale as related to the sampling density, due to: (i) the error in estimated soil physico-chemical properties and (ii) PTF model error. The analysis is carried out on a 2-km-long experimental hillslope in South Italy. The method adopted is based on a stochastic generation of patterns of soil variables using sequential Gaussian simulation, conditioned to the observed sample data. The following PTFs are applied: Vereecken's PTF [Vereecken, H., Diels, J., van Orshoven, J., Feyen, J., Bouma, J., 1992. Functional evaluation of pedotransfer functions for the estimation of soil hydraulic properties. Soil Sci. Soc. Am. J. 56, 1371-1378] and the HYPRES PTF [Wösten, J.H.M., Lilly, A., Nemes, A., Le Bas, C., 1999. Development and use of a database of hydraulic properties of European soils. Geoderma 90, 169-185]. The two PTFs estimate reliably the soil water retention characteristic even for a relatively coarse sampling resolution, with prediction uncertainties comparable to the uncertainties in direct laboratory or field measurements. The uncertainty of soil water retention prediction due to the model error is as much as or more significant than the uncertainty associated with the estimated input, even for a relatively coarse sampling resolution. Prediction uncertainties are much more important when PTFs are applied to estimate the saturated hydraulic conductivity. In this case model error dominates the overall prediction uncertainties, making the effect of the input error negligible.
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
42 CFR 431.992 - Corrective action plan.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS, designed to reduce improper payments in each program based on its analysis of the error causes in... State must take the following actions: (1) Data analysis. States must conduct data analysis such as reviewing clusters of errors, general error causes, characteristics, and frequency of errors that are...
Ravald, L; Fornstedt, T
2001-01-26
The bi-Langmuir equation has recently been proven essential to describe chiral chromatographic surfaces and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimation of bi-Langmuir isotherm parameters. The ECP calculations were done on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines will be given on surface types to be avoided and on column efficiencies and loading factors required for adequate parameter estimations with ECP.
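A sketch of the parameter-estimation step only: fitting bi-Langmuir parameters to (C, q) points with scipy.optimize.curve_fit. The ECP integration of an overloaded elution profile is not reproduced here, and the isotherm data are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_langmuir(c, qs1, b1, qs2, b2):
    """Two independent site types: q(C) = qs1*b1*C/(1+b1*C) + qs2*b2*C/(1+b2*C)."""
    return qs1 * b1 * c / (1 + b1 * c) + qs2 * b2 * c / (1 + b2 * c)

# synthetic 'ECP-derived' isotherm points (concentration, amount adsorbed)
c = np.linspace(0.01, 10, 60)
q = bi_langmuir(c, 20.0, 2.0, 5.0, 0.1) + np.random.default_rng(3).normal(0, 0.05, c.size)

params, cov = curve_fit(bi_langmuir, c, q, p0=(10, 1, 10, 1), maxfev=20000)
print("fitted (qs1, b1, qs2, b2):", params)
print("1-sigma parameter errors :", np.sqrt(np.diag(cov)))
```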
Medicine and aviation: a review of the comparison.
Randell, R
2003-01-01
This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It compares the notion of error in health care with that in aviation, based on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand the area of medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.
Low-dimensional Representation of Error Covariance
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; Cohn, Stephen E.; Todling, Ricardo; Marchesin, Dan
2000-01-01
Ensemble and reduced-rank approaches to prediction and assimilation rely on low-dimensional approximations of the estimation error covariances. Here stability properties of the forecast/analysis cycle for linear, time-independent systems are used to identify factors that cause the steady-state analysis error covariance to admit a low-dimensional representation. A useful measure of forecast/analysis cycle stability is the bound matrix, a function of the dynamics, observation operator and assimilation method. Upper and lower estimates for the steady-state analysis error covariance matrix eigenvalues are derived from the bound matrix. The estimates generalize to time-dependent systems. If much of the steady-state analysis error variance is due to a few dominant modes, the leading eigenvectors of the bound matrix approximate those of the steady-state analysis error covariance matrix. The analytical results are illustrated in two numerical examples where the Kalman filter is carried to steady state. The first example uses the dynamics of a generalized advection equation exhibiting nonmodal transient growth. Failure to observe growing modes leads to increased steady-state analysis error variances. Leading eigenvectors of the steady-state analysis error covariance matrix are well approximated by leading eigenvectors of the bound matrix. The second example uses the dynamics of a damped baroclinic wave model. The leading eigenvectors of a lowest-order approximation of the bound matrix are shown to approximate well the leading eigenvectors of the steady-state analysis error covariance matrix.
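A minimal sketch of the forecast/analysis covariance recursion discussed here: iterate the Kalman-filter covariance equations for a small linear, time-independent system to steady state and inspect the spectrum of the analysis error covariance. The matrices are toy stand-ins, not the advection or baroclinic-wave models of the paper.

```python
import numpy as np

def steady_state_analysis_covariance(A, H, Q, R, n_iter=2000):
    """Iterate  P_f = A P_a A^T + Q ;  K = P_f H^T (H P_f H^T + R)^-1 ;
    P_a = (I - K H) P_f  until the analysis covariance settles down."""
    n = A.shape[0]
    P_a = np.eye(n)
    for _ in range(n_iter):
        P_f = A @ P_a @ A.T + Q                       # forecast covariance
        K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)   # Kalman gain
        P_a = (np.eye(n) - K @ H) @ P_f               # analysis covariance
    return P_a

# toy non-normal dynamics with a partially observed state
n = 6
A = 0.9 * np.eye(n) + 0.4 * np.eye(n, k=1)            # transient (nonmodal) growth
H = np.eye(3, n)                                      # observe only the first 3 components
Q, R = 0.01 * np.eye(n), 0.1 * np.eye(3)

P_a = steady_state_analysis_covariance(A, H, Q, R)
evals = np.linalg.eigvalsh(P_a)[::-1]
print("leading analysis-error variances:", np.round(evals[:3], 4))
```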
Sun, Xiange; Li, Bowei; Qi, Anjin; Tian, Chongguo; Han, Jinglong; Shi, Yajun; Lin, Bingcheng; Chen, Lingxin
2018-02-01
In this work, a novel rotational microfluidic paper-based device was developed to improve the accuracy and performance of multiplexed colorimetric detection by effectively avoiding the diffusion of colorimetric reagent on the detection zone. The integrated paper-based rotational valves were used to control the connection or disconnection between detection zones and fluid channels. Based on the manipulation of the rotational valves, this rotational paper-based device could prevent the random diffusion of colorimetric reagent and considerably reduce the error of quantitative analysis. The multiplexed colorimetric detection of the heavy metals Ni(II), Cu(II) and Cr(VI) was implemented on the rotational device, and the detection limits were found to be 4.8, 1.6, and 0.18 mg/L, respectively. The developed rotational device showed a great advantage in improving detection accuracy and is expected to be a low-cost, portable analytical platform for on-site detection. Copyright © 2017 Elsevier B.V. All rights reserved.
Error-Analysis for Correctness, Effectiveness, and Composing Procedure.
ERIC Educational Resources Information Center
Ewald, Helen Rothschild
The assumptions underpinning grammatical mistakes can often be detected by looking for patterns of errors in a student's work. Assumptions that negatively influence rhetorical effectiveness can similarly be detected through error analysis. On a smaller scale, error analysis can also reveal assumptions affecting rhetorical choice. Snags in the…
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
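INTLAB itself is a MATLAB toolbox; the sketch below only illustrates the underlying idea with a naive Python interval class (no outward rounding), propagating bounds through a formula and comparing with first-order error propagation.

```python
import math

class Interval:
    """Closed interval [lo, hi] with naive (not outward-rounded) arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert not (o.lo <= 0 <= o.hi), "division by an interval containing zero"
        return self * Interval(1 / o.hi, 1 / o.lo)
    def __repr__(self):
        return f"[{self.lo:.6g}, {self.hi:.6g}]"

# e.g. a resistance R = V / I with V = 10.0 +/- 0.1 V and I = 2.0 +/- 0.05 A
V = Interval(9.9, 10.1)
I = Interval(1.95, 2.05)
print("interval bounds on R:", V / I)

# first-order propagation for comparison: dR = sqrt((dV/I)^2 + (V*dI/I^2)^2)
dR = math.sqrt((0.1 / 2.0) ** 2 + (10.0 * 0.05 / 2.0 ** 2) ** 2)
print("standard propagation: 5 +/-", round(dR, 4))
```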
Bjork, James M; Burroughs, Thomas K; Franke, Laura M; Pickett, Treven C; Johns, Sade E; Moeller, F Gerard; Walker, William C
2016-12-30
In military populations, traumatic brain injury (TBI) also holds potential to increase impulsivity and impair mood regulation due to blast injury effects on ventral frontal cortex - to put military personnel at risk for suicide or substance abuse. We assessed a linkage between depression and impaired behavioral inhibition in 117 blast-exposed service members (SM) and veterans with post-concussion syndrome (PCS), where PCS was defined using a Rivermead Postconcussive Symptom Questionnaire (RPQ) modified to clarify whether each symptom worsened compared to pre-blast. Center for Epidemiological Studies-Depression Scale (CES-D) scores, PTSD Checklist 5 (PCL-5) scores, and RPQ raw subscale scores correlated positively with commission and perseverative errors on the continuous performance test II (CPT-II). In contrast, the number of RPQ symptoms ostensibly worsened post-blast did not correlate with impulsive errors on the CPT-II. These data replicate earlier findings that link increased affective symptomatology to impaired behavior inhibition in military TBI populations, but where additional effects on impulsivity from the blast itself remain equivocal. Published by Elsevier Ireland Ltd.
Resurfacing the Jodrell Bank Mk II radio telescope
NASA Astrophysics Data System (ADS)
Spencer, R. E.; Haggis, J. S.; Morrison, I.; Davis, R. J.; Melling, R. J.
The improvement of the short-wavelength performance of the Jodrell Bank Mk II radio telescope is described. A final rms profile error of 0.6 mm was achieved due to the invention of an inexpensive technique of panel construction and measurement combined with the use of radio-astronomical holographic techniques to measure the telescope under actual operating conditions. Some further improvements to extend the short wavelength performance are suggested.
The impact of response measurement error on the analysis of designed experiments
Anderson-Cook, Christine Michaela; Hamada, Michael Scott; Burr, Thomas Lee
2016-11-01
This study considers the analysis of designed experiments when there is measurement error in the true response or so-called response measurement error. We consider both additive and multiplicative response measurement errors. Through a simulation study, we investigate the impact of ignoring the response measurement error in the analysis, that is, by using a standard analysis based on t-tests. In addition, we examine the role of repeat measurements in improving the quality of estimation and prediction in the presence of response measurement error. We also study a Bayesian approach that accounts for the response measurement error directly through the specification of the model, and allows including additional information about variability in the analysis. We consider the impact on power, prediction, and optimization. Copyright © 2015 John Wiley & Sons, Ltd.
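A minimal simulation in the spirit of the study, assuming a two-group comparison analysed by a standard t-test and a multiplicative lognormal response measurement error; the effect size, noise levels and replicate counts are arbitrary placeholders, not the paper's design.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)

def simulate_power(effect=1.0, sigma=1.0, me_sd=0.0, n=8, reps=2000, alpha=0.05):
    """Power of a two-sample t-test when the recorded response is the true
    response times a multiplicative measurement error exp(N(0, me_sd^2))."""
    hits = 0
    for _ in range(reps):
        y0 = rng.normal(10.0, sigma, n)              # factor at the low level
        y1 = rng.normal(10.0 + effect, sigma, n)     # factor at the high level
        y0 *= np.exp(rng.normal(0, me_sd, n))        # response measurement error
        y1 *= np.exp(rng.normal(0, me_sd, n))
        hits += ttest_ind(y0, y1).pvalue < alpha
    return hits / reps

print("power, no measurement error :", simulate_power(me_sd=0.0))
print("power, 10% measurement error:", simulate_power(me_sd=0.10))
```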
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Li; Gu, Chun; Xu, Lixin, E-mail: xulixin@ustc.edu.cn
The self-adapting algorithms are improved to optimize a beam configuration in the direct drive laser fusion system with solid-state lasers. A configuration of 32 laser beams is proposed for achieving high-uniformity illumination, with a root-mean-square deviation at the 10{sup −4} level. In our optimization, parameters such as beam number, beam arrangement, and beam intensity profile are taken into account. The illumination uniformity robustness versus parameters such as intensity profile deviations, power imbalance, intensity profile noise, the pointing error, and the target position error is also discussed. In this study, the model assumes a solid-sphere illumination, and refraction effects of incident light on the corona are not considered. Our results may have a potential application in the design of direct-drive laser fusion at the Shen Guang-II Upgrading facility (SG-II-U, China).
AC orbit bump method of local impedance measurement
Smaluk, Victor; Yang, Xi; Blednykh, Alexei; ...
2017-08-04
A fast and precise technique of local impedance measurement has been developed and tested at NSLS-II. This technique is based on in-phase sine-wave (AC) excitation of four fast correctors adjacent to the vacuum chamber section, impedance of which is measured. The beam position is measured using synchronous detection. Use of the narrow-band sine-wave signal allows us to improve significantly the accuracy of the orbit bump method. Beam excitation by fast correctors results in elimination of the systematic error caused by hysteresis effect. The systematic error caused by orbit drift is also eliminated because the measured signal is not affected by the orbit motion outside the excitation frequency range. In this article, the measurement technique is described and the result of proof-of-principle experiment carried out at NSLS-II is presented.
Association between presenilin-1 polymorphism and maternal meiosis II errors in Down syndrome.
Petersen, M B; Karadima, G; Samaritaki, M; Avramopoulos, D; Vassilopoulos, D; Mikkelsen, M
2000-08-28
Several lines of evidence suggest a shared genetic susceptibility to Down syndrome (DS) and Alzheimer disease (AD). Rare forms of autosomal-dominant AD are caused by mutations in the APP and presenilin genes (PS-1 and PS-2). The presenilin proteins have been localized to the nuclear membrane, kinetochores, and centrosomes, suggesting a function in chromosome segregation. A genetic association between a polymorphism in intron 8 of the PS-1 gene and AD has been described in some series, and an increased risk of AD has been reported in mothers of DS probands. We therefore studied 168 probands with free trisomy 21 of known parental and meiotic origin and their parents from a population-based material, by analyzing the intron 8 polymorphism in the PS-1 gene. An increased frequency of allele 1 in mothers with a meiosis II error (70.8%) was found compared with mothers with a meiosis I error (52.7%, P < 0.01), with an excess of the 11 genotype in the meiosis II mothers. The frequency of allele 1 in mothers carrying apolipoprotein E (APOE) epsilon4 allele (68.0%) was higher than in mothers without epsilon4 (52.2%, P < 0.01). We hypothesize that the PS-1 intronic polymorphism might be involved in chromosomal nondisjunction through an influence on the expression level of PS-1 or due to linkage disequilibrium with biologically relevant polymorphisms in or outside the PS-1 gene. Copyright 2000 Wiley-Liss, Inc.
Cooper, Arthur J. L.; Kuhara, Tomiko
2013-01-01
Glutamine metabolism is generally regarded as proceeding via glutaminase-catalyzed hydrolysis to glutamate and ammonia, followed by conversion of glutamate to α-ketoglutarate catalyzed by glutamate dehydrogenase or by a glutamate-linked aminotransferase (transaminase). However, another pathway exists for the conversion of glutamine to α-ketoglutarate that is often overlooked, but is widely distributed in nature. This pathway, referred to as the glutaminase II pathway, consists of a glutamine transaminase coupled to ω-amidase. Transamination of glutamine results in formation of the corresponding α-keto acid, namely, α-ketoglutaramate (KGM). KGM is hydrolyzed by ω-amidase to α-ketoglutarate and ammonia. The net glutaminase II reaction is: L-Glutamine + α-keto acid + H2O → α-ketoglutarate + L-amino acid + ammonia. In this mini-review the biochemical importance of the glutaminase II pathway is summarized, with emphasis on the key component KGM. Forty years ago it was noted that the concentration of KGM is increased in the cerebrospinal fluid (CSF) of patients with hepatic encephalopathy (HE) and that the level of KGM in the CSF correlates well with the degree of encephalopathy. In more recent work, we have shown that KGM is markedly elevated in the urine of patients with inborn errors of the urea cycle. It is suggested that KGM may be a useful biomarker for many hyperammonemic diseases including hepatic encephalopathy, inborn errors of the urea cycle, citrin deficiency and lysinuric protein intolerance. PMID:24234505
A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II
NASA Astrophysics Data System (ADS)
Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki
The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on the “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and downsized to less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower, by extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
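The recipe can be sketched for the simplest Poisson-counts case: fix the detection threshold from the Type I error under background only, then scan source intensities until the detection probability reaches the required power (1 − Type II error). The background level, alpha and beta below are illustrative, not values from the paper.

```python
from scipy.stats import poisson

def upper_limit(background, alpha=0.01, beta=0.5):
    """Counts-based upper limit: the smallest source intensity s detected with
    probability >= 1 - beta at a threshold set by the Type I error alpha
    under the background-only hypothesis."""
    # detection threshold: smallest n* with P(N >= n* | background) <= alpha
    n_star = poisson.isf(alpha, background) + 1
    # scan source intensities until the detection probability reaches 1 - beta
    s = 0.0
    while poisson.sf(n_star - 1, background + s) < 1 - beta:
        s += 0.01
    return n_star, s

# e.g. expected background of 3 counts, alpha = 1%, Type II error = 50%
print(upper_limit(background=3.0, alpha=0.01, beta=0.5))
```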
Hwang, Christopher K.; Hubbard, G. Baker; Hutchinson, Amy K.; Lambert, Scott R.
2014-01-01
Purpose To determine the relative effectiveness, major complications, and refractive errors associated with intravitreal bevacizumab (IVB) versus panretinal photocoagulation (PRP) to treat Type 1 retinopathy of prematurity (ROP). Subjects Consecutive infants with Type 1 ROP who received either IVB or PRP between January 2008 and December 2012 and had at least six months of follow-up. Design Retrospective case series. Methods The data from infants treated with either IVB or PRP for Type 1 ROP between January 2008 and December 2012 were recorded from two medical centers in Atlanta, Georgia. Main Outcome Measures Recurrence rate, complication rate, refractive error. Results A total of 54 eyes (28 patients) with Type 1 ROP were evaluated: 22 eyes (11 patients) received IVB, and 32 eyes (17 patients) received PRP. Among the 22 eyes treated with IVB, 16 eyes had Zone I ROP and 6 eyes had posterior Zone II ROP. The number of Zone I and Zone II ROP eyes treated with PRP were 5 and 27 eyes, respectively. Mean gestational age, birth weight, postmenstrual age at the initial treatment, and follow-up period for the infants receiving IVB were 24.2 weeks, 668.1 grams, 35.1 weeks, and 21.7 weeks, respectively, and for the infants receiving PRP were 24.8, 701.4 grams, 36.1 weeks, and 34.5 weeks, respectively. ROP recurred in 3/22 (14%) IVB-treated eyes and in 1/32 (3%) PRP-treated eyes. None of IVB-treated eyes progressed to retinal detachment or developed macular ectopia. Only one eye went on to retinal detachment and five eyes developed macular ectopia in PRP-treated eyes. Mean spherical equivalent and postgestational age at the last refraction for IVB-treated eyes were −2.4 D and 22.4 months, respectively, and for PRP-treated eyes were −5.3 D and 37.1 months, respectively. Mean spherical equivalent for Zone I ROP eyes treated with IVB and PRP were −3.7 D and −10.1 D, respectively, and for Zone II ROP eyes were 0.6 D and −4.7 D, respectively. Conclusions Both IVB and PRP are effective treatment options for Type 1 ROP with low complication rates. Zone I ROP was associated with high minus refractive errors in eyes treated with either IVB or PRP. PMID:25687024
Performance in population models for count data, part II: a new SAEM algorithm
Savic, Radojka; Lavielle, Marc
2009-01-01
Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, respectively, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795
NASA Astrophysics Data System (ADS)
Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.
2017-12-01
Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high quality and high resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL's Multi-Radar, Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, influence of precipitation sub-pixel variability and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non-uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and propagate to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm. Specifically, the contribution of Level II is explicitly characterized and a rigorous characterization is performed to migrate across scales, fully understanding the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.
ERIC Educational Resources Information Center
LeBlanc, Judith M.
A sequence of studies compared two types of discrimination formation: errorless learning and trial-and-error procedures. The subjects were three boys and five girls from a university preschool. The children performed the experimental tasks at a typical match-to-sample apparatus with one sample window above and four match (response) windows below.…
NASA Astrophysics Data System (ADS)
Al-Bagawi, A. H.; Ahmad, W.; Saigl, Z. M.; Alwael, H.; Al-Harbi, E. A.; El-Shahawi, M. S.
2017-12-01
The most common problems in spectrophotometric determination of various complex species originate from background spectral interference. Thus, the present study aimed to overcome the spectral matrix interference for the precise analysis and speciation of mercury(II) in water by dual-wavelength β-correction spectrophotometry using 4-(2-thiazolylazo) resorcinol (TAR) as chromogenic reagent. The principle was based on measuring the correct absorbance for the formed complex of mercury(II) ions with TAR reagent at 547 nm (lambda max). Under optimized conditions, a linear dynamic range of 0.1-2.0 μg mL-1 with a correlation coefficient (R2) of 0.997 was obtained, with a lower limit of detection (LOD) of 0.024 μg mL-1 and limit of quantification (LOQ) of 0.081 μg mL-1. The values of RSD and relative error (RE) obtained for the β-correction method and single-wavelength spectrophotometry were 1.3, 1.32% and 4.7, 5.9%, respectively. The method was validated in tap and sea water in terms of the data obtained from inductively coupled plasma-optical emission spectrometry (ICP-OES) using Student's t and F tests. The developed methodology satisfactorily overcomes the spectral interference in trace determination and speciation of mercury(II) ions in water.
Pecan nutshell as biosorbent to remove Cu(II), Mn(II) and Pb(II) from aqueous solutions.
Vaghetti, Julio C P; Lima, Eder C; Royer, Betina; da Cunha, Bruna M; Cardoso, Natali F; Brasil, Jorge L; Dias, Silvio L P
2009-02-15
In the present study we reported for the first time the feasibility of pecan nutshell (PNS, Carya illinoensis) as an alternative biosorbent to remove Cu(II), Mn(II) and Pb(II) metallic ions from aqueous solutions. The ability of PNS to remove the metallic ions was investigated by using a batch biosorption procedure. The effects of pH and biosorbent dosage on the adsorption capacity of PNS were studied. Four kinetic models were tested, with the adsorption kinetics best fitted by a fractionary-order kinetic model. Besides that, the kinetic data were also fitted to an intra-particle diffusion model, presenting three linear regions, indicating that the kinetics of adsorption should follow multiple sorption rates. The equilibrium data were fitted to the Langmuir, Freundlich, Sips and Redlich-Peterson isotherm models. Taking into account a statistical error function, the data were best fitted to the Sips isotherm model. The maximum biosorption capacities of PNS were 1.35, 1.78 and 0.946 mmol g(-1) for Cu(II), Mn(II) and Pb(II), respectively.
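For the isotherm-fitting step, a sketch of fitting the Sips model (the best-fitting one reported) with scipy and evaluating a simple statistical error function; the equilibrium data here are synthetic placeholders, not the pecan nutshell measurements, and the error function is one common choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

def sips(c, q_max, ks, n):
    """Sips isotherm: q = q_max*(Ks*C)^n / (1 + (Ks*C)^n)."""
    return q_max * (ks * c) ** n / (1 + (ks * c) ** n)

# synthetic equilibrium data (Ce in mmol/L, qe in mmol/g) standing in for Pb(II)
ce = np.linspace(0.05, 5, 25)
qe = sips(ce, 0.95, 1.8, 0.9) + np.random.default_rng(5).normal(0, 0.01, ce.size)

p, _ = curve_fit(sips, ce, qe, p0=(1.0, 1.0, 1.0))
residuals = qe - sips(ce, *p)
ferror = np.sqrt(np.sum(residuals ** 2) / (ce.size - len(p)))   # standard error of estimate
print("q_max, Ks, n:", np.round(p, 3), " error function:", round(float(ferror), 4))
```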
Jakobsen, Janus Christian
2014-10-01
Major depressive disorder afflicts an estimated 17% of individuals during their lifetimes at tremendous suffering and costs. Cognitive therapy and psychodynamic therapy may be effective treatment options for major depressive disorder, but the effects have only had limited assessment in systematic reviews. The two modern forms of psychotherapy, "third wave" cognitive therapy and mentalization-based treatment, have both gained some ground as treatments of psychiatric disorders. No randomised trial has compared the effects of these two interventions for major depressive disorder. We performed two systematic reviews with meta-analyses and trial sequential analyses using The Cochrane Collaboration methodology examining the effects of cognitive therapy and psycho-dynamic therapy for major depressive disorder. We developed a thorough treatment protocol for a randomised trial with low risks of bias (systematic error) and low risks of random errors ("play of chance") examining the effects of third wave' cognitive therapy versus mentalization-based treatment for major depressive disorder. We conducted a randomised trial according to good clinical practice examining the effects of "third wave" cognitive therapy versus mentalisation-based treatment for major depressive disorder. The first systematic review included five randomised trials examining the effects of psychodynamic therapy versus "no intervention' for major depressive disorder. Altogether the five trials randomised 365 participants who in each trial received similar antidepressants as co-interventions. All trials had high risk of bias. Four trials assessed "interpersonal psychotherapy" and one trial "short psychodynamic supportive psychotherapy". Both of these interventions are different forms of psychodynamic therapy. Meta-analysis showed that psychodynamic therapy significantly reduced depressive symptoms on the Hamilton Depression Rating Scale (HDRS) compared with "no intervention" (mean difference -3.01 (95% confidence interval -3.98 to -2.03; p = 0.00001), no significant heterogeneity between trials). Trial sequential analysis confirmed this result. The second systematic review included 12 randomised trials examining the effects of cognitive therapy versus "no intervention" for major depressive disorder. Altogether a total of 669 participants were randomised. All trials had high risk of bias. Meta-analysis showed that cognitive therapy significantly reduced depressive symptoms on the HDRS compared with "no intervention" (four trials; mean difference -3.05 (95% confidence interval, -5.23 to -0.87; p = 0.006)). Trial sequential analysis could not confirm this result. The trial protocol showed that it seemed feasible to conduct a randomised trial with low risks of bias and low risks of random errors examining the effects of "third wave" cognitive therapy versus mentalization-based therapy in a setting in the Danish healthcare system. It turned out to be much more difficult to recruit participants in the randomised trial than expected. We only included about half of the planned participants. The results from the randomised trial showed that participants randomised to "third wave" therapy compared with participants randomised to mentalization-based treatment had borderline significantly lower HDRS scores at 18 weeks in an unadjusted analysis (mean difference -4.14 score; 95% CI -8.30 to 0.03; p = 0.051). In the adjusted analysis, the difference was significant (p = 0.039). 
Five (22.7%) of the participants randomised to "third wave" cognitive therapy had remission at 18 weeks versus none of the participants randomised to mentalization-based treatment (p = 0.049). Sequential analysis showed that these findings could be due to random errors. No significant differences between the two groups were found regarding Beck's Depression Inventory (BDI II), Symptom Checklist 90 Revised (SCL 90-R), and the World Health Organization-Five Well-being Index 1999 (WHO 5). We concluded that cognitive therapy and psychodynamic therapy might be effective interventions for depression measured on the HDRS and BDI, but the review results might be erroneous due to risks of bias and random errors. Furthermore, the effects seem relatively small. The trial protocol showed that it was possible to develop a protocol for a randomised trial examining the effects of "third wave" cognitive therapy versus mentalization-based treatment with low risks of bias and low risks of random errors. Our trial results showed that "third wave" cognitive therapy might be a more effective intervention for depressive symptoms measured on the HDRS compared with mentalization-based treatment. The two interventions did not seem to differ significantly regarding the BDI II, SCL 90-R, and WHO 5. More randomised trials with low risks of bias and low risks of random errors are needed to assess the effects of cognitive therapy, psychodynamic therapy, "third wave" cognitive therapy, and mentalization-based treatment.
Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten
2013-01-01
Background: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) an adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome errors and 74% of detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
Measurement Error and Equating Error in Power Analysis
ERIC Educational Resources Information Center
Phillips, Gary W.; Jiang, Tao
2016-01-01
Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on the information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis could be biased for the true treatment effect due to early censoring, leading to a biased estimate of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution of the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide the framework for a more efficient futility interim analysis in terms of improved accuracy of the estimator, a smaller inflation in type II error and an optimal timing for such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
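For reference, a standard Brownian-motion ("current trend") conditional-power formula evaluated at an interim information fraction; this sketch does not include the auxiliary-variable enhancement the abstract proposes, and the interim numbers are invented.

```python
from scipy.stats import norm

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power under the current trend: P(final Z > z_{1-alpha} | interim),
    treating the test statistic as Brownian motion on the information scale."""
    z_crit = norm.ppf(1 - alpha)
    drift = z_interim / info_frac ** 0.5        # estimated drift per unit information
    b_t = z_interim * info_frac ** 0.5          # Brownian-motion value at the interim
    remaining = 1 - info_frac
    return norm.sf((z_crit - b_t - drift * remaining) / remaining ** 0.5)

# e.g. interim Z = 1.5 observed at 40% of the planned information
print(round(conditional_power(1.5, 0.4), 3))
```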
McLaughlin, Douglas B
2012-01-01
The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists regarding the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
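The decision-error framing can be reproduced by simulation: assume a log-linear total P-chlorophyll a relationship, classify each lake both by the chlorophyll threshold (true status) and by a candidate total P criterion (decision), and tabulate the disagreement rates. Every coefficient and the 30 µg/L criterion below are placeholders for illustration, not USEPA or Florida values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-linear relationship: log10(chl-a) = b0 + b1*log10(TP) + noise
b0, b1, sigma = -0.5, 1.0, 0.25       # illustrative coefficients
chl_threshold = 20.0                  # µg/L chlorophyll a threshold (from the abstract)
tp_criterion = 30.0                   # hypothetical numeric TP criterion, µg/L

tp = rng.lognormal(mean=np.log(25), sigma=0.5, size=100_000)       # lake geomean TP
chl = 10 ** (b0 + b1 * np.log10(tp) + rng.normal(0, sigma, tp.size))

impaired = chl > chl_threshold        # "true" designated-use status
exceeds = tp > tp_criterion           # decision based on the TP criterion alone

type_i_like = np.mean(exceeds & ~impaired)    # criterion exceeded, use actually attained
type_ii_like = np.mean(~exceeds & impaired)   # criterion met, use actually not attained
print(f"Type I-like rate: {type_i_like:.3f}, Type II-like rate: {type_ii_like:.3f}")
```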
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
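To make the ranking step concrete, the sketch below runs a plain (crisp) TOPSIS over a small decision matrix; the study used fuzzy TOPSIS with four criteria, so the scores, weights, and the assumption that all criteria are benefit-type are placeholders rather than the authors' data.

```python
import numpy as np

# Rows: candidate latent-error factors; columns: four evaluation criteria (invented).
scores = np.array([
    [7, 8, 6, 7],   # adverse physiological states
    [6, 7, 7, 8],   # physical/mental limitations
    [8, 6, 7, 6],   # coordination, communication, and planning
], dtype=float)
weights = np.array([0.3, 0.3, 0.2, 0.2])

normed = scores / np.linalg.norm(scores, axis=0)    # vector-normalize each criterion
v = normed * weights                                # weighted normalized matrix
ideal, anti_ideal = v.max(axis=0), v.min(axis=0)    # all criteria treated as benefits
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)            # higher = closer to the ideal
print(closeness)                                    # rank factors by closeness
```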
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Tomographic inversion of satellite photometry. II
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Hays, P. B.; Abreu, V. J.
1985-01-01
A method for combining nadir observations of emission features in the upper atmosphere with the result of a tomographic inversion of limb brightness measurements is presented. Simulated and actual results are provided, and error sensitivity is investigated.
NASA Technical Reports Server (NTRS)
Stewart, R. D.
1979-01-01
The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, as well as the typing and reproduction time involved in preparing cost estimates.
Effects of Correlated Errors on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, Andres; Jacobs, C. S.
2011-01-01
As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.
NASA Astrophysics Data System (ADS)
Vacanti, Giuseppe; Barrière, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Daniëlle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Krumrey, Michael; Landgraf, Boris; Müller, Peter; Schreiber, Swenja; Vervest, Mark; Wille, Eric
2017-09-01
While predictions based on the metrology (local slope errors and detailed geometrical details) play an essential role in controlling the development of the manufacturing processes, X-ray characterization remains the ultimate indication of the actual performance of Silicon Pore Optics (SPO). For this reason SPO stacks and mirror modules are routinely characterized at PTB's X-ray Pencil Beam Facility at BESSY II. Obtaining standard X-ray results quickly, right after the production of X-ray optics, is essential to making sure that X-ray results can inform decisions taken in the lab. We describe the data analysis pipeline in operation at cosine, and how it allows us to go from stack production to full X-ray characterization in 24 hours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xi; Huang, Xiaobiao
2016-05-13
Here, we propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. Finally, the method has been successfully demonstrated on the NSLS-II storage ring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xi; Huang, Xiaobiao
2016-08-01
We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. Furthermore, the fitting results are used for lattice correction. Our method has been successfully demonstrated on the NSLS-II storage ring.
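A minimal sketch of the ICA step described above: synthetic turn-by-turn BPM readings are built as mixtures of two betatron normal modes plus noise, and scikit-learn's FastICA recovers the mode time series and their BPM patterns. The tunes, mixing matrix, and noise level are invented, and the dispersion measurement, lattice fitting, and correction stages are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_turns, n_bpms = 1024, 60
nu_x, nu_y = 0.22, 0.26                      # illustrative betatron tunes

turns = np.arange(n_turns)
modes = np.stack([np.cos(2 * np.pi * nu_x * turns), np.sin(2 * np.pi * nu_x * turns),
                  np.cos(2 * np.pi * nu_y * turns), np.sin(2 * np.pi * nu_y * turns)], axis=1)
mixing = rng.normal(size=(4, n_bpms))        # stands in for beta functions and phase advances
bpm_data = modes @ mixing + 0.01 * rng.normal(size=(n_turns, n_bpms))

ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(bpm_data)        # recovered normal-mode time series
spatial = ica.mixing_                        # BPM-by-mode patterns -> amplitudes and phases
print(sources.shape, spatial.shape)
```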
Opioid errors in inpatient palliative care services: a retrospective review.
Heneka, Nicole; Shaw, Tim; Rowett, Debra; Lapkin, Samuel; Phillips, Jane L
2018-06-01
Opioids are a high-risk medicine frequently used to manage palliative patients' cancer-related pain and other symptoms. Despite the high volume of opioid use in inpatient palliative care services, and the potential for patient harm, few studies have focused on opioid errors in this population. To (i) identify the number of opioid errors reported by inpatient palliative care services, (ii) identify reported opioid error characteristics and (iii) determine the impact of opioid errors on palliative patient outcomes. A 24-month retrospective review of opioid errors reported in three inpatient palliative care services in one Australian state. Of the 55 opioid errors identified, 84% reached the patient. Most errors involved morphine (35%) or hydromorphone (29%). Opioid administration errors accounted for 76% of reported opioid errors, largely due to omitted dose (33%) or wrong dose (24%) errors. Patients were more likely to receive a lower dose of opioid than ordered as a direct result of an opioid error (57%), with errors adversely impacting pain and/or symptom management in 42% of patients. Half (53%) of the affected patients required additional treatment and/or care as a direct consequence of the opioid error. This retrospective review has provided valuable insights into the patterns and impact of opioid errors in inpatient palliative care services. Iatrogenic harm related to opioid underdosing errors contributed to palliative patients' unrelieved pain. Better understanding the factors that contribute to opioid errors and the role of safety culture in the palliative care service context warrants further investigation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
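The propagation-of-error argument can be reproduced schematically: for independent terms, the variance of the water balance is the sum of the term variances, so the term with the largest standard error (body-mass change, per the abstract) dominates the total. The standard errors below are invented placeholders, not Skylab values.

```python
# Illustrative daily standard errors (kg) for water-balance terms; values are assumptions.
term_se = {"intake": 0.05, "urine": 0.04, "fecal": 0.02,
           "evaporative": 0.10, "body-mass change": 0.20}

variances = {k: se ** 2 for k, se in term_se.items()}
total_var = sum(variances.values())          # assumes independent terms (no covariance)
total_se = total_var ** 0.5

for k, var in variances.items():
    print(f"{k:18s} contributes {var / total_var:5.1%} of the balance variance")
print(f"total standard error of the balance ~ {total_se:.3f} kg")
```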
NASA Technical Reports Server (NTRS)
Weinstein, Bernice
1999-01-01
A strategy for detecting control law calculation errors in critical flight control computers during laboratory validation testing is presented. This paper addresses Part I of the detection strategy which involves the use of modeling of the aircraft control laws and the design of Kalman filters to predict the correct control commands. Part II of the strategy which involves the use of the predicted control commands to detect control command errors is presented in the companion paper.
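A toy version of the idea, with a scalar random-walk model standing in for the real aircraft control-law dynamics: a Kalman filter predicts the next command, and an innovation exceeding a few standard deviations flags a possible calculation error. The noise variances, threshold, and injected fault are all hypothetical.

```python
import numpy as np

def kalman_error_monitor(commands, q=1e-2, r=1e-2, threshold=3.0):
    """Flag samples whose Kalman innovation exceeds `threshold` standard deviations."""
    x, p = commands[0], 1.0                 # initial state estimate and variance
    flags = []
    for z in commands[1:]:
        p = p + q                           # predict (random-walk state transition)
        innovation = z - x
        s = p + r                           # innovation variance
        flags.append(abs(innovation) / s ** 0.5 > threshold)
        k = p / s                           # Kalman gain, then measurement update
        x = x + k * innovation
        p = (1.0 - k) * p
    return np.array(flags)

rng = np.random.default_rng(2)
cmds = np.sin(np.linspace(0, 6, 200)) + 0.01 * rng.normal(size=200)
cmds[120] += 1.0                            # injected command-calculation error
print(np.nonzero(kalman_error_monitor(cmds))[0])   # indices near the injected fault
```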
Orbital-free bond breaking via machine learning
NASA Astrophysics Data System (ADS)
Snyder, John C.; Rupp, Matthias; Hansen, Katja; Blooston, Leo; Müller, Klaus-Robert; Burke, Kieron
2013-12-01
Using a one-dimensional model, we explore the ability of machine learning to approximate the non-interacting kinetic energy density functional of diatomics. This nonlinear interpolation between Kohn-Sham reference calculations can (i) accurately dissociate a diatomic, (ii) be systematically improved with increased reference data and (iii) generate accurate self-consistent densities via a projection method that avoids directions with no data. With relatively few densities, the error due to the interpolation is smaller than typical errors in standard exchange-correlation functionals.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Remington, Paul J.; Walker, Bruce E.
2003-01-01
A test program to demonstrate simplification of Active Noise Control (ANC) systems relative to standard techniques was performed on the NASA Glenn Active Noise Control Fan from May through September 2001. The target mode was the m = 2 circumferential mode generated by the rotor-stator interaction at 2BPF. Seven radials (combined inlet and exhaust) were present at this condition. Several different error-sensing strategies were implemented. Integration of the error-sensors with passive treatment was investigated. These were: (i) an in-duct linear axial array, (ii) an in-duct steering array, (iii) a pylon-mounted array, and (iv) a near-field boom array. The effect of incorporating passive treatment was investigated as well as reducing the actuator count. These simplified systems were compared to a fully ANC specified system. Modal data acquired using the Rotating Rake are presented for a range of corrected fan rpm. Simplified control has been demonstrated to be possible but requires a well-known and dominant mode signature. The documented results herein are Part III of a three-part series of reports with the same base title. Parts I and II document the control system and error-sensing design and implementation.
Recombination Line versus Forbidden Line Abundances in Planetary Nebulae
NASA Astrophysics Data System (ADS)
Robertson-Tessi, Mark; Garnett, Donald R.
2005-04-01
Recombination lines (RLs) of C II, N II, and O II in planetary nebulae (PNs) have been found to give abundances that are much larger in some cases than abundances from collisionally excited forbidden lines (CELs). The origins of this abundance discrepancy are highly debated. We present new spectroscopic observations of O II and C II recombination lines for six planetary nebulae. With these data we compare the abundances derived from the optical recombination lines with those determined from collisionally excited lines. Combining our new data with published results on RLs in other PNs, we examine the discrepancy in abundances derived from RLs and CELs. We find that there is a wide range in the measured abundance discrepancy Δ(O+2)=logO+2(RL)-logO+2(CEL), ranging from approximately 0.1 dex (within the 1 σ measurement errors) up to 1.4 dex. This tends to rule out errors in the recombination coefficients as a source of the discrepancy. Most RLs yield similar abundances, with the notable exception of O II multiplet V15, known to arise primarily from dielectronic recombination, which gives abundances averaging 0.6 dex higher than other O II RLs. We compare Δ(O+2) against a variety of physical properties of the PNs to look for clues as to the mechanism responsible for the abundance discrepancy. The strongest correlations are found with the nebula diameter and the Balmer surface brightness; high surface brightness, compact PNs show small values of Δ(O+2), while large low surface brightness PNs show the largest discrepancies. An inverse correlation of Δ(O+2) with nebular density is also seen. A marginal correlation of Δ(O+2) is found with expansion velocity. No correlations are seen with electron temperature, He+2/He+, central star effective temperature and luminosity, stellar mass-loss rate, or nebular morphology. Similar results are found for carbon in comparing C II RL abundances with ultraviolet measurements of C III].
Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, Ronald M.
2015-01-01
The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
Mohan, Kathleen M; Miller, Joseph M; Harvey, Erin M; Gerhart, Kimberly D; Apple, Howard P; Apple, Deborah; Smith, Jordana M; Davis, Amy L; Leonard-Green, Tina; Campus, Irene; Dennis, Leslie K
2016-01-01
To determine if testing binocular visual acuity in infants and toddlers using the Acuity Card Procedure (ACP) with electronic grating stimuli yields clinically useful data. Participants were infants and toddlers ages 5 to 36.7 months referred by pediatricians due to failed automated vision screening. The ACP was used to test binocular grating acuity. Stimuli were presented on the Dobson Card. The Dobson Card consists of a handheld matte-black plexiglass frame with two flush-mounted tablet computers and is similar in size and form to commercially available printed grating acuity testing stimuli (Teller Acuity Cards II [TACII]; Stereo Optical, Inc., Chicago, IL). On each trial, one tablet displayed a square-wave grating and the other displayed a luminance-matched uniform gray patch. Stimuli were roughly equivalent to the stimuli available in the printed TACII stimuli. After acuity testing, each child received a cycloplegic eye examination. Based on cycloplegic retinoscopy, patients were categorized as having high or low refractive error per American Association for Pediatric Ophthalmology and Strabismus vision screening referral criteria. Mean acuities for high and low refractive error groups were compared using analysis of covariance, controlling for age. Mean visual acuity was significantly poorer in children with high refractive error than in those with low refractive error (P = .015). Electronic stimuli presented using the ACP can yield clinically useful measurements of grating acuity in infants and toddlers. Further research is needed to determine the optimal conditions and procedures for obtaining accurate and clinically useful automated measurements of visual acuity in infants and toddlers. Copyright 2016, SLACK Incorporated.
Boquet, Albert J; Cohen, Tara N; Cabrera, Jennifer S; Litzinger, Tracy L; Captain, Kevin A; Fabian, Michael A; Miles, Steven G; Shappell, Scott A
2016-09-09
Historically, health care has relied on error management techniques to measure and reduce the occurrence of adverse events. This study proposes an alternative approach for identifying and analyzing hazardous events. Whereas previous research has concentrated on investigating individual flow disruptions, we maintain the industry should focus on threat windows, or the accumulation of these disruptions. This methodology, driven by the broken windows theory, allows us to identify process inefficiencies before they manifest and open the door for the occurrence of errors and adverse events. Medical human factors researchers observed disruptions during 34 trauma cases at a Level II trauma center. Data were collected during resuscitation and imaging and were classified using a human factors taxonomy: Realizing Improved Patient Care Through Human-Centered Operating Room Design for Threat Window Analysis (RIPCHORD-TWA). Of the 576 total disruptions observed, communication issues were the most prevalent (28%), followed by interruptions and coordination issues (24% each). Issues related to layout (16%), usability (5%), and equipment (2%) comprised the remainder of the observations. Disruptions involving communication issues were more prevalent during resuscitation, whereas coordination problems were observed more frequently during imaging. Rather than solely investigating errors and adverse events, we propose conceptualizing the accumulation of disruptions in terms of threat windows as a means to analyze potential threats to the integrity of the trauma care system. This approach allows for the improved identification of system weaknesses or threats, affording us the ability to address these inefficiencies and intervene before errors and adverse events may occur.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
Rehal, Martin Sundström; Fiskaare, Erik; Tjäder, Inga; Norberg, Åke; Rooyackers, Olav; Wernerman, Jan
2016-03-05
Indirect calorimetry allows the determination of energy expenditure in critically ill patients by measuring oxygen consumption (VO2) and carbon dioxide production (VCO2). Recent studies have demonstrated variable performance of "breath-by-breath" instruments compared to mixing chamber technology. The aim of this study was to validate two modern devices (E-sCOVX and Quark RMR) against a reference method (Deltatrac II). Measurements of VO2/VCO2 with the test and reference devices were performed simultaneously over a 20-min period in mechanically ventilated adult intensive care unit patients. Accuracy and precision of instruments were analyzed using Bland-Altman plots. Forty-eight measurements in 22 patients were included for analysis. Both E-sCOVX and Quark RMR overestimated VO2 and VCO2 compared to Deltatrac II, corresponding to a 10% higher mean resting energy expenditure. Limits of agreement of resting energy expenditure within ± 2 standard deviations were ± 461 kcal/24 h (± 21% expressed as percentage error) for ΔE-sCOVX-Deltatrac II and ± 465 kcal/24 h (± 22%) for ΔQuark RMR-Deltatrac II. Both test devices overestimate VO2 and VCO2 compared to Deltatrac II. The observed limits of agreement are comparable to those commonly accepted in evaluations of circulatory monitoring, and significantly less than results from predictive equations. We hypothesize that the discrepancy between methods is due to patient/ventilator-related factors that affect the synchronization of gas and spirometry waveforms. Australian New Zealand Clinical Trials Registry, Trial ID ACTRN12615000205538. Date registered 3 March 2015.
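The agreement statistics quoted above follow the usual Bland-Altman recipe: bias, 1.96 SD limits of agreement, and the limits expressed as a percentage of the mean. The sketch below computes them on simulated resting energy expenditure values with a built-in 10% overestimation; it is not the trial's data or analysis code.

```python
import numpy as np

def bland_altman(test, reference):
    """Return bias, 95% limits of agreement, and percentage error of `test` vs `reference`."""
    test, reference = np.asarray(test, float), np.asarray(reference, float)
    diff = test - reference
    bias = diff.mean()
    loa_half_width = 1.96 * diff.std(ddof=1)
    pct_error = 100 * loa_half_width / ((test + reference) / 2).mean()
    return bias, (bias - loa_half_width, bias + loa_half_width), pct_error

rng = np.random.default_rng(3)
deltatrac = rng.normal(2000, 300, 48)                  # simulated REE, kcal/24 h
escovx = deltatrac * 1.10 + rng.normal(0, 150, 48)     # ~10% overestimation plus scatter
print(bland_altman(escovx, deltatrac))
```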
Optimal Sampling to Provide User-Specific Climate Information.
NASA Astrophysics Data System (ADS)
Panturat, Suwanna
The types of weather-related problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the northeastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems and also abstracting from data sets that information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. The three process models used in this study to interpret climate variability in terms of the variables of importance to the user comprise: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art nonlinear programming (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include Bayes risk, the chi-squared value, the probability of Type I error (alpha), the probability of Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.
Barnabe, Christian; Buitrago, Rosio; Bremond, Philippe; Aliaga, Claudia; Salas, Renata; Vidaurre, Pablo; Herrera, Claudia; Cerqueira, Frédérique; Bosseno, Marie-France; Waleckx, Etienne; Breniere, Simone Frédérique
2013-01-01
Trypanosoma cruzi, the causative agent of Chagas disease, is subdivided into six discrete typing units (DTUs; TcI–TcVI) of which TcI is ubiquitous and genetically highly variable. While clonality is the dominant mode of propagation, recombinant events play a significant evolutive role. Recently, foci of wild Triatoma infestans have been described in Bolivia, mainly infected by TcI. Hence, for the first time, we evaluated the level of genetic exchange within TcI natural potentially panmictic populations (single DTU, host, area and sampling time). Seventy-nine TcI stocks from wild T. infestans, belonging to six populations, were characterized at eight microsatellite loci. For each population, Hardy-Weinberg equilibrium (HWE), linkage disequilibrium (LD), and presence of repeated multilocus genotypes (MLG) were analyzed by using a total of seven statistics, to test the null hypothesis of panmixia (H0). For three populations, none of the seven statistics allowed us to reject H0; for another, the small sample size did not allow us to conclude; and for the two others, the tests gave contradictory results. Interestingly, apparent panmixia was only observed in very restricted areas, and was not observed when grouping populations distant by only two kilometers or more. Nevertheless, it is worth stressing that for the statistical tests of HWE, in order to minimize the type I error (i.e., incorrect rejection of a true H0), we used the Bonferroni correction (BC), which is known to considerably increase the type II error (i.e., failure to reject a false H0). For the other tests (LD and MLG), we did not use BC and the risk of type II error in these cases was acceptable. Thus, these results should be considered a good indicator of the existence of panmixia in the wild environment, but this must be confirmed on larger samples to reduce the risk of type II error. PMID:24312410
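The Type I/Type II trade-off invoked here is easy to quantify in a simplified setting: dividing the per-test alpha by a Bonferroni factor lowers power and so raises the Type II error. In the sketch below a one-sample z-test stands in for the HWE statistics; the effect size and sample size are arbitrary.

```python
from scipy.stats import norm

def power(alpha, effect, n):
    """Approximate power of a two-sided one-sample z-test (normal approximation)."""
    z = norm.ppf(1 - alpha / 2)
    return norm.sf(z - effect * n ** 0.5)

m_tests = 7                                   # seven statistics per population, as above
for alpha in (0.05, 0.05 / m_tests):          # raw vs Bonferroni-corrected threshold
    p = power(alpha, effect=0.3, n=30)        # arbitrary effect size and sample size
    print(f"alpha = {alpha:.4f}: power = {p:.3f}, Type II error = {1 - p:.3f}")
```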
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2012 CFR
2012-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2014 CFR
2014-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
21 CFR 1040.11 - Specific purpose laser products.
Code of Federal Regulations, 2013 CFR
2013-04-01
... radiation intended for irradiation of the human body. Such means may have an error in measurement of no more... IIIa; and (ii) Used for relative positioning of the human body; and (iii) Not used for irradiation of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, S.; Jark, W.; Takacs, P.Z.
1995-02-01
Metrology requirements for optical components for third generation synchrotron sources are taxing the state-of-the-art in manufacturing technology. We have investigated a number of error sources in a commercial figure measurement instrument, the Long Trace Profiler II (LTP II), and have demonstrated that, with some simple modifications, we can significantly reduce the effect of error sources and improve the accuracy and reliability of the measurement. By keeping the optical head stationary and moving a penta prism along the translation stage, the stability of the optical system is greatly improved, and the remaining error signals can be corrected by a simple reference beam subtraction. We illustrate the performance of the modified system by investigating the distortion produced by gravity on a typical synchrotron mirror and demonstrate the repeatability of the instrument despite relaxed tolerances on the translation stage.
Sairanen, V; Kuusela, L; Sipilä, O; Savolainen, S; Vanhatalo, S
2017-02-15
Diffusion Tensor Imaging (DTI) is commonly challenged by subject motion during data acquisition, which often leads to corrupted image data. The procedure currently used in DTI analysis is to correct or completely reject such data before tensor estimation; however, assessing the reliability and accuracy of the estimated tensor in such situations has evaded previous studies. This work aims to define the loss of data accuracy with increasing image rejections, and to define a robust method for assessing the reliability of the result at voxel level. We carried out simulations of every possible sub-scheme (N=1,073,567,387) of the Jones30 gradient scheme, followed by confirming the idea with MRI data from four newborn and three adult subjects. We assessed the relative error of the most commonly used tensor estimates for DTI and tractography studies, fractional anisotropy (FA) and the major orientation vector (V1), respectively. The error was estimated using two measures, the widely used electric potential (EP) criterion as well as the rotationally variant condition number (CN). Our results show that CN and EP are comparable in situations with very few rejections, but CN becomes clearly more sensitive to depicting errors when more gradient vectors and images are rejected. The error in FA and V1 was also found to depend on the actual FA level in the given voxel; low actual FA levels were related to high relative errors in the FA and V1 estimates. Finally, the results were confirmed with clinical MRI data. This showed that the errors after rejections are, indeed, inhomogeneous across brain regions. The FA and V1 errors become progressively larger when moving from the thick white matter bundles towards more superficial subcortical structures. Our findings suggest that i) CN is a useful estimator of data reliability at voxel level, and ii) DTI preprocessing with data rejections leads to major challenges when assessing brain tissue with lower FA levels, such as the entire newborn brain, as well as the adult superficial, subcortical areas commonly traced in precise connectivity analyses between cortical regions. Copyright © 2016 Elsevier Inc. All rights reserved.
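The condition-number idea can be illustrated directly: build the six-column diffusion-tensor design matrix from the gradient directions and compare its condition number before and after rejecting volumes. The random 30-direction set below merely stands in for a Jones30-like scheme, and the number of rejections is arbitrary.

```python
import numpy as np

def dti_design_matrix(bvecs):
    """Map unit gradient directions to the 6 unique tensor elements (unit b-value)."""
    g = np.asarray(bvecs, dtype=float)
    return np.column_stack([g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
                            2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])

rng = np.random.default_rng(4)
bvecs = rng.normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)    # stand-in for a Jones30-like set

full_cn = np.linalg.cond(dti_design_matrix(bvecs))
keep = np.ones(30, dtype=bool)
keep[rng.choice(30, size=8, replace=False)] = False      # reject 8 "motion-corrupted" volumes
sub_cn = np.linalg.cond(dti_design_matrix(bvecs[keep]))
print(f"condition number: full {full_cn:.2f} -> after rejections {sub_cn:.2f}")
```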
NASA Astrophysics Data System (ADS)
Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.
2015-04-01
The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross-sectional averaging and the use of shorter reach lengths) and higher water-surface slopes (reducing the proportional impact of slope errors on discharge calculation).
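For reference, the Nash-Sutcliffe model efficiency used to score the discharge estimates is one minus the ratio of the error sum of squares to the variance of the reference series. The short sketch below applies it to synthetic discharge values; the series and the 3% error level are invented, not SWOT simulation output.

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squared deviations from the observed mean."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)

rng = np.random.default_rng(5)
q_model = 100_000 + 20_000 * np.sin(np.linspace(0, 2 * np.pi, 50))   # reference discharge, m3/s
q_swot = q_model * (1 + rng.normal(0, 0.03, q_model.size))           # ~3% random discharge error
print(round(nash_sutcliffe(q_swot, q_model), 3))
```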
Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class
NASA Astrophysics Data System (ADS)
Novitasari, N.; Lukito, A.; Ekawati, R.
2018-01-01
A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed to determine where they occur and what type they are. This research is a qualitative descriptive study that aims to describe the locations, types, and causes of slow learner errors in solving fraction problems in an inclusive junior high school class. The subject of this research is one slow-learning seventh-grade student, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who works with the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed using Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.
Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.
2017-01-01
Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly on their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier Transform Infrared Spectrophotometer in ZnSe disk mode. Analysis of Variance was used to explore the parameter's dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm−1. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0,115, and Root Mean Square Error of Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii the correlation coefficients (r) were 0.86 and 0.97 for Kotsifali (RMSEC 0.044, RMSEP = 0.087, and RMSECV = 0.214) and Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078), respectively. The proposed method is simpler, less time consuming, and more economical and does not require chemical reagents. PMID:29225994
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed how frequently patients are incorrectly used as the unit of analysis among studies of physicians' patient care behavior in articles published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found, published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of these studies; 40 were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
NASA Astrophysics Data System (ADS)
Gilles, Luc; Wang, Lianqi; Ellerbroek, Brent
2008-07-01
This paper describes the modeling effort undertaken to derive the wavefront error (WFE) budget for the Narrow Field Infrared Adaptive Optics System (NFIRAOS), which is the facility, laser guide star (LGS), dual-conjugate adaptive optics (AO) system for the Thirty Meter Telescope (TMT). The budget describes the expected performance of NFIRAOS at zenith, and has been decomposed into (i) first-order turbulence compensation terms (120 nm on-axis), (ii) opto-mechanical implementation errors (84 nm), (iii) AO component errors and higher-order effects (74 nm) and (iv) tip/tilt (TT) wavefront errors at 50% sky coverage at the galactic pole (61 nm) with natural guide star (NGS) tip/tilt/focus/astigmatism (TTFA) sensing in J band. A contingency of about 66 nm now exists to meet the observatory requirement document (ORD) total on-axis wavefront error of 187 nm, mainly on account of reduced TT errors due to updated windshake modeling and a low read-noise NGS wavefront sensor (WFS) detector. A detailed breakdown of each of these top-level terms is presented, together with a discussion on its evaluation using a mix of high-order zonal and low-order modal Monte Carlo simulations.
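Assuming the quoted terms are independent and therefore combine root-sum-square, a quick check reproduces the stated contingency of about 66 nm against the 187 nm requirement:

```python
# Root-sum-square combination of the NFIRAOS error-budget terms quoted above (nm RMS).
terms = {"first-order turbulence compensation": 120, "opto-mechanical implementation": 84,
         "AO components and higher-order effects": 74, "tip/tilt at 50% sky coverage": 61}

rss = sum(v ** 2 for v in terms.values()) ** 0.5
ord_requirement = 187
contingency = (ord_requirement ** 2 - rss ** 2) ** 0.5

print(f"RSS of budget terms ~ {rss:.0f} nm")                              # ~175 nm
print(f"contingency vs {ord_requirement} nm ORD ~ {contingency:.0f} nm")  # ~66 nm
```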
Nonspinning numerical relativity waveform surrogates: assessing the model
NASA Astrophysics Data System (ADS)
Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel
2015-04-01
Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.
Role of memory errors in quantum repeaters
NASA Astrophysics Data System (ADS)
Hartmann, L.; Kraus, B.; Briegel, H.-J.; Dür, W.
2007-03-01
We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created using the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.
Hansson, Jonny; Månsson, Lars Gunnar; Båth, Magnus
2016-06-01
The purpose of the present work was to investigate the validity of using single-reader-adapted receiver operating characteristics (ROC) software for analysis of visual grading characteristics (VGC) data. VGC data from four published VGC studies on optimisation of X-ray examinations, previously analysed using ROCFIT, were reanalysed using a recently developed software dedicated to VGC analysis (VGC Analyzer), and the outcomes [the mean and 95 % confidence interval (CI) of the area under the VGC curve (AUCVGC) and the p-value] were compared. The studies included both paired and non-paired data and were reanalysed both for the fixed-reader and the random-reader situations. The results showed good agreement between the software packages for the mean AUCVGC. For non-paired data, wider CIs were obtained with VGC Analyzer than previously reported, whereas for paired data, the previously reported CIs were similar or even broader. Similar observations were made for the p-values. The results indicate that the use of single-reader-adapted ROC software such as ROCFIT for analysing non-paired VGC data may lead to an increased risk of committing Type I errors, especially in the random-reader situation. On the other hand, the use of ROC software for analysis of paired VGC data may lead to an increased risk of committing Type II errors, especially in the fixed-reader situation. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.
2002-02-01
Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II are compared with those of previous works in the field.
NSLS-II BPM System Protection from Rogue Mode Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blednykh, A.; Bach, B.; Borrelli, A.
2011-03-28
Rogue mode RF shielding has been successfully designed and implemented in the production multipole vacuum chambers. In order to avoid systematic errors in the NSLS-II BPM system, we introduced a frequency shift of the HOMs by using RF metal shielding located in the antechamber slot of each multipole vacuum chamber. To satisfy the pumping requirement, the face of the shielding has been perforated with roughly 50 percent transparency. The shielding stays clear of synchrotron radiation in each chamber.
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Moreover, previous pointing error analyses of the Risley-prism system provided results only for the case in which the component errors, prism orientation errors, and assembly errors are known with certainty. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were shown to be the sum of the errors caused by the first and second prisms separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration.
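A first-order (thin-prism) model of a two-prism Risley pointer makes the error propagation tangible: each prism deflects the beam by (n - 1) times its wedge angle along its azimuth, the two deflections add vectorially, and a wedge-angle error therefore maps linearly into a pointing error. The refractive index, wedge angles, and error magnitude below are invented, and this is not the paper's exact formulation.

```python
import numpy as np

def first_order_deviation(alpha1, alpha2, theta1, theta2, n=1.517):
    """Thin-prism model: each prism deflects by (n - 1)*alpha along its rotation azimuth."""
    d1 = (n - 1) * alpha1 * np.array([np.cos(theta1), np.sin(theta1)])
    d2 = (n - 1) * alpha2 * np.array([np.cos(theta2), np.sin(theta2)])
    return d1 + d2                          # small-angle pointing vector, radians

alpha = np.deg2rad(10.0)                    # nominal wedge angle (illustrative)
d_alpha = np.deg2rad(0.01)                  # hypothetical wedge-angle error on prism 1
nominal = first_order_deviation(alpha, alpha, 0.0, np.pi / 3)
perturbed = first_order_deviation(alpha + d_alpha, alpha, 0.0, np.pi / 3)
print("pointing error (microrad):", 1e6 * np.linalg.norm(perturbed - nominal))
```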
Prevalence and pattern of prescription errors in a Nigerian kidney hospital.
Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O
2016-12-01
To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital. The monthly patient load averages 60 General Out-patient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 Consultant Nephrologists, 15 Medical Officers, and 14 House Officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, which conveyed such errors to the prescriber without revealing the ongoing study. Outcome measures were the prevalence of prescription errors, the pattern of prescription errors, and pharmacists' intervention. A total of 2,660 (75.0%) combined prescription errors of one form or the other were found: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%), and wrong dose 51 (1.92%); no error of style was detected. Life-threatening errors were low (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare giver ratio. A medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.
Influence of Tooth Spacing Error on Gears With and Without Profile Modifications
NASA Technical Reports Server (NTRS)
Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.
2000-01-01
A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modification considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modification for improving dynamic performance of gear sets with different tooth spacing errors.
Dente, Christopher J; Ashley, Dennis W; Dunne, James R; Henderson, Vernon; Ferdinand, Colville; Renz, Barry; Massoud, Romeo; Adamski, John; Hawke, Thomas; Gravlee, Mark; Cascone, John; Paynter, Steven; Medeiros, Regina; Atkins, Elizabeth; Nicholas, Jeffrey M
2016-03-01
Led by the American College of Surgeons Trauma Quality Improvement Program, performance improvement efforts have expanded to regional and national levels. The American College of Surgeons Trauma Quality Improvement Program recommends 5 audit filters to identify records with erroneous data, and the Georgia Committee on Trauma instituted standardized audit filter analysis in all Level I and II trauma centers in the state. Audit filter reports were performed from July 2013 to September 2014. Records were reviewed to determine whether there was erroneous data abstraction. Percent yield was defined as number of errors divided by number of charts captured. Twelve centers submitted complete datasets. During 15 months, 21,115 patient records were subjected to analysis. Audit filter captured 2,901 (14%) records and review yielded 549 (2.5%) records with erroneous data. Audit filter 1 had the highest number of records identified and audit filter 3 had the highest percent yield. Individual center error rates ranged from 0.4% to 5.2%. When comparing quarters 1 and 2 with quarters 4 and 5, there were 7 of 12 centers with substantial decreases in error rates. The most common missed complications were pneumonia, urinary tract infection, and acute renal failure. The most common missed comorbidities were hypertension, diabetes, and substance abuse. In Georgia, the prevalence of erroneous data in trauma registries varies among centers, leading to heterogeneity in data quality, and suggests that targeted educational opportunities exist at the institutional level. Standardized audit filter assessment improved data quality in the majority of participating centers. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.
Dosso, Stan E; Nielsen, Peter L
2002-01-01
This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
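As a rough illustration of the simplifying assumption described above, the sketch below (assumed Python, not the FGS implementation) shows how independent, identically distributed Gaussian errors lead to a maximum-likelihood estimate of the data variance and a concentrated log-likelihood that a Bayesian sampler could evaluate; the toy data and model predictions are invented.

```python
import numpy as np

def gaussian_log_likelihood(observed, predicted):
    """Log-likelihood of the data under iid Gaussian errors, with the unknown
    variance replaced by its maximum-likelihood estimate (hypothetical helper,
    not the FGS code)."""
    residual = observed - predicted
    n = residual.size
    # ML estimate of the (unknown) data variance: mean squared residual
    sigma2_ml = np.mean(residual ** 2)
    # Gaussian log-likelihood evaluated at the ML variance
    return -0.5 * n * (np.log(2.0 * np.pi * sigma2_ml) + 1.0)

# Toy usage: compare two candidate "seabed models" against noisy data
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 3, 50))
data = truth + 0.1 * rng.standard_normal(50)
print(gaussian_log_likelihood(data, truth))          # good model
print(gaussian_log_likelihood(data, 0.5 * truth))    # poorer model
```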
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
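The abstract does not give implementation details, but a first-order Sobol' index can be estimated with the standard Saltelli sampling scheme, as in the hedged sketch below; the toy model, bounds, and sample size are placeholders rather than the Utah Energy Balance configuration.

```python
import numpy as np

def first_order_sobol(model, bounds, n=4096, seed=0):
    """Estimate first-order Sobol' indices with the Saltelli sampling scheme.
    `model` maps an (n, k) array of inputs to n outputs; `bounds` is a list of
    (low, high) pairs, one per input (toy setup, not the UEB configuration)."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    A = lo + (hi - lo) * rng.random((n, k))
    B = lo + (hi - lo) * rng.random((n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # replace column i of A with column i of B
        fABi = model(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var   # Saltelli first-order estimator
    return S

# Toy "snow model": output driven by a precipitation bias and two random errors
def toy_model(x):
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] + 0.1 * x[:, 2] ** 2

print(first_order_sobol(toy_model, [(-1, 1)] * 3))
```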
NASA Technical Reports Server (NTRS)
Laxmanan, V.
1985-01-01
A critical review of the present dendritic growth theories and models is presented. Mathematically rigorous solutions to dendritic growth are found to rely on an ad hoc assumption that dendrites grow at the maximum possible growth rate. This hypothesis is found to be in error and is replaced by stability criteria which consider the conditions under which a dendrite tip advances in a stable fashion in a liquid. The important elements of a satisfactory model for dendritic solidification are summarized, and a theoretically consistent model for dendritic growth under an imposed thermal gradient is proposed and described. The model is based on the modification of an analysis due to Burden and Hunt (1974) and correctly predicts, in all respects, the transition from a dendritic to a planar interface at both very low and very large growth rates.
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...
2016-02-24
Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
Kawakami, Hiromasa; Mihara, Takahiro; Nakamura, Nobuhito; Ka, Koui; Goto, Takahisa
2018-01-01
Magnesium has been investigated as an adjuvant for neuraxial anesthesia, but the effect of caudal magnesium on postoperative pain is inconsistent. The aim of this systematic review and meta-analysis was to evaluate the analgesic effect of caudal magnesium. We searched six databases, including trial registration sites. Randomized clinical trials reporting the effect of caudal magnesium on postoperative pain after general anesthesia were eligible. The risk ratio for use of rescue analgesics after surgery was combined using a random-effects model. We also assessed adverse events. The I2 statistic was used to assess heterogeneity. We assessed risk of bias with Cochrane domains. We controlled type I and II errors due to sparse data and repetitive testing with Trial Sequential Analysis. We assessed the quality of evidence with GRADE. Four randomized controlled trials (247 patients) evaluated the need for rescue analgesics. In all four trials, 50 mg of magnesium was administered with caudal ropivacaine. The results suggested that the need for rescue analgesia was reduced significantly by caudal magnesium administration (risk ratio 0.45; 95% confidence interval 0.24-0.86). There was considerable heterogeneity as indicated by an I2 value of 62.5%. The Trial Sequential Analysis-adjusted confidence interval was 0.04-5.55, indicating that further trials are required. The quality of evidence was very low. The rate of adverse events was comparable between treatment groups. Caudal magnesium may reduce the need for rescue analgesia after surgery, but further randomized clinical trials with a low risk of bias and a low risk of random errors are necessary to assess the effect of caudal magnesium on postoperative pain and adverse events. University Hospital Medical Information Network Clinical Trials Registry UMIN000025344.
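For readers unfamiliar with the pooling step, the sketch below shows one standard way (DerSimonian-Laird random effects with an I² heterogeneity estimate) to combine risk ratios across trials; the event counts are invented placeholders, not the data of the four included trials, and the original analysis may have used different software and settings.

```python
import numpy as np

def random_effects_rr(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooled risk ratio with I^2.
    Counts are illustrative placeholders, not the data of the four trials."""
    e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
    n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
    log_rr = np.log((e_t / n_t) / (e_c / n_c))
    var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c      # variance of log risk ratio
    w = 1 / var                                       # fixed-effect weights
    fixed = np.sum(w * log_rr) / np.sum(w)
    Q = np.sum(w * (log_rr - fixed) ** 2)
    k = len(log_rr)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    w_re = 1 / (var + tau2)                           # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    return np.exp(pooled), ci, i2

rr, ci, i2 = random_effects_rr([5, 8, 4, 6], [30, 32, 28, 31],
                               [12, 15, 11, 13], [30, 31, 29, 30])
print(f"RR = {rr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.0f}%")
```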
Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad
2018-05-01
The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of neural networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
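A minimal sketch of the kind of mean-square, residual-based cost function described above is given below, assuming the standard form of Painlevé II (u'' = 2u³ + xu + α), a tanh single-hidden-layer approximant, and an illustrative value of α and collocation interval; the optimizers (genetic algorithm, SQP, active set) are not reproduced here.

```python
import numpy as np

ALPHA = 1.0                      # Painleve II parameter (illustrative value)
X = np.linspace(-2.0, 2.0, 40)   # collocation points (assumed interval)

def network(params, x):
    """Single-hidden-layer approximant u_hat(x) = sum_i w_i * tanh(a_i x + b_i)
    and its analytic first and second derivatives."""
    w, a, b = params.reshape(3, -1)
    z = np.outer(x, a) + b
    t = np.tanh(z)
    u = t @ w
    du = (a * (1.0 - t ** 2)) @ w
    d2u = (a ** 2 * (-2.0 * t * (1.0 - t ** 2))) @ w
    return u, du, d2u

def cost(params):
    """Mean-square residual of Painleve II: u'' - 2u^3 - x u - alpha = 0.
    (Boundary/initial-condition penalties would be added in a full treatment.)"""
    u, _, d2u = network(params, X)
    residual = d2u - 2.0 * u ** 3 - X * u - ALPHA
    return np.mean(residual ** 2)

rng = np.random.default_rng(1)
print(cost(rng.standard_normal(3 * 10)))   # cost of one random 10-neuron candidate
```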
Liu, Yan; Salvendy, Gavriel
2009-05-01
This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussions will be restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experiment results, which the authors believe is very critical to research progress in theory development and cumulative knowledge in the ergonomics field.
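The attenuation effect mentioned for correlation can be illustrated with a small simulation; the sketch below assumes equal reliabilities of 0.7 for both scales and an arbitrary true correlation, and uses the classical attenuation formula only as a check.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                       # large sample so the attenuation is visible
true_r = 0.6

# Latent (error-free) scores with correlation true_r
x_true = rng.standard_normal(n)
y_true = true_r * x_true + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n)

# Observed scores contaminated by random measurement error (reliability ~0.7)
reliability = 0.7
noise_sd = np.sqrt(1 / reliability - 1)
x_obs = x_true + noise_sd * rng.standard_normal(n)
y_obs = y_true + noise_sd * rng.standard_normal(n)

observed_r = np.corrcoef(x_obs, y_obs)[0, 1]
expected_r = true_r * reliability   # classical attenuation: r * sqrt(rel_x * rel_y)
print(f"true r = {true_r}, observed r = {observed_r:.3f}, predicted = {expected_r:.3f}")
```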
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts a reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
An Investigation of Wave Propagations in Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
2004-01-01
Analysis of the discontinuous Galerkin method has been carried out for one- and two-dimensional systems of hyperbolic equations. Analytical, as well as numerical, properties of wave propagation in a DGM scheme are derived and verified with direct numerical simulations. In addition to a systematic examination of the dissipation and dispersion errors, behaviours of a DG scheme at an interface of two different grid topologies are also studied. Under the same framework, a quantitative discrete analysis of various artificial boundary conditions is also conducted. Progress has been made in numerical boundary condition treatment that is closely related to the application of DGM in aeroacoustics problems. Finally, Fourier analysis of DGM for the convective diffusion equation has also been studied in connection with the application of DG schemes for the Navier-Stokes equations. This research has resulted in five (5) publications, plus one additional manuscript in preparation, four (4) conference presentations, and three (3) departmental seminars, as summarized in part II. Abstracts of papers are given in part III of this report.
NASA Technical Reports Server (NTRS)
Mathews, P. M.; Buffett, Bruce A.; Herring, Thomas A.; Shapiro, Irwin I.
1991-01-01
A treatment is presented of the nutation problem for an oceanless, elastic, spheroidally stratified earth, with the dynamical role of the inner core explicitly included in the formulation. Solving the enlarged system of equations shows that a new almost diurnal eigenfrequency emerges. A rough estimate places it not far from the prograde annual tidal excitation frequency, so that possible resonance effects on nutation amplitudes need careful consideration. Tables are provided that exhibit the sensitivities of various relevant quantities, the eigenfrequencies and the coefficients which appear in the resonance expansion, as well as the nutation amplitudes at important tidal frequencies, to possible errors in the earth parameters which enter the theory set forth. Finally, the analysis of 798 VLBI experiments performed between July 1980 and February 1989 and the determination from this analysis of corrections to selected coefficients in the International Astronomical Union 1980 theory of the nutations of the earth are discussed.
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
NASA Astrophysics Data System (ADS)
Fikri Zanil, Muhamad; Nur Wahidah Nik Hashim, Nik; Azam, Huda
2017-11-01
Psychiatrists currently rely on questionnaires and interviews for psychological assessment. These conservative methods often miss true positives and might lead to death, especially in cases where a patient is experiencing a suicidal predisposition but is only diagnosed with major depressive disorder (MDD). With modern technology, an assessment tool might aid psychiatrists in making a more accurate diagnosis and thereby help reduce casualties. This project explores the relationship between speech features of a spoken audio signal (reading) in Bahasa Malaysia and Beck Depression Inventory scores. The speech features used in this project were Power Spectral Density (PSD), Mel-frequency Cepstral Coefficients (MFCC), Transition Parameters, formants and pitch. According to the analysis, the optimum combination of speech features to predict BDI-II scores includes PSD, MFCC and Transition Parameters. A linear regression approach with sequential forward/backward selection was used to predict the BDI-II scores from reading speech. The results showed a mean absolute error (MAE) of 0.4096 for female reading speech. For males, 100% of the BDI-II scores were predicted within a difference of less than 1, with an MAE of 0.098437. A prediction system called the Depression Severity Evaluator (DSE) was developed. The DSE correctly predicted the scores of one out of five subjects. Although the prediction rate was low, the system predicted each person's score within a maximum difference of 4.93. This demonstrates that the scores are not random numbers.
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors described by uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify intrinsic errors of deformable image registration (DIR) systems and establish quality assurance (QA) procedures for clinical use of DIR systems, utilizing local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set (R', T, and DVFref) constitutes a realistic truth set and can therefore be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis (calculating and delineating differences between DVFs), two methods were used: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
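A hedged sketch of the two analysis tools as described (a per-voxel error-magnitude map and a cumulative error histogram per structure) might look as follows; the DVF arrays and structure mask are placeholders, not the phantoms used in the study.

```python
import numpy as np

def dvf_error_magnitude(dvf_ref, dvf_test):
    """Per-voxel magnitude of the difference between two deformation vector
    fields, each shaped (nx, ny, nz, 3) in mm (arrays are placeholders)."""
    return np.linalg.norm(dvf_test - dvf_ref, axis=-1)

def cumulative_error_histogram(error_map, structure_mask, bin_width=0.5):
    """Cumulative probability of deformation errors inside one anatomical
    structure: fraction of the structure's voxels with error <= threshold."""
    errors = error_map[structure_mask]
    thresholds = np.arange(0.0, errors.max() + bin_width, bin_width)
    cumulative = np.array([(errors <= t).mean() for t in thresholds])
    return thresholds, cumulative

# Toy usage with random fields and a cubic "structure"
rng = np.random.default_rng(3)
ref = rng.normal(size=(32, 32, 32, 3))
test = ref + rng.normal(scale=0.8, size=ref.shape)
mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 8:24, 8:24] = True
err = dvf_error_magnitude(ref, test)
t, c = cumulative_error_histogram(err, mask)
print(f"95% of structure voxels have error <= {t[np.searchsorted(c, 0.95)]:.1f} mm")
```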
Proxy-equation paradigm: A strategy for massively parallel asynchronous computations
NASA Astrophysics Data System (ADS)
Mittal, Ankita; Girimaji, Sharath
2017-09-01
Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survive at finite temperature; and (iv) behave as classical memory at finite temperature.
Comparison of the biometric formulas used for applanation A-scan ultrasound biometry.
Özcura, Fatih; Aktaş, Serdar; Sağdık, Hacı Murat; Tetikoğlu, Mehmet
2016-10-01
The purpose of the study was to compare the accuracy of various biometric formulas for predicting postoperative refraction determined using applanation A-scan ultrasound. This retrospective comparative study included 485 eyes that underwent uneventful phacoemulsification with intraocular lens (IOL) implantation. Applanation A-scan ultrasound biometry and postoperative manifest refraction were obtained in all eyes. Biometric data were entered into each of the five IOL power calculation formulas: SRK-II, SRK/T, Holladay I, Hoffer Q, and Binkhorst II. All eyes were divided into three groups according to axial length: short (≤22.0 mm), average (22.0-25.0 mm), and long (≥25.0 mm) eyes. The postoperative spherical equivalent was calculated and compared with the predicted refractive error using each biometric formula. The results showed that all formulas had significantly lower mean absolute error (MAE) in comparison with Binkhorst II formula (P < 0.01). The lowest MAE was obtained with the SRK-II for average (0.49 ± 0.40 D) and short (0.67 ± 0.54 D) eyes and the SRK/T for long (0.61 ± 0.50 D) eyes. The highest postoperative hyperopic shift was seen with the SRK-II for average (46.8 %), short (28.1 %), and long (48.4 %) eyes. The highest postoperative myopic shift was seen with the Holladay I for average (66.4 %) and long (71.0 %) eyes and the SRK/T for short eyes (80.6 %). In conclusion, the SRK-II formula produced the lowest MAE in average and short eyes and the SRK/T formula produced the lowest MAE in long eyes. The SRK-II has the highest postoperative hyperopic shift in all eyes. The highest postoperative myopic shift is with the Holladay I for average and long eyes and SRK/T for short eyes.
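The headline metrics in this comparison (mean absolute error and the direction of the refractive shift) are straightforward to compute once predicted and achieved refractions are available; the sketch below uses invented arrays rather than the study data and does not reimplement the IOL power formulas themselves.

```python
import numpy as np

def formula_performance(predicted_se, achieved_se):
    """Mean absolute prediction error and the share of hyperopic/myopic
    outcomes for one IOL power formula (inputs in diopters; placeholders,
    not the study data)."""
    error = achieved_se - predicted_se          # positive: more hyperopic than predicted
    mae = np.mean(np.abs(error))
    hyperopic_shift = np.mean(error > 0) * 100
    myopic_shift = np.mean(error < 0) * 100
    return mae, hyperopic_shift, myopic_shift

rng = np.random.default_rng(7)
achieved = rng.normal(-0.25, 0.6, size=200)                 # postoperative SE (toy)
predicted = {"SRK-II": achieved + rng.normal(0.1, 0.45, 200),
             "SRK/T": achieved + rng.normal(-0.05, 0.5, 200)}
for name, pred in predicted.items():
    mae, hyper, myo = formula_performance(pred, achieved)
    print(f"{name}: MAE = {mae:.2f} D, hyperopic {hyper:.0f}%, myopic {myo:.0f}%")
```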
Microwave scanning beam approach and landing system phased array antenna : volume II
DOT National Transportation Integrated Search
1973-02-01
The use of phased arrays for the proposed landing system (MLS) is discussed. Studies relating to ground reflections, near field focusing, and phased-array errors are presented. Two experimental antennas which were fabricated and tested are described....
49 CFR 193.2509 - Emergency procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... plant; (ii) Potential hazards at the plant, including fires; (iii) Communication and emergency control... plant due to operating malfunctions, structural collapse, personnel error, forces of nature, and activities adjacent to the plant. (b) To adequately handle each type of emergency identified under paragraph...
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for the analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed through their error frequencies and the analysis-of-variance method of mathematical statistics. The paper also addresses the accuracy of the measured data, the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. In addition, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method for estimating the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. We then use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and through the whole analysis chain from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
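As a rough illustration of midpoint triangulation and first-order error propagation (not the authors' five-parameter formulation), the sketch below propagates pixel-level covariance to a 3D point covariance through a numerically estimated Jacobian; the rectified camera geometry and noise level are assumptions.

```python
import numpy as np

F, CX, CY, BASELINE = 800.0, 320.0, 240.0, 0.12   # toy pinhole stereo rig (assumed)

def ray(pixel, center):
    """Unit ray through a pixel for a rectified pinhole camera at `center`."""
    u, v = pixel
    d = np.array([(u - CX) / F, (v - CY) / F, 1.0])
    return center, d / np.linalg.norm(d)

def midpoint_triangulation(pixels):
    """3D point from two pixel observations via the midpoint method.
    `pixels` = [u1, v1, u2, v2]."""
    c1, d1 = ray(pixels[:2], np.zeros(3))
    c2, d2 = ray(pixels[2:], np.array([BASELINE, 0.0, 0.0]))
    w = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    s, t = (b * e - c * d) / denom, (a * e - b * d) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))   # midpoint of closest approach

def propagate_covariance(pixels, pixel_cov, eps=1e-3):
    """First-order propagation of pixel covariance to the 3D point covariance
    using a numerically estimated Jacobian."""
    J = np.empty((3, 4))
    for i in range(4):
        dp = np.zeros(4); dp[i] = eps
        J[:, i] = (midpoint_triangulation(pixels + dp)
                   - midpoint_triangulation(pixels - dp)) / (2 * eps)
    return J @ pixel_cov @ J.T

pixels = np.array([350.0, 250.0, 310.0, 250.0])              # hypothetical matches
cov3d = propagate_covariance(pixels, np.eye(4) * 0.5 ** 2)   # 0.5 px noise
print(np.sqrt(np.diag(cov3d)))                               # 1-sigma per axis (m)
```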
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bahrdt, J.; Frentrup, W.; Gaupp, A.
BESSY plans to build a SASE-FEL facility for the energy range from 20 eV to 1000 eV. The energy range will be covered by three APPLE II type undulators with a magnetic length of about 60 m each. This paper summarizes the basic parameters of the FEL-undulators. The magnetic design will be presented. A modified APPLE II design will be discussed which provides higher fields at the expense of reduced horizontal access. GENESIS simulations give an estimate on the tolerances for the beam wander and for gap errors.
A novel ETFB mutation in a patient with glutaric aciduria type II
Sudo, Yosuke; Sasaki, Ayako; Wakabayashi, Takashi; Numakura, Chikahiko; Hayasaka, Kiyoshi
2015-01-01
Glutaric aciduria type II (GAII) is a rare inborn error of metabolism clinically classified into a neonatal-onset form with congenital anomalies, a neonatal-onset form without congenital anomalies and a mild and/or late-onset form (MIM #231680). Here, we report on a GAII patient carrying a homozygous novel c.143_145delAGG (p.Glu48del) mutation in the ETFB gene, who presented with a neonatal-onset form with congenital anomalies and rapidly developed cardiomegaly after birth. PMID:27081516
Single event upset susceptibility testing of the Xilinx Virtex II FPGA
NASA Technical Reports Server (NTRS)
Carmichael, C.; Swift, C.; Yui, G.
2002-01-01
Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM and user flip-flop cells to determine their static single-event upset susceptibility using LETs of 1.2 to 60 MeV cm^2/mg. A software program specifically designed to count errors in the FPGA was used to reveal L(1/e) values (the LET at which the cross-section is 1/e times the saturation cross-section) and single-event functional-interrupt failures.
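The cross-section and L(1/e) quantities mentioned above can be computed from test data in a few lines; the sketch below uses invented error counts and fluences, not the Virtex II measurements, and finds L(1/e) by simple interpolation of the measured curve.

```python
import numpy as np

def cross_sections(errors, fluence):
    """Device upset cross-section (cm^2) at each test LET:
    observed errors divided by the particle fluence (ions/cm^2)."""
    return np.asarray(errors, float) / np.asarray(fluence, float)

def let_one_over_e(let_values, sigma):
    """LET at which the cross-section reaches 1/e of its saturation value,
    found by linear interpolation (illustrative only; assumes sigma increases)."""
    target = sigma.max() / np.e
    return np.interp(target, sigma, let_values)

# Made-up test data (not the Virtex II measurements)
let = np.array([1.2, 3.0, 6.0, 12.0, 30.0, 60.0])            # MeV cm^2/mg
errs = np.array([40, 900, 4200, 7600, 9400, 9800])
flux = np.full_like(errs, 1e7, dtype=float)                  # ions/cm^2 per run
sigma = cross_sections(errs, flux)
print(f"L(1/e) ~ {let_one_over_e(let, sigma):.1f} MeV cm^2/mg")
```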
NASA Astrophysics Data System (ADS)
Singh, Randhir; Das, Nilima; Kumar, Jitendra
2017-06-01
An effective analytical technique is proposed for the solution of the Lane-Emden equations. The proposed technique is based on the variational iteration method (VIM) and the convergence control parameter h. To avoid solving a sequence of nonlinear algebraic equations or complicated integrals for the derivation of the unknown constants, the boundary conditions are used before designing the recursive scheme for the solution. Series solutions are obtained that converge rapidly to the exact solution. Convergence analysis and error bounds are discussed. The accuracy and applicability of the method are examined by solving three singular problems: i) the nonlinear Poisson-Boltzmann equation, ii) the distribution of heat sources in the human head, iii) the second-kind Lane-Emden equation.
Desensitized Optimal Filtering and Sensor Fusion Toolkit
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.
2015-01-01
Analytical Mechanics Associates, Inc., has developed a software toolkit that filters and processes navigational data from multiple sensor sources. A key component of the toolkit is a trajectory optimization technique that reduces the sensitivity of Kalman filters with respect to model parameter uncertainties. The sensor fusion toolkit also integrates recent advances in adaptive Kalman and sigma-point filters for problems with non-Gaussian error statistics. This Phase II effort provides new filtering and sensor fusion techniques in a convenient package that can be used as a stand-alone application for ground support and/or onboard use. Its modular architecture enables ready integration with existing tools. A suite of sensor models and noise distributions, as well as a Monte Carlo analysis capability, is included to enable statistical performance evaluations.
2007-01-01
Purpose Preschool vision screenings often include refractive error or visual acuity (VA) testing to detect amblyopia, as well as alignment testing to detect strabismus. The purpose of this study was to determine the effect of combining screening for eye alignment with screening for refractive error or reduced VA on sensitivity for detection of strabismus, with specificity set at 90% and 94%. Methods Over 3 years, 4040 preschool children were screened in the Vision in Preschoolers (VIP) Study, with different screening tests administered each year. Examinations were performed to identify children with strabismus. The best screening tests for detecting children with any targeted condition were noncycloplegic retinoscopy (NCR), Retinomax autorefractor (Right Manufacturing, Virginia Beach, VA), SureSight Vision Screener (Welch-Allyn, Inc., Skaneateles, NY), and Lea Symbols (Precision Vision, LaSalle, IL and Good-Lite Co., Elgin, IL) and HOTV optotypes VA tests. Analyses were conducted with these tests of refractive error or VA paired with the best tests for detecting strabismus (unilateral cover testing, Random Dot “E” [RDE] and Stereo Smile Test II [Stereo Optical, Inc., Chicago, IL]; and MTI PhotoScreener [PhotoScreener, Inc., Palm Beach, FL]). The change in sensitivity that resulted from combining a test of eye alignment with a test of refractive error or VA was determined with specificity set at 90% and 94%. Results Among the 4040 children, 157 were identified as having strabismus. For screening tests conducted by eye care professionals, the addition of a unilateral cover test to a test of refraction generally resulted in a statistically significant increase (range, 15%–25%) in detection of strabismus. For screening tests administered by trained lay screeners, the addition of Stereo Smile II to SureSight resulted in a statistically significant increase (21%) in sensitivity for detection of strabismus. Conclusions The most efficient and low-cost ways to achieve a statistically significant increase in sensitivity for detection of strabismus were by combining the unilateral cover test with the autorefractor (Retinomax) administered by eye care professionals and by combining Stereo Smile II with SureSight administered by trained lay screeners. The decision of whether to include a test of alignment should be based on the screening program’s goals (e.g., targeted visual conditions) and resources. PMID:17591881
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
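The comparison described above can be reproduced in miniature with a Monte Carlo experiment; the sketch below uses lognormal (distinctly non-normal) groups, an arbitrary location shift, and scipy's implementations of one-way ANOVA and the Kruskal-Wallis test, and is only meant to illustrate the kind of rejection-rate comparison reported.

```python
import numpy as np
from scipy import stats

def rejection_rate(shift, n_per_group=20, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo rejection rates of one-way ANOVA and Kruskal-Wallis for
    three lognormal groups; `shift` is the location offset of the third group
    (shift = 0 gives the type I error rate, shift > 0 gives power)."""
    rng = np.random.default_rng(seed)
    anova_rej = kw_rej = 0
    for _ in range(reps):
        g1 = rng.lognormal(size=n_per_group)
        g2 = rng.lognormal(size=n_per_group)
        g3 = rng.lognormal(size=n_per_group) + shift
        anova_rej += stats.f_oneway(g1, g2, g3).pvalue < alpha
        kw_rej += stats.kruskal(g1, g2, g3).pvalue < alpha
    return anova_rej / reps, kw_rej / reps

print("type I error (shift=0):", rejection_rate(0.0))
print("power (shift=1.0):     ", rejection_rate(1.0))
```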
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty
NASA Astrophysics Data System (ADS)
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, neglecting the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which will be presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which will be presented in part II. Developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met: Firstly, operational time constraints obviate the variation of all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty were dynamically considered while the others were jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: In alpine headwater catchments, typically of a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude dynamically changing with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousands of square kilometers, forecast uncertainty in the desired range (usually up to two days) is mainly dependent on upstream gauge observation quality, routing and unpredictable human impact such as reservoir operation. The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge observations and several years of archived forecasts, overall empirical error distributions termed 'overall error' were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation; therefore, a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts.
The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and quality of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.
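Steps (c) and (d) above amount to a method-of-moments fit followed by a polynomial compression over lead time; a minimal sketch, with invented error samples in place of the archived forecast errors, is given below.

```python
import numpy as np

# Hypothetical archive of forecast errors (m^3/s) for several lead times (h)
rng = np.random.default_rng(5)
lead_times = np.array([6, 12, 24, 36, 48])
error_samples = {lt: rng.normal(loc=0.02 * lt, scale=1.0 + 0.05 * lt, size=500)
                 for lt in lead_times}

# Step (c): method-of-moments fit of a normal distribution per lead time
mu = np.array([error_samples[lt].mean() for lt in lead_times])
sigma = np.array([error_samples[lt].std(ddof=1) for lt in lead_times])

# Step (d): compress the parameters as second-order polynomials of lead time
mu_poly = np.polyfit(lead_times, mu, deg=2)
sigma_poly = np.polyfit(lead_times, sigma, deg=2)

# Reconstruct the error distribution for an arbitrary lead time, e.g. 30 h
lt = 30
print(f"lead time {lt} h: mu = {np.polyval(mu_poly, lt):.2f}, "
      f"sigma = {np.polyval(sigma_poly, lt):.2f}")
```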
Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Munoz, Cesar A.
2007-01-01
This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.
Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas
ERIC Educational Resources Information Center
Herzberg, Tina
2010-01-01
In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Pagoulatos, N; Edwards, W S; Haynor, D R; Kim, Y
1999-12-01
The use of stereotactic systems has been one of the main approaches for image-based guidance of the surgical tool within the brain. The main limitation of stereotactic systems is that they are based on preoperative images that might become outdated and invalid during the course of surgery. Ultrasound (US) is considered the most practical and cost-effective intraoperative imaging modality, but US images inherently have a low signal-to-noise ratio. Integrating intraoperative US with stereotactic systems has recently been attempted. In this paper, we present a new system for interactively registering two-dimensional US and three-dimensional magnetic resonance (MR) images. This registration is based on tracking the US probe with a dc magnetic position sensor. We have performed an extensive analysis of the errors of our system by using a custom-built phantom. The registration error between the MR and the position sensor space was found to have a mean value of 1.78 mm and a standard deviation of 0.18 mm. The registration error between US and MR space was dependent on the distance of the target point from the US probe face. For a 3.5-MHz phased one-dimensional array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zeroth-order or first-order interpolation. The ease of use and the interactive nature of our system (approximately 6.5 frames/s for 344 x 310 images and first-order interpolation on a Pentium II 450 MHz) demonstrates its potential to be used in the operating room.
Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben
2016-01-01
Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influence quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose imaging protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
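The phantom metrics named above (mean HU, CNR, and HU error against a reference value) can be sketched as follows; the ROI arrays, reference HU, and the particular CNR definition (contrast divided by background noise) are assumptions for illustration, not the study's exact formulas.

```python
import numpy as np

def roi_metrics(insert_roi, background_roi, reference_hu):
    """Mean HU of an insert ROI, contrast-to-noise ratio against a background
    ROI (one common CNR definition), and the HU error relative to the insert's
    reference value (inputs are placeholder voxel arrays)."""
    mean_hu = insert_roi.mean()
    cnr = abs(mean_hu - background_roi.mean()) / background_roi.std(ddof=1)
    hu_error = mean_hu - reference_hu
    return mean_hu, cnr, hu_error

# Toy example: one bone-like insert in a water background, two noise levels
rng = np.random.default_rng(11)
reference = 700.0                                    # nominal HU of the insert
for label, noise in [("reference dose", 15.0), ("ultralow dose", 80.0)]:
    insert = rng.normal(reference + 20.0, noise, size=500)
    background = rng.normal(0.0, noise, size=500)
    hu, cnr, err = roi_metrics(insert, background, reference)
    print(f"{label}: mean HU = {hu:.0f}, CNR = {cnr:.1f}, HU error = {err:+.0f}")
```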
Error Analysis: Past, Present, and Future
ERIC Educational Resources Information Center
McCloskey, George
2017-01-01
This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
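As a rough illustration of the adjoint-weighted-residual idea in a two stage setting (coarse forward solve, then residual and adjoint work on a finer grid), the sketch below estimates the end-time error of a forward Euler solution of u' = lambda*u. The equation, step sizes, and quantity of interest are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

# Illustrative dual-weighted-residual estimate for u' = lam*u on [0, T], QoI Q(u) = u(T).
lam, u0, T = -1.0, 1.0, 2.0
n_coarse, n_fine = 20, 400

def forward_euler(n):
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = u0
    dt = t[1] - t[0]
    for k in range(n):
        u[k + 1] = u[k] + dt * lam * u[k]
    return t, u

# Stage 1: coarse forward solve.  Stage 2: residual and adjoint evaluation on a finer grid.
tc, uc = forward_euler(n_coarse)
tf = np.linspace(0.0, T, n_fine + 1)
u_rec = np.interp(tf, tc, uc)                       # piecewise-linear reconstruction of the coarse solution
residual = np.gradient(u_rec, tf) - lam * u_rec     # computable residual R(U) = U' - lam*U

# Adjoint for Q(u) = u(T): -phi' = lam*phi with phi(T) = 1, i.e. phi(t) = exp(lam*(T - t)).
phi = np.exp(lam * (T - tf))

# Error representation: Q(u) - Q(U) = -integral of R(U)*phi dt (trapezoidal quadrature).
w = residual * phi
estimate = -np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(tf))
true_error = u0 * np.exp(lam * T) - uc[-1]
print(f"estimated QoI error {estimate:+.4e}, true QoI error {true_error:+.4e}")
```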
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
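The "bootstrap" error evaluation mentioned above can be pictured with a minimal resampling sketch; the analysed quantity and sample values here are placeholders, not SASS data.

```python
import numpy as np

# Minimal bootstrap sketch: standard error of an analysis statistic
# (here simply the mean of hypothetical analysis-minus-observation wind residuals).
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 1.5, size=200)      # placeholder residual values (m/s)

boot_means = np.array([
    rng.choice(residuals, size=residuals.size, replace=True).mean()
    for _ in range(2000)
])
print(f"mean residual {residuals.mean():+.3f} m/s, "
      f"bootstrap standard error {boot_means.std(ddof=1):.3f} m/s")
```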
Pightling, Arthur W.; Petronella, Nicholas; Pagotto, Franco
2014-01-01
The wide availability of whole-genome sequencing (WGS) and an abundance of open-source software have made detection of single-nucleotide polymorphisms (SNPs) in bacterial genomes an increasingly accessible and effective tool for comparative analyses. Thus, ensuring that real nucleotide differences between genomes (i.e., true SNPs) are detected at high rates and that the influences of errors (such as false positive SNPs, ambiguously called sites, and gaps) are mitigated is of utmost importance. The choices researchers make regarding the generation and analysis of WGS data can greatly influence the accuracy of short-read sequence alignments and, therefore, the efficacy of such experiments. We studied the effects of some of these choices, including: i) depth of sequencing coverage, ii) choice of reference-guided short-read sequence assembler, iii) choice of reference genome, and iv) whether to perform read-quality filtering and trimming, on our ability to detect true SNPs and on the frequencies of errors. We performed benchmarking experiments, during which we assembled simulated and real Listeria monocytogenes strain 08-5578 short-read sequence datasets of varying quality with four commonly used assemblers (BWA, MOSAIK, Novoalign, and SMALT), using reference genomes of varying genetic distances, and with or without read pre-processing (i.e., quality filtering and trimming). We found that assemblies of at least 50-fold coverage provided the most accurate results. In addition, MOSAIK yielded the fewest errors when reads were aligned to a nearly identical reference genome, while using SMALT to align reads against a reference sequence that is ∼0.82% distant from 08-5578 at the nucleotide level resulted in the detection of the greatest numbers of true SNPs and the fewest errors. Finally, we show that whether read pre-processing improves SNP detection depends upon the choice of reference sequence and assembler. In total, this study demonstrates that researchers should test a variety of conditions to achieve optimal results. PMID:25144537
Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M
1999-10-01
Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (p=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.
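The distinction drawn above between classic (classical) and Berkson exposure errors can be illustrated with a small simulation: classical error in a predictor attenuates a fitted slope, while Berkson error leaves it roughly unbiased. The linear model and noise levels below are illustrative assumptions, not the study's exposure model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 20000, 0.5
x_true = rng.normal(2.0, 1.0, n)                 # true exposure (e.g., time-averaged field)
y = beta * x_true + rng.normal(0.0, 1.0, n)      # outcome on a continuous scale (illustrative)

# Classical error: measured value = truth + noise (e.g., imperfect 24-h measurements).
x_classical = x_true + rng.normal(0.0, 1.0, n)

# Berkson error: truth = assigned value + noise (e.g., group-level predictions from wire codes).
x_assigned = rng.normal(2.0, 1.0, n)
x_true_berkson = x_assigned + rng.normal(0.0, 1.0, n)
y_berkson = beta * x_true_berkson + rng.normal(0.0, 1.0, n)

slope = lambda x, yy: np.polyfit(x, yy, 1)[0]
print(f"true slope {beta}, classical-error slope {slope(x_classical, y):.3f} (attenuated), "
      f"Berkson-error slope {slope(x_assigned, y_berkson):.3f} (approximately unbiased)")
```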
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least-square (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least square (iPLS) and backward interval partial least square (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and to what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
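As a sketch of how one of the modeling parameters above (the number of latent variables) is typically scanned in practice, the following uses scikit-learn's PLSRegression with cross-validated RMSE on synthetic spectra; the corn and Gardenia data, the pretreatments, and the paper's error-weight bookkeeping are not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic "spectra": 100 samples x 200 wavelengths; response depends on a few bands plus noise.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 200))
y = X[:, 10] + 0.5 * X[:, 50] - 0.8 * X[:, 120] + rng.normal(scale=0.2, size=100)

for n_lv in (1, 2, 4, 8, 16):                      # candidate numbers of latent variables
    model = PLSRegression(n_components=n_lv)
    y_cv = cross_val_predict(model, X, y, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
    print(f"{n_lv:2d} latent variables: RMSECV = {rmsecv:.3f}")
```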
NASA Technical Reports Server (NTRS)
Ni, Jianjun (David)
2012-01-01
This presentation discusses an analysis approach to evaluate the interuser interference for Direct-Sequence Spread-Spectrum (DSSS) Systems for Space Network (SN) Users. Part I of this analysis shows that the correlation property of pseudo noise (PN) sequences is the critical factor which determines the interuser interference performance of the DSSS system. For non-standard DSSS systems in which the PN sequence's period is much larger than one data symbol duration, it is the partial-period cross-correlation that determines the system performance. This study reveals through an example that a well-designed PN sequence set (e.g. Gold Sequence, in which the cross-correlation for a whole period is well controlled) may have non-controlled partial-period cross-correlation which could cause severe interuser interference for a DSSS system. Since the analytical derivation of the performance metric (bit error rate or signal-to-noise ratio) based on partial-period cross-correlation is prohibitive, the performance degradation due to partial-period cross-correlation will be evaluated using simulation in Part II of this analysis in the future.
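A toy numerical illustration of the whole-period versus partial-period distinction is sketched below using two length-31 m-sequences from simple LFSRs. The feedback taps and the eight-chip window are illustrative choices and do not form the Gold set analyzed in this work; the point is only that the normalized partial-period cross-correlation can be much larger than the whole-period value.

```python
import numpy as np

def mseq(taps, seed=1):
    """+/-1-valued m-sequence from a Fibonacci LFSR; taps are 1-indexed stages of a primitive feedback."""
    m = max(taps)
    state = [(seed >> i) & 1 for i in range(m)]      # non-zero initial register contents
    chips = []
    for _ in range(2 ** m - 1):
        chips.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[m - t]
        state = state[1:] + [fb]
    return 1 - 2 * np.array(chips)

a = mseq([5, 2])          # illustrative maximal-length sequence of period 31
b = mseq([5, 4, 3, 2])    # a second, distinct maximal-length sequence of period 31
N, L = a.size, 8          # full period and partial-period window length (one "data symbol")

full = max(abs(np.dot(a, np.roll(b, s))) for s in range(N)) / N
partial = max(abs(np.dot(a[k:k + L], np.roll(b, s)[k:k + L]))
              for s in range(N) for k in range(N - L)) / L
print(f"max normalized cross-correlation: full period {full:.2f}, partial period {partial:.2f}")
```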
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-world case studies investigated in this study included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
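A compact sketch of the scale-wise skill idea follows: an à trous (stationary) wavelet decomposition of observed and simulated series, with a Nash-Sutcliffe-type criterion computed per scale. The B3-spline filter and the toy series with a timing error are standard or illustrative choices and do not reproduce the paper's MNSC/MNRMSE definitions exactly.

```python
import numpy as np

def atrous(x, levels):
    """A trous decomposition with the B3-spline kernel; returns detail coefficients per scale and the residual."""
    h0 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c, details = x.astype(float), []
    for j in range(levels):
        h = np.zeros((len(h0) - 1) * 2 ** j + 1)
        h[:: 2 ** j] = h0                         # insert 2^j - 1 zeros between taps ("holes")
        c_next = np.convolve(c, h, mode="same")
        details.append(c - c_next)
        c = c_next
    return details, c

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of sim against obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

t = np.arange(512)
obs = np.sin(2 * np.pi * t / 64) + 0.3 * np.sin(2 * np.pi * t / 8)
sim = np.sin(2 * np.pi * (t - 3) / 64) + 0.3 * np.sin(2 * np.pi * t / 8)   # timing error at the slow scale only

d_obs, _ = atrous(obs, 4)
d_sim, _ = atrous(sim, 4)
for j, (do, ds) in enumerate(zip(d_obs, d_sim), start=1):
    print(f"scale {j} (~{2 ** j} steps): NSE = {nse(do, ds):.3f}")
print(f"overall NSE = {nse(obs, sim):.3f}")
```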
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1997-01-01
We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.
Geometric Quality Assessment of LIDAR Data Based on Swath Overlap
NASA Astrophysics Data System (ADS)
Sampath, A.; Heidemann, H. K.; Stensaas, G. L.
2016-06-01
This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error indicating risk of accepting potentially unsuitable data). For example, if the overlap area is too small or if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. To avoid this, the following are suggested to be used as criteria for defining the inter-swath quality of data: a) Median Discrepancy Angle; b) Mean and RMSD of Horizontal Errors using DQM measured on sloping surfaces; and c) RMSD for sampled locations from flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points, depending on the surface roughness, be uniformly sampled in the overlapping regions of the point cloud to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-Plane distance based data quality measures are determined for each sample point. These measurements are used to determine the above-mentioned parameters. This paper details the measurements and analysis of measurements required to determine these metrics, i.e. Discrepancy Angle, Mean and RMSD of errors in flat regions and horizontal errors obtained using measurements extracted from sloping regions (slope greater than 10 degrees). The research is a result of an ad-hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
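The point-to-plane data quality measure referred to above can be sketched as follows: for a sampled point from one swath, fit a plane to its neighbours in the overlapping swath and take the distance to that plane. The synthetic points, neighbour selection, and point counts are illustrative; the DQM as specified by the working group is not reproduced in detail.

```python
import numpy as np

def point_to_plane(p, neighbours):
    """Distance from point p to the least-squares plane through its neighbours (plane fit via SVD)."""
    centroid = neighbours.mean(axis=0)
    _, _, vt = np.linalg.svd(neighbours - centroid)
    normal = vt[-1]                      # direction of smallest variance = plane normal
    return float(np.dot(p - centroid, normal))   # signed up to the arbitrary normal orientation

rng = np.random.default_rng(4)
# Hypothetical sloped surface sampled by swath B, plus one conjugate point from swath A.
xy = rng.uniform(0.0, 2.0, size=(50, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0.0, 0.02, size=50)   # gentle slopes plus noise
swath_b = np.column_stack([xy, z])
p_a = np.array([1.0, 1.0, 0.2 * 1.0 + 0.1 * 1.0 + 0.05])               # 5 cm vertical discrepancy

d = point_to_plane(p_a, swath_b)
print(f"point-to-plane discrepancy: {d:+.3f} (same units as the coordinates)")
```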
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models generally treat the CMM as a rigid body, which requires a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1996-01-01
I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity, and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration. I have also developed a new opacity-sampling version of my model atmosphere program called ATLAS12. It recognizes more than 1000 atomic and molecular species, each in up to 10 isotopic forms. It can treat all ions of the elements up through Zn and the first 5 ions of heavier elements up through Es. The elemental and isotopic abundances are treated as variables with depth. The fluxes predicted by ATLAS12 are not accurate in intermediate or narrow bandpass intervals because the sample size is too small. A special stripped version of the spectrum synthesis program SYNTHE is used to generate the surface flux for the converged model using the line data on CD-ROMs 1 and 15. ATLAS12 can be used to produce improved models for Am and Ap stars. It should be very useful for investigating diffusion effects in atmospheres. It can be used to model exciting stars for H II regions with abundances consistent with those of the H II region. These programs and line files will be distributed on CD-ROMs.
CE: Nursing's Evolving Role in Patient Safety.
Kowalski, Sonya L; Anthony, Maureen
2017-02-01
: Background: In its 1999 report To Err Is Human: Building a Safer Health System, the Institute of Medicine (IOM) suggested that between 44,000 and 98,000 Americans die annually as a result of medical errors. The report urged health care institutions to break the silence surrounding such errors and to implement changes that would promote a culture of safety. Our aim in conducting this content analysis of AJN articles was to explore the nurse's historical and contemporary role in promoting patient safety. We chose to focus on AJN because, as the oldest continuously published nursing journal, it provided a unique opportunity for us to view trends in nursing practice over more than 100 years. We reviewed all AJN tables of contents from 1900 through 2015, identifying for inclusion articles with titles that suggested a focus on nursing care, patient safety, or clinical content. We then read and analyzed each of the final 1,086 articles over a period of nine months. Our content analysis indicates that the early articles (from 1900 through 1920) focused on such safety measures as asepsis and the newly understood germ theory. In the 1930s, articles proposed methods for preventing medication errors and encouraged the development of written procedures to standardize care. During World War II, nurse authors identified improved patient survival rates with the use of "shock wards" and recovery rooms. The 1950s saw the emergence of progressive patient care initiatives, through which patients were assigned to various levels of care (intensive, intermediate, self, long-term, or home care) based on patient acuity. The 1960s brought increasingly complex equipment and medication regimens, which created safety problems. Hospital-acquired infections were recognized. Unit-dose medication was instituted in the 1970s. In the next two decades, medication and nursing-procedure safety were emphasized. From 2000 to 2015, articles looked beyond human performance as causes of health care errors to systemic factors, such as poor communication, patient-nurse ratios, provider skill mix, disruptive or inappropriate provider behavior, shift work, and long working hours. Emphasis on patient safety increased as patient care became more complex. As nurses developed a professional identity, they often put a spotlight on safety concerns and solutions. The IOM report, which encouraged research focused on systemic solutions to errors, was instrumental in furthering the very culture of safety that the nursing profession had championed.
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
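In modern matrix form, the error coefficients described above follow from ordinary least-squares error propagation: with a library matrix A (extinction of each nucleoside at each wavelength), the covariance of the fitted concentrations is sigma^2 (A^T A)^{-1}, so the square roots of its diagonal act as error coefficients per unit extinction error. The library values below are made-up numbers for illustration only, not the nucleoside spectra of the paper.

```python
import numpy as np

# Hypothetical 3-component "library": extinction coefficients at 6 wavelengths (columns = components).
A = np.array([
    [1.00, 0.20, 0.10],
    [0.80, 0.50, 0.15],
    [0.40, 0.90, 0.30],
    [0.20, 0.70, 0.60],
    [0.10, 0.30, 0.90],
    [0.05, 0.10, 0.95],
])

# Error coefficients: std of each fitted concentration per unit std of extinction measurement error.
error_coeff = np.sqrt(np.diag(np.linalg.inv(A.T @ A)))
print("error coefficients:", np.round(error_coeff, 3))

# Sanity check by simulation: fit noisy mixtures and compare empirical scatter with the prediction.
rng = np.random.default_rng(5)
c_true, sigma = np.array([1.0, 0.5, 0.8]), 0.01
fits = np.array([np.linalg.lstsq(A, A @ c_true + rng.normal(0.0, sigma, A.shape[0]), rcond=None)[0]
                 for _ in range(5000)])
print("empirical / predicted std ratio:", np.round(fits.std(axis=0) / (sigma * error_coeff), 2))
```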
Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis
NASA Technical Reports Server (NTRS)
Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.
2017-01-01
This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
NASA Astrophysics Data System (ADS)
Cheng, Irene; Zhang, Leiming; Blanchard, Pierrette
2014-10-01
Models describing the partitioning of atmospheric oxidized mercury (Hg(II)) between the gas and fine particulate phases were developed as a function of temperature. The models were derived from regression analysis of the gas-particle partitioning parameters, defined by a partition coefficient (Kp) and Hg(II) fraction in fine particles (fPBM) and temperature data from 10 North American sites. The generalized model, log(1/Kp) = 12.69-3485.30(1/T) (R2 = 0.55; root-mean-square error (RMSE) of 1.06 m3/µg for Kp), predicted the observed average Kp at 7 of the 10 sites. Discrepancies between the predicted and observed average Kp were found at the sites impacted by large Hg sources because the model had not accounted for the different mercury speciation profile and aerosol compositions of different sources. Site-specific equations were also generated from average Kp and fPBM corresponding to temperature interval data. The site-specific models were more accurate than the generalized Kp model at predicting the observations at 9 of the 10 sites as indicated by RMSE of 0.22-0.5 m3/µg for Kp and 0.03-0.08 for fPBM. Both models reproduced the observed monthly average values, except for a peak in Hg(II) partitioning observed during summer at two locations. Weak correlations between the site-specific model Kp or fPBM and observations suggest the role of aerosol composition, aerosol water content, and relative humidity factors on Hg(II) partitioning. The use of local temperature data to parameterize Hg(II) partitioning in the proposed models potentially improves the estimation of mercury cycling in chemical transport models and elsewhere.
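A small sketch of applying the generalized regression quoted above: evaluate Kp from temperature and convert it to a particulate fraction via the usual partitioning relation fPBM = Kp*PM/(1 + Kp*PM). The PM2.5 concentration is a placeholder value, and the fPBM conversion is the standard gas-particle relation rather than a formula stated in the abstract.

```python
def kp_from_temperature(t_kelvin):
    """Generalized model quoted in the abstract: log10(1/Kp) = 12.69 - 3485.30*(1/T), Kp in m^3/ug."""
    return 1.0 / 10.0 ** (12.69 - 3485.30 / t_kelvin)

def f_pbm(kp, pm_ug_m3):
    """Fraction of Hg(II) in the particle phase for a given particulate matter concentration (assumed relation)."""
    return kp * pm_ug_m3 / (1.0 + kp * pm_ug_m3)

pm = 10.0                                   # placeholder PM2.5 concentration (ug/m^3)
for t_c in (-10.0, 0.0, 10.0, 20.0, 30.0):
    kp = kp_from_temperature(t_c + 273.15)
    print(f"T = {t_c:5.1f} C: Kp = {kp:8.4f} m^3/ug, fPBM = {f_pbm(kp, pm):.2f}")
```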
Al-Masri, Maram M N; Ajaj, Mowaffak A; Hajeer, Mohammad Y; Al-Eed, Muataz S
2015-08-01
To evaluate the bone thickness and density in the lower incisors' region in orthodontically untreated adults, and to examine any possible relationship between thickness and density in different skeletal patterns using cone-beam computed tomography (CBCT). The CBCT records of 48 patients were obtained from the archive of the orthodontic department and comprised three groups of malocclusion (class I, II and III) with 16 patients in each group. Using OnDemand 3D software, sagittal sections were made for each lower incisor. Thicknesses and densities were measured at three levels of the root (cervical, middle and apical regions) from the labial and lingual sides. Accuracy and reliability tests were undertaken to assess the intraobserver reliability and to detect systematic error. Pearson correlation coefficients were calculated and one-way analysis of variance (ANOVA) was employed to detect significant differences among the three groups of skeletal malocclusion. Apical buccal thickness (ABT) in the four incisors was higher in class II and I patients than in class III patients (p < 0.05). There were significant differences between buccal and lingual surfaces at the apical and middle regions only in class II and III patients. Statistical differences were found between class I and II patients for the cervical buccal density (CBD) and between class II and III patients for apical buccal density (ABD). The relationship between bone thickness and density values ranged from strong at the cervical regions to weak at the apical regions. Sagittal skeletal patterns affect apical bone thickness and density at buccal surfaces of the four lower incisors' roots. Alveolar bone thickness and density increased from the cervical to the apical regions.
ENVIRONMENTAL EFFECTS ON STAR FORMATION ACTIVITY AT z ~ 0.9 IN THE COSMOS FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kajisawa, M.; Shioya, Y.; Taniguchi, Y.
2013-05-01
We investigated the fraction of [O II] emitters in galaxies at z ~ 0.9 as a function of the local galaxy density in the Hubble Space Telescope (HST) COSMOS 2 deg^2 field. [O II] emitters are selected by the narrowband excess technique with the NB711-band imaging data taken with Suprime-Cam on the Subaru telescope. We carefully selected 614 photo-z-selected galaxies with M_U3500 < -19.31 at z = 0.901 - 0.920, which includes 195 [O II] emitters, to directly compare the results with our previous study at z ~ 1.2. We found that the fraction is almost constant at 0.3 Mpc^-2 < Sigma_10th < 10 Mpc^-2. We also checked the fraction of galaxies with blue rest-frame colors of NUV - R < 2 in our photo-z-selected sample, and found that the fraction of blue galaxies does not significantly depend on the local density. On the other hand, the semi-analytic model of galaxy formation predicted that the fraction of star-forming galaxies at z ~ 0.9 decreases with increasing projected galaxy density even if the effects of the projection and the photo-z error in our analysis were taken into account. The fraction of [O II] emitters decreases from ~60% at z ~ 1.2 to ~30% at z ~ 0.9 independent of galaxy environment. The decrease of the [O II] emitter fraction could be explained mainly by the rapid decrease of star formation activity in the universe from z ~ 1.2 to z ~ 0.9.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
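The HEART part of the methodology combines a nominal (generic-task) unreliability with error-producing conditions (EPCs), each weighted by an assessed proportion of affect. A commonly cited form of that calculation is sketched below with made-up nominal and EPC values; these are placeholders, not values used in this research.

```python
# Minimal HEART-style calculation (illustrative numbers only).
nominal_hep = 0.003                      # generic task type unreliability (assumed)
epcs = [                                 # (max multiplier for the EPC, assessed proportion of affect)
    (17.0, 0.4),                         # e.g., shortage of time (assumed)
    (3.0, 0.2),                          # e.g., poor feedback (assumed)
]

hep = nominal_hep
for multiplier, proportion in epcs:
    hep *= (multiplier - 1.0) * proportion + 1.0   # standard HEART weighting of each EPC

print(f"assessed human error probability: {hep:.4f}")
```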
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
The InSAR technique is an important method for large-area DEM extraction. Several factors have significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, the error propagation model was derived assuming no coupling among different factors, which directly characterises the relationship between slant range error and height measurement error. Then a theory-based analysis in combination with TanDEM-X parameters was implemented to quantitatively evaluate the influence of slant range error on height measurement. In addition, simulation validation of the slant-range-induced InSAR error model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
Tolerance analysis of optical telescopes using coherent addition of wavefront errors
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to determine a 3 dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A 3 dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
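A brief sketch of the kind of Zernike bookkeeping described above: build a wavefront on a unit pupil from a few low-order terms (tilt, defocus and primary spherical) and report its RMS. The coefficients are arbitrary illustrative values and the polynomials are the standard unnormalized forms; this is not output of the Ramsey-Korsch package.

```python
import numpy as np

# Unit-pupil grid.
n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = r2 <= 1.0

# A few standard (unnormalized) Zernike terms with illustrative coefficients (in waves).
terms = {
    "tilt x":            (x,                               0.05),
    "defocus":           (2.0 * r2 - 1.0,                  0.10),
    "primary spherical": (6.0 * r2 ** 2 - 6.0 * r2 + 1.0,  0.02),
}

wavefront = sum(coeff * z for z, coeff in terms.values())
w = wavefront[pupil]
rms = np.sqrt(np.mean((w - w.mean()) ** 2))
print(f"RMS wavefront error over the pupil: {rms:.4f} waves")
```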
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
A Study of Reading Errors Using Goodman's Miscue Analysis and Cloze Procedure.
ERIC Educational Resources Information Center
Farren, Sean N.
A study of 11 boys, aged 12 to 14 with low reading ability, was conducted to discover what kinds of errors they made and whether or not differences might exist between error patterns in silent and oral reading. Miscue analysis was used to test oral reading while cloze procedures were used to test silent reading. Errors were categorized according…
Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."
ERIC Educational Resources Information Center
Moustafa, Margaret Heiss
Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve fitting and distribution fitting techniques, such as multiple regression analysis, and maximum likelihood estimation to see how much of the variability in the error rates can be explained by these. We have also used goodness of fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
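One simple way to regress error counts on workload-type covariates, in the spirit of the analysis described above, is a Poisson GLM with the number of files radiated as an exposure term. The data below are synthetic placeholders, and statsmodels is only one of several reasonable tools; this is not the mission's actual model.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in data: per-period error counts vs. workload and novelty scores.
rng = np.random.default_rng(6)
n_periods = 80
files_radiated = rng.integers(5, 60, size=n_periods)
workload = rng.uniform(0.0, 1.0, size=n_periods)          # subjective workload estimate
novelty = rng.uniform(0.0, 1.0, size=n_periods)           # operational novelty estimate
rate = 0.01 * np.exp(1.0 * workload + 0.5 * novelty)      # assumed per-file error rate
errors = rng.poisson(rate * files_radiated)

# Poisson regression with log link; files radiated enters as the exposure (offset on the log scale).
X = sm.add_constant(np.column_stack([workload, novelty]))
fit = sm.GLM(errors, X, family=sm.families.Poisson(), exposure=files_radiated).fit()
print(fit.params)   # intercept and log rate ratios for workload and novelty
```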
Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz
2009-01-01
Background Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising of computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs - with a computer-recorded diagnosis of asthma being prescribed beta-blockers - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299 PMID:19409095
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
Stanford, Tyman E; Bagley, Christopher J; Solomon, Patty J
2016-01-01
Proteomic matrix-assisted laser desorption/ionisation (MALDI) linear time-of-flight (TOF) mass spectrometry (MS) may be used to produce protein profiles from biological samples with the aim of discovering biomarkers for disease. However, the raw protein profiles suffer from several sources of bias or systematic variation which need to be removed via pre-processing before meaningful downstream analysis of the data can be undertaken. Baseline subtraction, an early pre-processing step that removes the non-peptide signal from the spectra, is complicated by the following: (i) each spectrum has, on average, wider peaks for peptides with higher mass-to-charge ratios (m/z), and (ii) the time-consuming and error-prone trial-and-error process for optimising the baseline subtraction input arguments. With reference to the aforementioned complications, we present an automated pipeline that includes (i) a novel 'continuous' line segment algorithm that efficiently operates over data with a transformed m/z-axis to remove the relationship between peptide mass and peak width, and (ii) an input-free algorithm to estimate peak widths on the transformed m/z scale. The automated baseline subtraction method was deployed on six publicly available proteomic MS datasets using six different m/z-axis transformations. Optimality of the automated baseline subtraction pipeline was assessed quantitatively using the mean absolute scaled error (MASE) when compared to a gold-standard baseline subtracted signal. Several of the transformations investigated were able to reduce, if not entirely remove, the peak width and peak location relationship resulting in near-optimal baseline subtraction using the automated pipeline. The proposed novel 'continuous' line segment algorithm is shown to far outperform naive sliding window algorithms with regard to the computational time required. The improvement in computational time was at least four-fold on real MALDI TOF-MS data and at least an order of magnitude on many simulated datasets. The advantages of the proposed pipeline include informed and data specific input arguments for baseline subtraction methods, the avoidance of time-intensive and subjective piecewise baseline subtraction, and the ability to automate baseline subtraction completely. Moreover, individual steps can be adopted as stand-alone routines.
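For orientation, a naive sliding-window baseline subtraction of the kind the pipeline is compared against can be written in a few lines (rolling minimum followed by smoothing). This is the simple reference approach, not the authors' 'continuous' line segment algorithm, and the spectrum below is synthetic.

```python
import numpy as np
from scipy.ndimage import minimum_filter1d, uniform_filter1d

# Synthetic MALDI-like spectrum: smooth decaying baseline plus a few Gaussian peptide peaks and noise.
mz = np.linspace(1000.0, 10000.0, 9000)
baseline_true = 200.0 * np.exp(-(mz - 1000.0) / 3000.0)
peaks = sum(a * np.exp(-0.5 * ((mz - c) / 5.0) ** 2) for a, c in [(120, 2000), (80, 4500), (60, 7000)])
rng = np.random.default_rng(7)
signal = baseline_true + peaks + rng.normal(0.0, 3.0, mz.size)

# Naive sliding-window baseline: rolling minimum (window wider than any peak), then smoothing.
window = 301
baseline_est = uniform_filter1d(minimum_filter1d(signal, size=window), size=window)
corrected = np.clip(signal - baseline_est, 0.0, None)

print(f"mean absolute baseline error: {np.mean(np.abs(baseline_est - baseline_true)):.1f} (intensity units)")
```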
Photoacoustic spectroscopic imaging of intra-tumor heterogeneity and molecular identification
NASA Astrophysics Data System (ADS)
Stantz, Keith M.; Liu, Bo; Cao, Minsong; Reinecke, Dan; Miller, Kathy; Kruger, Robert
2006-02-01
Purpose. To evaluate photoacoustic spectroscopy as a potential imaging modality capable of measuring intra-tumor heterogeneity and spectral features associated with hemoglobin and the molecular probe indocyanine green (ICG). Material and Methods. Immune deficient mice were injected with wildtype and VEGF enhanced MCF-7 breast cancer cells or SKOV3x ovarian cancer cells, which were allowed to grow to a size of 6-12 mm in diameter. Two mice were imaged alive and after euthanasia for (oxy/deoxy)-hemoglobin content. A 0.4 mL volume of 1 μg/mL concentration of ICG was injected into the tail veins of two mice prior to imaging using the photoacoustic computed tomography (PCT) spectrometer (Optosonics, Inc., Indianapolis, IN 46202) scanner. Mouse images were acquired for wavelengths spanning 700-920 nm, after which the major organs were excised, and similarly imaged. A histological study was performed by sectioning the organ and optically imaging the fluorescence distribution. Results. Calibration of PCT-spectroscopy with different samples of oxygenated blood reproduced a hemoglobin dissociation curve consistent with the empirical formula with an average error of 5.6%. In vivo PCT determination of SaO2 levels within the tumor vasculature was measurably tracked and spatially correlated to the periphery of the tumor. Statistical and systematic errors associated with hypoxia were estimated to be 10 and 13%, respectively. Measured ICG concentrations determined by contrast-differential PCT images in excised organs (tumor, liver) were approximately 0.8 μg/mL, consistent with fluorescent histological results. Also, the difference in the ratio of ICG concentration in the gall bladder-to-vasculature between the mice was consistent with excretion times between the two mice. Conclusion. PCT spectroscopic imaging has been shown to be a noninvasive modality capable of imaging intra-tumor heterogeneity of (oxy/deoxy)-hemoglobin and ICG in vivo, with an estimated error in SaO2 at 17% and in ICG at 0.8 μg/mL in excised tissue. Spectroscopic analysis techniques, probes, and calibration methods are being further developed to improve sensitivity to both exogenous molecular probes and the (oxy/deoxy)-hemoglobin fraction.
Biomarker for Glycogen Storage Diseases
2017-07-03
Fructose Metabolism, Inborn Errors; Glycogen Storage Disease; Glycogen Storage Disease Type I; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Glycogen Storage Disease Type VI; Glycogen Storage Disease Type VII; Glycogen Storage Disease Type VIII
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of the Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. Accuracy study of the model shows that the prediction deviation of pointing error for the different errors is less than 4.1 × 10^-5° when the error amplitude is 0.1°. Detailed analyses of errors indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. The prism tilt has a relatively big effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively big when the difference of these two prism rotational angles equals 0 or π, while it is relatively small when the difference equals π/2. The novelty of these results suggests that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
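Under a first-order (thin prism, paraxial) approximation, which is a simplification of the refraction-based model above, each prism deviates the beam by roughly delta = (n - 1)*alpha in the direction set by its rotation angle, and the two deviations add vectorially. The sketch below uses that approximation with assumed prism parameters to show the pointing shift caused by a small rotational error.

```python
import numpy as np

def pointing(theta1, theta2, delta=np.deg2rad(10.0)):
    """First-order Risley pointing: vector sum of the two prism deviations (equal prisms assumed)."""
    return delta * (np.array([np.cos(theta1), np.sin(theta1)]) +
                    np.array([np.cos(theta2), np.sin(theta2)]))   # small-angle deviation components (rad)

th1, th2 = np.deg2rad(30.0), np.deg2rad(75.0)         # commanded prism rotation angles
err = np.deg2rad(0.1)                                 # rotational error on prism 1

nominal = pointing(th1, th2)
perturbed = pointing(th1 + err, th2)
shift = np.linalg.norm(perturbed - nominal)
print(f"pointing shift from a 0.1 deg rotation error: {np.rad2deg(shift) * 3600:.1f} arcsec")
```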
VizieR Online Data Catalog: JHK and IRAC photometry of Sh2-90 YSOs (Samal+, 2014)
NASA Astrophysics Data System (ADS)
Samal, M. R.; Zavagno, A.; Deharveng, L.; Molinari, S.; Ojha, D. K.; Paradis, D.; Tige, J.; Pandey, A. K.; Russeil, D.
2014-03-01
To identify YSOs, we observed the Sh2-90 complex at NIR bands with the WIRCAM instrument at the 3.6 m CFHT telescope, and supplement these observations with the GLIMPSE point source catalog from Benjamin et al. (2003PASP..115..953B, Cat. II/293). The complex was observed at NIR bands on 2006 July 8 using the WIRCAM camera on the CFHT 3.6 m telescope. This table includes photometry of the identified YSOs at NIR and Spitzer-IRAC bands. In the table, columns one and two give the coordinates of the YSOs. The following six columns provide the JHK magnitudes and associated errors obtained in our observations, while the next eight columns list the Spitzer-IRAC magnitudes and associated errors. The last column provides the running sequence number of the table; sequence numbers 1 to 21, 22 to 55 and 56 to 129 correspond to the Class I, Class II and NIR-excess YSOs, respectively. (1 data file).
Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography
ERIC Educational Resources Information Center
Duranovic, Mirela
2017-01-01
The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.; Keil, Stephen L.; Worden, Simon P.
2014-01-01
Analysis of more than 36 years of time series of seven parameters measured in the NSO/AFRL/Sac Peak K-line monitoring program elucidates five components of the variation: (1) the solar cycle (period approx. 11 years), (2) quasi-periodic variations (periods approx. 100 days), (3) a broadband stochastic process (wide range of periods), (4) rotational modulation, and (5) random observational errors. Correlation and power spectrum analyses elucidate periodic and aperiodic variation of the chromospheric parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure of (1) and (2) at time scales in the range of approx. 0.1-10 years. These results, using only full-disk data, further suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (3) consists of variations over a range of timescales, in the manner of a 1/f random noise process. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (1), but not in the stochastic process (3). Component (4) characterizes differential rotation of the active regions, and (5) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The recent data suggest that the current cycle is starting late and may be relatively weak. The data analyzed in this paper can be found at the National Solar Observatory web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.
NASA Astrophysics Data System (ADS)
Prades, Cristina; García-Olmo, Juan; Romero-Prieto, Tomás; García de Ceca, José L.; López-Luque, Rafael
2010-06-01
The procedures used today to characterize cork plank for the manufacture of cork bottle stoppers continue to be based on a traditional, manual method that is highly subjective. Furthermore, there is no specific legislation regarding cork classification. The objective of this viability study is to assess the potential of near-infrared spectroscopy (NIRS) technology for characterizing cork plank according to the following variables: aspect or visual quality, porosity, moisture and geographical origin. In order to calculate the porosity coefficient, an image analysis program was specifically developed in Visual Basic language for a desktop scanner. A set comprising 170 samples from two geographical areas of Andalusia (Spain) was classified into eight quality classes by visual inspection. Spectra were obtained in the transverse and tangential sections of the cork planks using an NIRSystems 6500 SY II reflectance spectrophotometer. The quantitative calibrations showed cross-validation coefficients of determination of 0.47 for visual quality, 0.69 for porosity and 0.66 for moisture. The results obtained using NIRS technology are promising considering the heterogeneity and variability of a natural product such as cork in spite of the fact that the standard error of cross validation (SECV) in the quantitative analysis is greater than the standard error of laboratory (SEL) for the three variables. The qualitative analysis regarding geographical origin achieved very satisfactory results. Applying these methods in industry will permit quality control procedures to be automated, as well as establishing correlations between the different classification systems currently used in the sector. These methods can be implemented in the cork chain of custody certification and will also provide a certainly more objective tool for assessing the economic value of the product.
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system-level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system-level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
A simplified chair-side remount technique using customized mounting platforms.
Chauhan, Mamta Devendrakumar; Dange, Shankar Pandharinath; Khalikar, Arun Narayan; Vaidya, Smita Padmakar
2012-08-01
Correct occlusal relationships are part of successful prosthetic treatment for edentulous patients. Fabrication of complete dentures comprises clinical and laboratory procedures that should be executed accurately to achieve success with the fabricated dentures. Errors occurring during the clinical and laboratory procedures may subsequently lead to occlusal errors in the final prosthesis. These occlusal errors can be corrected in two ways: (i) in the patient's mouth, or (ii) by recording a new centric relation and remounting the dentures on an articulator. The latter method is more feasible because the mobility of the denture base on the mucosa in the oral cavity does not permit the identification of premature contacts in centric occlusion and tooth-guided eccentric excursions. This article describes a modest and effective clinical chair-side remount procedure using customized mounting platforms.
Multilevel Sequential Monte Carlo Samplers for Normalizing Constants
Moral, Pierre Del; Jasra, Ajay; Law, Kody J. H.; ...
2017-08-24
This article considers the sequential Monte Carlo (SMC) approximation of ratios of normalizing constants associated to posterior distributions which in principle rely on continuum models. Therefore, the Monte Carlo estimation error and the discrete approximation error must be balanced. A multilevel strategy is utilized to substantially reduce the cost to obtain a given error level in the approximation as compared to standard estimators. Two estimators are considered and relative variance bounds are given. The theoretical results are numerically illustrated for two Bayesian inverse problems arising from elliptic partial differential equations (PDEs). The examples involve the inversion of observations of the solution of (i) a 1-dimensional Poisson equation to infer the diffusion coefficient, and (ii) a 2-dimensional Poisson equation to infer the external forcing.
Nowak, Michał S; Goś, Roman; Smigielski, Janusz
2008-01-01
To determine the prevalence of refractive errors in a population, a retrospective review of medical examinations for entry to military service from The Area Military Medical Commission in Lodz was undertaken. Ophthalmic examinations were performed and the results were reviewed statistically. Statistical analysis revealed that refractive errors occurred in 21.68% of the population. The most common refractive error was myopia. 1) The most common ocular diseases are refractive errors, especially myopia (21.68% in total). 2) Refractive surgery and contact lenses should be allowed as possible corrections of refractive errors for military service.
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1974-01-01
A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.
Study of chromatic adaptation using memory color matches, Part II: colored illuminants.
Smet, Kevin A G; Zhai, Qiyan; Luo, Ming R; Hanselaer, Peter
2017-04-03
In a previous paper, 12 corresponding color data sets were derived for 4 neutral illuminants using the long-term memory colors of five familiar objects. The data were used to test several linear (one-step and two-step von Kries, RLAB) and nonlinear (Hunt and Nayatani) chromatic adaptation transforms (CAT). This paper extends that study to a total of 156 corresponding color sets by including 9 more colored illuminants: 2 with low and 2 with high correlated color temperatures, as well as 5 representing high-chroma adaptive conditions. As in the previous study, a two-step von Kries transform in which the degree of adaptation D is optimized to minimize the ΔEu'v' prediction errors outperformed all other tested models for both the memory color and literature corresponding color sets, with lower prediction errors for the memory color set. Most of the transforms tested, except the two- and one-step von Kries models with optimized D, showed large errors for corresponding color subsets that contained non-neutral adaptive conditions, as all of them tended to overestimate the effective degree of adaptation in this study. An analysis of the sensor-space primaries in which the adaptation is performed found that this choice has little impact compared to the choice of model. Finally, the effective degree of adaptation for the 13 illumination conditions (4 neutral + 9 colored) was successfully modelled using a bivariate Gaussian in a MacLeod-Boynton-like chromaticity diagram.
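For reference, a one-step von Kries transform with a degree-of-adaptation parameter D simply rescales the sensor (cone-like) responses by the ratio of the adapting-white responses. The sketch below is a generic illustration, not the authors' implementation; the CAT02-style matrix, the illuminant white points, and the stimulus values are assumptions used only to show the mechanics.

```python
import numpy as np

# CAT02-style matrix mapping XYZ to a sharpened LMS-like sensor space (one of
# several possible sensor-space choices; the paper compares such choices).
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def von_kries_one_step(xyz, white_test, white_ref, D=1.0):
    """Map a stimulus seen under the test illuminant to its corresponding
    color under the reference illuminant, with degree of adaptation D."""
    lms = M_CAT02 @ xyz
    lms_wt = M_CAT02 @ white_test
    lms_wr = M_CAT02 @ white_ref
    # Partial adaptation: blend the full von Kries gain with no adaptation (gain = 1).
    gain = D * (lms_wr / lms_wt) + (1.0 - D)
    return np.linalg.solve(M_CAT02, gain * lms)

# Hypothetical example: a stimulus seen under illuminant A mapped toward D65.
white_A   = np.array([109.85, 100.0,  35.58])
white_D65 = np.array([ 95.05, 100.0, 108.88])
stimulus  = np.array([ 40.0,   30.0,  20.0])
print(von_kries_one_step(stimulus, white_A, white_D65, D=0.8))
```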
Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi
2016-01-01
Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) jointed parallel mechanism with flexible joints to develop a large-measurement-range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and kind I and kind II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244
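The least-squares step in a calibration of this kind typically amounts to estimating a 6×6 matrix that maps the six raw sensor outputs to the six applied force/torque components. A minimal sketch of that idea, using synthetic data rather than the authors' measurements, is given below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: each column is one calibration load case.
n_cases = 50
F_applied = rng.uniform(-100.0, 100.0, size=(6, n_cases))     # known applied wrenches
C_true = np.eye(6) + 0.05 * rng.normal(size=(6, 6))           # unknown "true" sensor matrix
V_measured = C_true @ F_applied + 0.1 * rng.normal(size=(6, n_cases))  # raw outputs + noise

# Least-squares estimate of the calibration matrix G such that F ≈ G @ V.
G, *_ = np.linalg.lstsq(V_measured.T, F_applied.T, rcond=None)
G = G.T

# Overall calibration (linearity) error: relative residual over the calibration set.
residual = F_applied - G @ V_measured
rel_error = np.linalg.norm(residual) / np.linalg.norm(F_applied)
print(f"relative calibration error: {100 * rel_error:.2f}%")
```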
Metabolomics as a tool in the identification of dietary biomarkers.
Gibbons, Helena; Brennan, Lorraine
2017-02-01
Current dietary assessment methods including FFQ, 24-h recalls and weighed food diaries are associated with many measurement errors. In an attempt to overcome some of these errors, dietary biomarkers have emerged as a complementary approach to these traditional methods. Metabolomics has developed as a key technology for the identification of new dietary biomarkers and to date, metabolomic-based approaches have led to the identification of a number of putative biomarkers. The three approaches generally employed when using metabolomics in dietary biomarker discovery are: (i) acute interventions where participants consume specific amounts of a test food, (ii) cohort studies where metabolic profiles are compared between consumers and non-consumers of a specific food and (iii) the analysis of dietary patterns and metabolic profiles to identify nutritypes and biomarkers. The present review critiques the current literature in terms of the approaches used for dietary biomarker discovery and gives a detailed overview of the currently proposed biomarkers, highlighting steps needed for their full validation. Furthermore, the present review also evaluates areas such as current databases and software tools, which are needed to advance the interpretation of results and therefore enhance the utility of dietary biomarkers in nutrition research.
NASA Astrophysics Data System (ADS)
El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.
2007-11-01
We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
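Allan variance for a static sensor record can be computed directly from cluster averages at each candidate averaging time. The short sketch below is a generic, non-overlapping implementation (not the authors' code) applied to a synthetic drift-plus-noise signal.

```python
import numpy as np

def allan_variance(x, fs, taus):
    """Non-overlapping Allan variance of a signal x sampled at fs Hz,
    evaluated at the averaging times in taus (seconds)."""
    x = np.asarray(x, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * fs))          # samples per cluster
        if m < 1 or 2 * m > len(x):
            out.append(np.nan)            # averaging time not supported by record length
            continue
        n_clusters = len(x) // m
        means = x[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(means) ** 2))
    return np.array(out)

# Example: white noise plus a slow drift, sampled at 100 Hz for one hour.
fs = 100.0
t = np.arange(0, 3600, 1.0 / fs)
signal = 0.01 * np.random.default_rng(1).normal(size=t.size) + 1e-6 * t
taus = np.logspace(-1, 3, 20)
adev = np.sqrt(allan_variance(signal, fs, taus))   # Allan deviation vs. averaging time
print(np.column_stack([taus, adev]))
```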
THE SPECTRUM OF THORIUM FROM 250 nm TO 5500 nm: RITZ WAVELENGTHS AND OPTIMIZED ENERGY LEVELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redman, Stephen L.; Nave, Gillian; Sansonetti, Craig J.
2014-03-01
We have made precise observations of a thorium-argon hollow cathode lamp emission spectrum in the region between 350 nm and 1175 nm using a high-resolution Fourier transform spectrometer. Our measurements are combined with results from seven previously published thorium line lists to re-optimize the energy levels of neutral, singly, and doubly ionized thorium (Th I, Th II, and Th III). Using the optimized level values, we calculate accurate Ritz wavelengths for 19,874 thorium lines between 250 nm and 5500 nm (40,000 cm⁻¹ to 1800 cm⁻¹). We have also found 102 new thorium energy levels. A systematic analysis of previous measurements in light of our new results allows us to identify and propose corrections for systematic errors in Palmer and Engleman and typographical errors and incorrect classifications in Kerber et al. We also found a large scatter with respect to the thorium line list of Lovis and Pepe. We anticipate that our Ritz wavelengths will lead to improved measurement accuracy for current and future spectrographs that make use of thorium-argon or thorium-neon lamps as calibration standards.
Analysis of D0 -> K anti-K X Decays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jessop, Colin P.
2003-06-06
Using data taken with the CLEO II detector, they have studied the decays of the D0 to K+K-, K0 anti-K0, KS0KS0, KS0KS0π0, and K+K-π0. The authors present significantly improved results for B(D0 → K+K-) = (0.454 ± 0.028 ± 0.035)%, B(D0 → K0 anti-K0) = (0.054 ± 0.012 ± 0.010)%, and B(D0 → KS0KS0KS0) = (0.074 ± 0.010 ± 0.015)%, where the first errors are statistical and the second errors are the estimate of their systematic uncertainty. They also present a new upper limit B(D0 → KS0KS0π0) < 0.059% at the 90% confidence level and the first measurement of B(D0 → K+K-π0) = (0.14 ± 0.04)%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravo, R.R.; Shulman, L.P.; Tharapel, A.T.
The occurrence of multiple aneuploidy is quite rare, and the mechanisms by which it arises have not been well characterized except in cases of 49,XXXXX and 49,XXXXY. These originate by successive nondisjunction of the X chromosomes in meiosis I and meiosis II, giving rise to a gamete with four X chromosomes. Here, we describe a case of double trisomy involving chromosome 21 and the X chromosome. The 19-year-old patient underwent amniocentesis at 17.5 weeks gestation following a positive serum analyte screen (estimated 1/120 risk of Down syndrome). Ultrasound findings at the time of the procedure were ventricular septal defect, dilated renal calyx, clinodactyly, and a two-vessel cord. Cytogenetic analysis revealed a nonmosaic karyotype of 48,XXX,+21. The couple opted for pregnancy termination. A confirmatory karyotype could not be obtained due to microbial contamination of the products of conception. Therefore, we used a "touch prep" procedure to deposit fetal cells on microscope slides and performed interphase FISH (fluorescence in situ hybridization) to confirm the presence of three X chromosomes and three copies of chromosome 21. Microsatellite polymorphisms in the mother, father, and fetus were used to evaluate segregation of the X and 21 chromosomes. Based on the results obtained with the most centromeric loci, both extra chromosomes arose from nondisjunction in maternal meiosis II. More distal markers showed evidence of recombination in both chromosomes. To our knowledge, this is the first report of a double trisomy arising by this mechanism. Based on our results and those reported for tetrasomy/pentasomy X, we postulate that multiple aneuploidies are more likely to arise by related errors (involving a single chromosome or a single cell division) than by independent errors (in different cell divisions or different gametes).
Etzion, Y; Linker, R; Cogan, U; Shmulevich, I
2004-09-01
This study investigates the potential use of attenuated total reflectance spectroscopy in the mid-infrared range for determining protein concentration in raw cow milk. The determination of protein concentration is based on the characteristic absorbance of milk proteins, which includes 2 absorbance bands in the 1500 to 1700 cm(-1) range, known as the amide I and amide II bands, and absorbance in the 1060 to 1100 cm(-1) range, which is associated with phosphate groups covalently bound to casein proteins. To minimize the influence of the strong water band (centered around 1640 cm(-1)) that overlaps with the amide I and amide II bands, an optimized automatic procedure for accurate water subtraction was applied. Following water subtraction, the spectra were analyzed by 3 methods, namely simple band integration, partial least squares (PLS) and neural networks. For the neural network models, the spectra were first decomposed by principal component analysis (PCA), and the neural network inputs were the spectra principal components scores. In addition, the concentrations of 2 constituents expected to interact with the protein (i.e., fat and lactose) were also used as inputs. These approaches were tested with 235 spectra of standardized raw milk samples, corresponding to 26 protein concentrations in the 2.47 to 3.90% (weight per volume) range. The simple integration method led to very poor results, whereas PLS resulted in prediction errors of about 0.22% protein. The neural network approach led to prediction errors of 0.20% protein when based on PCA scores only, and 0.08% protein when lactose and fat concentrations were also included in the model. These results indicate the potential usefulness of Fourier transform infrared/attenuated total reflectance spectroscopy for rapid, possibly online, determination of protein concentration in raw milk.
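The chemometric step described above can be reproduced schematically with a standard PLS regression. The example below uses synthetic "spectra" (not the study's data) and scikit-learn only to show the cross-validated calibration workflow; the band positions and noise level are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic "spectra": 235 samples x 400 wavenumber channels, with the protein
# signal concentrated in two bands (a crude stand-in for the amide I/II region).
n_samples, n_channels = 235, 400
protein = rng.uniform(2.47, 3.90, size=n_samples)            # % (w/v), as in the study range
bands = np.zeros(n_channels)
bands[150:180] = 1.0                                          # pseudo amide I band
bands[200:220] = 0.6                                          # pseudo amide II band
spectra = np.outer(protein, bands) + 0.05 * rng.normal(size=(n_samples, n_channels))

# Cross-validated PLS calibration and its root-mean-square prediction error.
pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra, protein, cv=10).ravel()
rmsep = np.sqrt(np.mean((pred - protein) ** 2))
print(f"cross-validated prediction error: {rmsep:.3f}% protein")
```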
Tougas, Terrence P; Goodey, Adrian P; Hardwell, Gareth; Mitchell, Jolyon; Lyapustina, Svetlana
2017-02-01
The performance of two quality control (QC) tests for aerodynamic particle size distributions (APSD) of orally inhaled drug products (OIPs) is compared. One of the tests is based on the fine particle dose (FPD) metric currently expected by the European regulators. The other test, called efficient data analysis (EDA), uses the ratio of large particle mass to small particle mass (LPM/SPM), along with impactor sized mass (ISM), to detect changes in APSD for QC purposes. The comparison is based on analysis of APSD data from four products (two different pressurized metered dose inhalers (MDIs) and two dry powder inhalers (DPIs)). It is demonstrated that in each case, EDA is able to detect shifts and abnormalities that FPD misses. The lack of sensitivity on the part of FPD is due to its "aggregate" nature, since FPD is a univariate measure of all particles less than about 5 μm aerodynamic diameter, and shifts or changes within the range encompassed by this metric may go undetected. EDA is thus shown to be superior to FPD for routine control of OIP quality. This finding augments previously reported superiority of EDA compared with impactor stage groupings (favored by US regulators) for incorrect rejections (type I errors) when incorrect acceptances (type II errors) were adjusted to the same probability for both approaches. EDA is therefore proposed as a method of choice for routine quality control of OIPs in both European and US regulatory environments.
NASA Astrophysics Data System (ADS)
Christian, Paul M.
2002-07-01
This paper presents a demonstrated approach to significantly reduce the cost and schedule of non-real-time modeling and simulation, real-time HWIL simulation, and embedded code development. The tool and the methodology presented capitalize on a paradigm that has become a standard operating procedure in the automotive industry. The tool described is known as the Aerospace Toolbox, and it is based on the MathWorks Matlab/Simulink framework, which is a COTS application. Extrapolation of automotive industry data and initial applications in the aerospace industry show that the use of the Aerospace Toolbox can make significant contributions in the quest by NASA and other government agencies to meet aggressive cost reduction goals in development programs. Part I of this paper provided a detailed description of the GUI-based Aerospace Toolbox and how it is used in every step of a development program, from quick prototyping of concept developments that leverage built-in point-of-departure simulations through to detailed design, analysis, and testing. Some of the attributes addressed included its versatility in modeling 3 to 6 degrees of freedom, its library of flight-test-validated models (including physics, environments, hardware, and error sources), and its built-in Monte Carlo capability. Other topics covered in Part I included flight vehicle models and algorithms, and the covariance analysis package, Navigation System Covariance Analysis Tools (NavSCAT). Part II of this series will take a more in-depth look at the analysis and simulation capability and provide an update on the toolbox enhancements. It will also address how the Toolbox can be used as a design hub for Internet-based collaborative engineering tools such as NASA's Intelligent Synthesis Environment (ISE) and Lockheed Martin's Interactive Missile Design Environment (IMD).
Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.
2011-01-01
Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239
Basha, Shaik; Jaiswar, Santlal; Jha, Bhavanath
2010-09-01
The biosorption equilibrium isotherms of Ni(II) onto the marine brown algae Lobophora variegata, which was chemically modified with CaCl2, were studied and modeled. To predict the biosorption isotherms and to determine the characteristic parameters for process design, twenty-three one-, two-, three-, four- and five-parameter isotherm models were applied to the experimental data. The interaction among biosorbed molecules is attractive, and biosorption is carried out on energetically different sites and is an endothermic process. The five-parameter Fritz-Schluender model gives the most accurate fit, with high regression coefficient R2 (0.9911-0.9975) and F-ratio (118.03-179.96) values and low standard error SE (0.0902-0.1556) and residual sum of square error SSE (0.0012-0.1789) values for all experimental data in comparison to the other models. The biosorption isotherm models fitted the experimental data in the order: Fritz-Schluender (five-parameter) > Freundlich (two-parameter) > Langmuir (two-parameter) > Khan (three-parameter) > Fritz-Schluender (four-parameter). The thermodynamic parameters ΔG0, ΔH0 and ΔS0 have been determined, indicating that the sorption of Ni(II) onto L. variegata was spontaneous and endothermic in nature.
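Fitting competing isotherm models and comparing R2, SE, and SSE values can be reproduced with a standard nonlinear least-squares routine. The sketch below fits only the two-parameter Langmuir and Freundlich forms to synthetic equilibrium data; the five-parameter Fritz-Schluender model is omitted for brevity, and all numbers are illustrative assumptions rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

# Synthetic Ni(II) equilibrium data (Ce in mg/L, qe in mg/g), for illustration only.
Ce = np.array([5, 10, 20, 40, 80, 120, 160, 200], dtype=float)
qe = langmuir(Ce, 60.0, 0.05) + np.random.default_rng(2).normal(0, 1.0, Ce.size)

for name, model, p0 in [("Langmuir", langmuir, (50.0, 0.1)),
                        ("Freundlich", freundlich, (5.0, 2.0))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(Ce, *popt)
    sse = np.sum(resid ** 2)
    r2 = 1.0 - sse / np.sum((qe - qe.mean()) ** 2)
    print(f"{name}: params={np.round(popt, 3)}, SSE={sse:.3f}, R2={r2:.4f}")
```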
Evaluation of errors in quantitative determination of asbestos in rock
NASA Astrophysics Data System (ADS)
Baietto, Oliviero; Marini, Paola; Vitaliti, Martina
2016-04-01
The quantitative determination of the content of asbestos in rock matrices is a complex operation which is susceptible to important errors. The principal methodologies for the analysis are Scanning Electron Microscopy (SEM) and Phase Contrast Optical Microscopy (PCOM). Although the resolution of PCOM is inferior to that of SEM, PCOM analysis has several advantages, including greater representativity of the analyzed sample, more effective recognition of chrysotile, and a lower cost. The DIATI LAA internal methodology for PCOM analysis is based on a mild grinding of a rock sample, its subdivision into 5-6 grain size classes smaller than 2 mm, and a subsequent microscopic analysis of a portion of each class. PCOM is based on the optical properties of asbestos and of liquids with known refractive index in which the particles under analysis are immersed. The error evaluation in the analysis of rock samples, contrary to the analysis of airborne filters, cannot be based on a statistical distribution. For airborne filters, a binomial (Poisson) distribution, which theoretically defines the variation in the count of fibers resulting from the observation of analysis fields chosen randomly on the filter, can be applied. The analysis of rock matrices instead cannot lean on any statistical distribution, because the most important object of the analysis is the size of the asbestiform fibers and bundles of fibers observed, and the resulting relationship between the weight of the fibrous component and that of the granular one. The error evaluation generally provided by public and private institutions varies between 50 and 150 percent, but there are no specific studies that discuss the origin of the error or that link it to the asbestos content. Our work aims to provide a reliable estimation of the error in relation to the applied methodologies and to the total content of asbestos, especially for values close to the legal limits. The error assessments must be made through the repetition of the same analysis on the same sample, to estimate the error on the representativeness of the sample and the error related to the sensitivity of the operator, in order to provide a sufficiently reliable uncertainty of the method. We used about 30 natural rock samples with different asbestos contents, performing 3 analyses on each sample to obtain a trend sufficiently representative of the percentage. Furthermore, we performed 10 repetitions of the analysis on one chosen sample to define more specifically the error of the methodology.
ERIC Educational Resources Information Center
Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid
2014-01-01
The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…
ERIC Educational Resources Information Center
El-khateeb, Mahmoud M. A.
2016-01-01
The purpose of this study is to investigate the classes of errors made by the preparatory year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…
A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation
NASA Astrophysics Data System (ADS)
Zhang, Xubin; Tan, Zhe-Min
2017-04-01
The construction of background error covariances is a key component of three-dimensional variational data assimilation. Background errors at different scales, and interactions among them, are present in numerical weather prediction; however, the influence of these errors and their interactions cannot be represented in background error covariance statistics when estimated by the leading methods. It is therefore necessary to construct background error covariances influenced by multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. The comparison of the three background error covariance statistics, each influenced by information of errors at different scales, reveals that the background error variances are enhanced, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances are reduced at medium scales at the higher levels, while they show slight improvement at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information of larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when the information of larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are used in a data assimilation and model forecast system, and analysis-forecast cycles for a period of one month are conducted. Comparison of both the analyses and the forecasts from this system shows that the trends in the variation of the analysis increments with information of different scale errors introduced are consistent with the trends in the variation of the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the height of both the high- and low-level jets, and the analysis increments for both temperature and humidity are greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, the analysis increments for humidity are enhanced significantly at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and eventually improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial for the forecast quality of heavy (light) precipitation at large (small) scales, due to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
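In the NMC method referenced above, the background error covariance is approximated by the sample covariance of differences between pairs of forecasts valid at the same time (commonly 24 h minus 12 h forecasts). The toy sketch below, which uses synthetic forecast fields rather than the study's model output, shows the basic computation; the scaling factor and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a state vector of 100 grid points, 200 forecast pairs.
n_state, n_pairs = 100, 200

# Synthetic "24 h" and "12 h" forecasts valid at the same time; their difference
# stands in for the forecast (background) error sample used by the NMC method.
f24 = rng.normal(size=(n_pairs, n_state))
f12 = f24 + 0.3 * rng.normal(size=(n_pairs, n_state))
diffs = f24 - f12

# Remove the sample mean and form the covariance; an empirical scaling factor
# (often around 0.5) is commonly applied to tune the magnitude of the B matrix.
diffs -= diffs.mean(axis=0)
B = 0.5 * (diffs.T @ diffs) / (n_pairs - 1)

print("background error variance (first 5 grid points):", np.round(np.diag(B)[:5], 4))
```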
Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.
2004-01-01
Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of errors can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.
An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.
1981-03-01
Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981, Institute for Physical Science and Technology, University of Maryland, College Park (report AD-A098 895).
Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel
2010-05-01
Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics, and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry-angle-independent multi-leaf collimator (MLC) error; Type 2: gantry-angle-dependent MLC error; and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to the error magnitude showed good agreement between the TPS calculation and the MFX measurement, within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log-file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
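The gamma passing rate used in this comparison combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1D gamma-index sketch is shown below; the 3%/3 mm criteria and the synthetic dose profiles are assumptions for illustration, not the study's configuration.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified global 1D gamma index: for each reference point, search all
    evaluated points and take the minimum combined dose/distance metric."""
    d_norm = dose_tol * d_ref.max()          # global dose-difference criterion
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dd = (d_eval - dr) / d_norm
        dx = (x_eval - xr) / dist_tol_mm
        gammas[i] = np.sqrt(dx ** 2 + dd ** 2).min()
    return gammas

# Synthetic dose profiles: the "measurement" is shifted 1 mm relative to the plan.
x = np.arange(-50.0, 50.0, 0.5)              # position, mm
planned = np.exp(-(x / 20.0) ** 2)
measured = np.exp(-((x - 1.0) / 20.0) ** 2)

g = gamma_1d(x, planned, x, measured)
print(f"gamma passing rate (gamma <= 1): {100 * np.mean(g <= 1.0):.1f}%")
```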
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
Coplen, Tyler B.
2000-01-01
The reliability and accuracy of isotopic data can be improved by utilizing database software to (i) store information about samples, (ii) store the results of mass spectrometric isotope-ratio analyses of samples, (iii) calculate analytical results using standardized algorithms stored in a database, (iv) normalize stable isotopic data to international scales using isotopic reference materials, and (v) generate multi-sheet paper templates for convenient sample loading of automated mass-spectrometer sample preparation manifolds. Such a database program, the Laboratory Information Management System (LIMS) for Light Stable Isotopes, is presented herein. Major benefits of this system include (i) a dramatic improvement in quality assurance, (ii) an increase in laboratory efficiency, (iii) a reduction in workload due to the elimination or reduction of retyping of data by laboratory personnel, and (iv) a decrease in errors in data reported to sample submitters. Such a database provides a complete record of when and how often laboratory reference materials have been analyzed and provides a record of what correction factors have been used through time. It provides an audit trail for laboratories. LIMS for Light Stable Isotopes is available for both Microsoft Office 97 Professional and Microsoft Office 2000 Professional as versions 7 and 8, respectively. Both source code (mdb file) and precompiled executable files (mde) are available. Numerous improvements have been made for continuous flow isotopic analysis in this version (specifically 7.13 for Microsoft Access 97 and 8.13 for Microsoft Access 2000). It is much easier to import isotopic results from Finnigan ISODAT worksheets, even worksheets on which corrections for amount of sample (linearity corrections) have been added. The capability to determine blank corrections using isotope mass balance from analyses of elemental analyzer samples has been added. It is now possible to calculate and apply drift corrections to isotopic data based on the time of day of analysis. Whereas Finnigan ISODAT software is confined to using only a single peak for calculating delta values, LIMS now enables one to use the mean of two or more reference injections during a continuous flow analysis to calculate delta values. This is useful with Finnigan's GasBench II online sample preparation system. Concentrations of carbon, nitrogen, and sulfur can be calculated based on one or more isotopic reference materials analyzed with a group of samples. Both sample data and isotopic analysis data can now be exported to Excel files. A calculator for determining the amount of sample needed for isotopic analysis, based on a previous amount of sample and continuous flow area, is now an integral part of LIMS for Light Stable Isotopes. LIMS for Light Stable Isotopes can now assign an error code to Finnigan elemental analyzer analyses in which one of the electrometers has saturated due to analysis of too much sample material, giving rise to incorrect isotopic abundances. Information on downloading this report and downloading code and databases is provided at the Internet addresses: http://water.usgs.gov/software/geochemical.html or http://www.geogr.uni-jena.de/software/geochemical.html in the Eastern Hemisphere.
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1993-01-01
The first year's effort on NASA Grant NAG5-2006 was an investigation to characterize typical errors resulting from the EOS downlink. The analysis methods developed for this effort were used on test data from a March 1992 White Sands Terminal Test. The effectiveness of a concatenated coding scheme of a Reed-Solomon outer code and a convolutional inner code versus a Reed-Solomon-only code scheme has been investigated, as well as the effectiveness of a Periodic Convolutional Interleaver in dispersing errors of certain types. The work effort consisted of the development of software that allows simulation studies with the appropriate coding schemes plus either simulated data with errors or actual data with errors. The software program is entitled Communication Link Error Analysis (CLEAN) and models downlink errors, forward error correcting schemes, and interleavers.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
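For a transformation chain like the one analysed here, independent error contributions are commonly combined in quadrature, with rotational tracking errors converted into displacements through a lever arm. The generic sketch below illustrates that bookkeeping with assumed magnitudes; it is not the paper's error budget, and the function and values are hypothetical.

```python
import numpy as np

def combined_error(translational_mm, rotational_deg=(), lever_arm_mm=0.0):
    """Root-sum-square combination of independent error components; rotational
    errors are converted to displacements via a lever arm (small-angle approx.)."""
    contributions = list(translational_mm)
    contributions += [np.radians(r) * lever_arm_mm for r in rotational_deg]
    return float(np.sqrt(np.sum(np.square(contributions))))

# Assumed example values (mm / degrees), not the paper's measurements:
# calibration, image-to-image registration, and tracking errors, plus a 0.1 deg
# tracker rotation acting over a 300 mm probe-to-target lever arm.
total = combined_error(translational_mm=[0.5, 0.8, 0.3],
                       rotational_deg=[0.1],
                       lever_arm_mm=300.0)
print(f"estimated overall positioning error: {total:.2f} mm")
```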
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
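In SPAR-H, a task's human error probability (HEP) is a nominal HEP multiplied by the performance shaping factor (PSF) multipliers, with an adjustment factor that keeps the result bounded when several PSFs are degraded. The sketch below illustrates that arithmetic with hypothetical PSF levels, not the assessments made in this study; the adjustment formula is stated as an assumption of the common SPAR-H practice.

```python
def spar_h_hep(nominal_hep, psf_multipliers, use_adjustment=False):
    """SPAR-H style HEP: nominal HEP times the product of the PSF multipliers.
    The adjustment factor (typically applied when several PSFs are degraded)
    keeps the resulting probability below 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if use_adjustment:
        return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    return min(nominal_hep * composite, 1.0)

# Hypothetical example for an action task (nominal HEP 0.001) with degraded
# "available time" (x10) and "stress" (x2) PSFs; all other PSFs nominal (x1).
print(f"HEP without adjustment: {spar_h_hep(0.001, [10, 2, 1, 1, 1, 1, 1, 1]):.4f}")
print(f"HEP with adjustment:    {spar_h_hep(0.001, [10, 2, 1, 1, 1, 1, 1, 1], True):.4f}")
```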
VizieR Online Data Catalog: Empirical calibration of the near-IR Ca triplet (Cenarro+ 2001)
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Cardiel, N.; Gorgas, J.; Peletier, R. F.; Vazdekis, A.; Prada, F.
2001-09-01
File table contains details of the new near-IR stellar library observed to calibrate the Ca II triplet. It includes the indices CaT*, CaT and PaT measured over the final spectra as well as their corresponding errors. The Henry Draper Catalogue number, other names (mainly HR and BD numbers), coordinates, spectral type, luminosity class, apparent magnitude and atmospheric parameters (as derived in Paper II; Cenarro et al., 2001MNRAS.326..981C) are also given. (1 data file).
76 FR 39757 - Filing Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-06
... an optical character recognition process, such a document may contain recognition errors. CAUTION... network speed e-filing of these documents may be difficult. Pursuant to section II(C) above, the Secretary... optical scan format or a typed ``electronic signature,'' e.g., ``/s/Jane Doe.'' (3) In the case of a...
Greenhalgh, T
1997-08-16
It is possible to be seriously misled by taking the statistical competence (and/or the intellectual honesty) of authors for granted. Some common errors committed (deliberately or inadvertently) by the authors of papers are given in the final box.
Teaching Japanese-American Incarceration
ERIC Educational Resources Information Center
Miksch, Karen L.; Ghere, David
2004-01-01
Few events in American history are so universally deplored as the incarceration of Japanese Americans during World War II. The United States government has acknowledged the error and the injustice that resulted with an official Presidential apology and a Congressional disbursement of reparations to the victims of the incarceration policy. The…
Code of Federal Regulations, 2011 CFR
2011-07-01
... loan defaults as well as from other overpayments of educational assistance benefits) or insurance... services furnished in error (§ 17.101(a) of this chapter). (ii) Debts resulting from services furnished in a medical emergency (§ 17.101(b) of this chapter). (iii) Other claims arising in connection with...
12 CFR 205.11 - Procedures for resolving errors.
Code of Federal Regulations, 2010 CFR
2010-01-01
... institution's findings and shall note the consumer's right to request the documents that the institution... transfer; (ii) An incorrect electronic fund transfer to or from the consumer's account; (iii) The omission... made by the financial institution relating to an electronic fund transfer; (v) The consumer's receipt...
When is a failure to replicate not a type II error?
Vasconcelos, Marco; Urcuioli, Peter J; Lionello-DeNolf, Karen M
2007-05-01
Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our conclusion is warranted because (a) the original effect has not been replicated despite multiple attempts to do so and (b) the statement that more extended overtraining may be needed itself suggests that the original effect is not reliable.
Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta
2010-09-01
The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)
2000-01-01
Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well to those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data is indicative not only of the presence and density of stacking errors, but also that it can yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions and it appears that it is similar to the relation developed by others to explain the formation of the corresponding polytypes.
Consequences of land-cover misclassification in models of impervious surface
McMahon, G.
2007-01-01
Model estimates of impervious area as a function of land-cover area may be biased and imprecise because of errors in the land-cover classification. This investigation of the effects of land-cover misclassification on impervious surface models that use National Land Cover Data (NLCD) evaluates the consequences of adjusting land-cover within a watershed to reflect uncertainty assessment information. Model validation results indicate that using error-matrix information to adjust land-cover values used in impervious surface models does not substantially improve impervious surface predictions. Validation results indicate that the resolution of the land-cover data (Level I and Level II) is more important in predicting impervious surface accurately than whether the land-cover data have been adjusted using information in the error matrix. Level I NLCD, adjusted for land-cover misclassification, is preferable to the other land-cover options for use in models of impervious surface. This result is tied to the lower classification error rates for the Level I NLCD. © 2007 American Society for Photogrammetry and Remote Sensing.
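The error-matrix adjustment evaluated above can be illustrated with a small calculation: mapped class areas are redistributed according to the row-normalized confusion (error) matrix to obtain misclassification-corrected area estimates. The sketch below is a minimal, hypothetical example; the class grouping, matrix counts, and watershed areas are invented and are not the NLCD accuracy-assessment figures used in the study.

```python
import numpy as np

# Hypothetical error matrix: rows = mapped class, columns = reference class.
# Classes (illustrative Level I-style grouping): water, developed, forest, agriculture.
conf = np.array([
    [95,  2,  2,  1],
    [ 4, 80, 10,  6],
    [ 1,  8, 85,  6],
    [ 2,  9,  8, 81],
], dtype=float)

# Mapped area (hectares) of each class within a hypothetical watershed.
mapped_area = np.array([120.0, 340.0, 560.0, 210.0])

# P(reference class j | mapped class i): row-normalize the error matrix.
p_ref_given_map = conf / conf.sum(axis=1, keepdims=True)

# Misclassification-adjusted area of each reference class: redistribute each
# mapped class's area according to its row of conditional probabilities.
adjusted_area = mapped_area @ p_ref_given_map

for name, a_map, a_adj in zip(["water", "developed", "forest", "agriculture"],
                              mapped_area, adjusted_area):
    print(f"{name:12s} mapped={a_map:7.1f} ha  adjusted={a_adj:7.1f} ha")
```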
Allan, Bruce D; Hassan, Hala; Ieong, Alvin
2015-05-01
To describe and evaluate a new multiple regression-derived nomogram for myopic wavefront laser in situ keratomileusis (LASIK). Moorfields Eye Hospital, London, United Kingdom. Prospective comparative case series. Multiple regression modeling was used to derive a simplified formula for adjusting attempted spherical correction in myopic LASIK. An adaptation of Thibos' power vector method was then applied to derive adjustments to attempted cylindrical correction in eyes with 1.0 diopter (D) or more of preoperative cylinder. These elements were combined in a new nomogram (nomogram II). The 3-month refractive results for myopic wavefront LASIK (spherical equivalent ≤11.0 D; cylinder ≤4.5 D) were compared between 299 consecutive eyes treated using the earlier nomogram (nomogram I) in 2009 and 2010 and 414 eyes treated using nomogram II in 2011 and 2012. There was no significant difference in treatment accuracy (variance in the postoperative manifest refraction spherical equivalent error) between nomogram I and nomogram II (P = .73, Bartlett test). Fewer patients treated with nomogram II had more than 0.5 D of residual postoperative astigmatism (P = .0001, Fisher exact test). There was no significant coupling between adjustments to the attempted cylinder and the achieved sphere (P = .18, t test). Discarding marginal influences from a multiple regression-derived nomogram for myopic wavefront LASIK had no clinically significant effect on treatment accuracy. Thibos' power vector method can be used to guide adjustments to the treatment cylinder alongside nomograms designed to optimize postoperative spherical equivalent results in myopic LASIK. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
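Thibos' power vector method referred to above decomposes a sphero-cylindrical refraction into a spherical-equivalent component M and two Jackson cross-cylinder components J0 and J45, which can be adjusted independently and recombined. The following sketch shows the standard conversion in both directions; the example refraction is arbitrary and the nomogram adjustments themselves are not reproduced here.

```python
import math

def to_power_vector(sphere: float, cylinder: float, axis_deg: float):
    """Convert a sphero-cylindrical refraction (minus-cylinder form) to
    Thibos power-vector components (M, J0, J45)."""
    axis = math.radians(axis_deg)
    M = sphere + cylinder / 2.0            # spherical equivalent
    J0 = -(cylinder / 2.0) * math.cos(2 * axis)
    J45 = -(cylinder / 2.0) * math.sin(2 * axis)
    return M, J0, J45

def from_power_vector(M: float, J0: float, J45: float):
    """Back-convert power-vector components to sphere/cylinder/axis."""
    cylinder = -2.0 * math.hypot(J0, J45)
    sphere = M - cylinder / 2.0
    axis = math.degrees(0.5 * math.atan2(J45, J0)) % 180.0
    return sphere, cylinder, axis

# Illustrative preoperative refraction: -4.50 DS / -1.25 DC x 175
M, J0, J45 = to_power_vector(-4.50, -1.25, 175)
print(f"M={M:+.2f} D, J0={J0:+.2f} D, J45={J45:+.2f} D")
print(from_power_vector(M, J0, J45))   # recovers the original prescription
```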
Dimeski, Goce; Masci, Paul P; Trabi, Manuela; Lavin, Martin F; de Jersey, John
2010-05-01
Obtaining a suitable specimen for analysis in a timely manner is pivotal in clinical chemistry service provision. Serum is recognized as the preferred specimen for most assays, but because of time constraints for completion of clotting and an increasing number of patients on anti-coagulant therapy, latent clotting or no clotting is an outcome which can lead to errors and delay in delivery of critical results. Although lithium heparin plasma has unique problems, it has become an alternative in hospital-based laboratories. The Becton-Dickinson (BD) rapid serum tube (RST) was evaluated in a hospital environment using a total of 53 participants, both healthy and anticoagulated, for 31 analytes against BD PST II and BD SST II tubes measured with Beckman DxC800 and DxI800 analyzers. Most results from the RST tube were comparable with those from the SST II tube. Potassium results were closer to the PST II plasma concentrations. Incomplete and latent clotting was encountered in the RST specimens from participants (cardiac and dialysis) who had received a total of >7000 units of heparin [activated partial thromboplastin time (APTT) >150 s], warfarin/heparin combination, and specimens from cardiac surgery patients who had received a total of >25,000 units of heparin (APTT >200 s) at the time of collection of specimens. The RST tube provides a suitable alternative to lithium heparin plasma tubes for most patients in a hospital environment. However, latent clotting continued to occur in specimens collected from participants who had received high concentrations of anticoagulants.
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on quantitative analysis with CLS and WLS. Results indicated that for wavenumbers with low absorbance, the bias error dominated the total error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise dominated the total error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., whether the absorbance is lower or higher than a threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that (1) the concentration and the analyte type had minimal effect on the OTV, and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to the quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS is presented to tackle the bias error from other components. The SWLS without modification gave the lowest SEP in all cases but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
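To make the selection idea concrete, the sketch below estimates a single-component concentration from a synthetic spectrum by CLS (uniform weights), by WLS (inverse-variance weights), and by a simple SWLS-style weighting split at an absorbance threshold. The spectrum, noise model, threshold value, and the way the two regimes are combined are illustrative assumptions only; they do not reproduce the paper's data or its exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-component setup (illustrative, not the paper's data).
wavenumbers = np.linspace(1000, 1200, 400)
pure = np.exp(-0.5 * ((wavenumbers - 1100) / 15.0) ** 2)   # unit-concentration spectrum
c_true = 0.8
baseline_bias = 0.002                                      # small residual baseline error
noise_sd = 0.001 + 0.02 * pure                             # heteroscedastic noise level
y = c_true * pure + baseline_bias + rng.normal(0.0, noise_sd)

def wls_concentration(y, s, w):
    """Weighted least-squares concentration estimate for a single component."""
    return np.sum(w * s * y) / np.sum(w * s * s)

c_cls = wls_concentration(y, pure, np.ones_like(pure))     # CLS: uniform weights
c_wls = wls_concentration(y, pure, 1.0 / noise_sd ** 2)    # WLS: inverse-variance weights

# SWLS-style weights (sketch): below an absorbance threshold use a constant
# (CLS-like) weight, above it use inverse-variance (WLS-like) weights. The
# threshold and the combination rule are illustrative; the paper selects the
# threshold from the ratio of bias error to noise standard deviation.
threshold = 0.2
w_swls = np.where(pure < threshold, 1.0 / np.mean(noise_sd ** 2), 1.0 / noise_sd ** 2)
c_swls = wls_concentration(y, pure, w_swls)

print(f"true={c_true:.3f}  CLS={c_cls:.3f}  WLS={c_wls:.3f}  SWLS={c_swls:.3f}")
```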
NASA Astrophysics Data System (ADS)
Prasetya, A. T.; Ridlo, S.
2018-03-01
The purpose of this study is to test a science learning motivation instrument and to compare the science learning motivation of chemistry and biology teacher candidates. The Kuesioner Motivasi Sains (KMS) is an Indonesian adaptation of the Science Motivation Questionnaire II (SMQ II), consisting of 25 items with a 5-point Likert scale. The number of respondents for the Exploratory Factor Analysis (EFA) test was 312. The Kaiser-Meyer-Olkin (KMO), determinant, Bartlett's sphericity, and Measures of Sampling Adequacy (MSA) tests applied to the KMS using SPSS 20.0 and Lisrel 8.51 indicated that the instrument was eligible. However, the communalities test showed that 4 items did not qualify, and these items were discarded. In the second test, all eligibility parameters were met, and the Root Mean Square Error of Approximation (RMSEA), the P-value for the Test of Close Fit (RMSEA < 0.05), and the Goodness of Fit Index (GFI) were good. The new KMS, with 21 valid items and a composite reliability of 0.9329, can be used to measure the science learning motivation, comprising Intrinsic Motivation, Self-Efficacy, Self-Determination, Grade Motivation, and Career Motivation, of students who have mastered the Indonesian language. KMS trials with chemistry and biology teacher candidates revealed no significant difference in science learning motivation between the two groups.
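For readers unfamiliar with the sampling-adequacy statistic mentioned above, the sketch below computes an overall Kaiser-Meyer-Olkin (KMO) value from a correlation matrix via anti-image (partial) correlations. The simulated responses are placeholders for the 312 x 25 KMS item data, which are not available here.

```python
import numpy as np

def kmo_overall(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy.

    KMO = sum of squared correlations / (sum of squared correlations
          + sum of squared partial correlations), off-diagonal terms only.
    """
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # anti-image (partial) correlations
    off = ~np.eye(r.shape[0], dtype=bool)    # exclude the diagonal
    r2 = np.sum(r[off] ** 2)
    p2 = np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)

# Placeholder data: 312 respondents x 25 Likert-type items with shared structure.
rng = np.random.default_rng(1)
common = rng.normal(size=(312, 5)) @ rng.normal(size=(5, 25))
items = common + rng.normal(scale=2.0, size=(312, 25))
print(f"overall KMO = {kmo_overall(items):.3f}")
```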
Proof of concept and dose estimation with binary responses under model uncertainty.
Klingenberg, B
2009-01-30
This article suggests a unified framework for testing Proof of Concept (PoC) and estimating a target dose for the benefit of a more comprehensive, robust and powerful analysis in phase II or similar clinical trials. From a pre-specified set of candidate models, we choose the ones that best describe the observed dose-response. To decide which models, if any, significantly pick up a dose effect, we construct the permutation distribution of the minimum P-value over the candidate set. This allows us to find critical values and multiplicity adjusted P-values that control the familywise error rate of declaring any spurious effect in the candidate set as significant. Model averaging is then used to estimate a target dose. Popular single or multiple contrast tests for PoC, such as the Cochran-Armitage, Dunnett or Williams tests, are only optimal for specific dose-response shapes and do not provide target dose estimates with confidence limits. A thorough evaluation and comparison of our approach to these tests reveal that its power is as good or better in detecting a dose-response under various shapes with many more additional benefits: It incorporates model uncertainty in PoC decisions and target dose estimation, yields confidence intervals for target dose estimates and extends to more complicated data structures. We illustrate our method with the analysis of a Phase II clinical trial. Copyright (c) 2008 John Wiley & Sons, Ltd.
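The familywise adjustment described above is based on the permutation distribution of the minimum P-value over the candidate model set. The sketch below illustrates the idea for binary responses with three candidate dose-response contrasts, using the closely related maximum-statistic form of the adjustment (candidate statistics standardized to a common scale), which is simpler to code than working with per-permutation P-values. The doses, shapes, and data are invented, and model averaging for target-dose estimation is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented data: binary responses in 5 dose groups (placebo + 4 active doses).
doses = np.array([0.0, 0.05, 0.2, 0.6, 1.0])
n_per_group = 30
p_true = np.array([0.10, 0.12, 0.20, 0.30, 0.32])   # assumed true response rates
y = np.concatenate([rng.binomial(1, p, n_per_group) for p in p_true])
group = np.repeat(np.arange(len(doses)), n_per_group)

# Candidate dose-response shapes expressed as centered contrast coefficients.
def centered(v):
    v = np.asarray(v, dtype=float)
    return v - v.mean()

candidates = {
    "linear":  centered(doses),
    "emax":    centered(doses / (doses + 0.2)),
    "sigmoid": centered(1.0 / (1.0 + np.exp(-(doses - 0.4) / 0.1))),
}
C = np.vstack(list(candidates.values()))             # (n_models, n_groups)

def contrast_stats(responses):
    """Apply each candidate contrast to the observed group response rates."""
    rates = np.array([responses[group == g].mean() for g in range(len(doses))])
    return C @ rates

t_obs = contrast_stats(y)

# Permutation distribution of the candidate statistics under "no dose effect".
B = 2000
t_perm = np.array([contrast_stats(rng.permutation(y)) for _ in range(B)])

# Standardize each candidate by its permutation SD so the maximum standardized
# statistic plays the same role as the minimum P-value across the candidate set.
sd = t_perm.std(axis=0, ddof=1)
z_obs, z_perm = t_obs / sd, t_perm / sd
max_z = z_perm.max(axis=1)

p_adj = (1 + (max_z[:, None] >= z_obs[None, :]).sum(axis=0)) / (B + 1)
for name, p in zip(candidates, p_adj):
    print(f"{name:8s} multiplicity-adjusted one-sided P = {p:.4f}")
```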
Saito, Kengo; Peng, Qiling; Qiao, Lin; Wang, Lin; Joutsuka, Tatsuya; Ishiyama, Tatsuya; Ye, Shen; Morita, Akihiro
2017-03-29
Sum frequency generation (SFG) spectroscopy is widely used to observe molecular orientation at interfaces through a combination of various types of polarization. The present work thoroughly examines the relation between the polarization dependence of SFG signals and the molecular orientation by comparing SFG measurements and molecular dynamics (MD) simulations of acetonitrile/water solutions. The present SFG experiment and MD simulations yield quite consistent results for the ratios of χ(2) elements, supporting the reliability of both approaches. However, the subsequent polarization analysis tends to derive more upright tilt angles of acetonitrile than the direct MD calculations. The reasons for the discrepancy are examined in terms of three issues: (i) anisotropy of the Raman tensor, (ii) cross-correlation, and (iii) orientational distribution. The analysis revealed that issues (i) and (iii) are the main causes of error in the conventional polarization analysis of SFG spectra. In methyl CH stretching, the anisotropy of the Raman tensor cannot be estimated from the simple bond polarizability model. The neglect of the orientational distribution is shown to systematically underestimate the tilt angle of acetonitrile. A further refined use of polarization analysis in collaboration with MD simulations is therefore proposed.
Kim, Kyung Woo; Lim, Ho Chan; Park, Jae Hee; Park, Sang Gyu; Park, Ye Jin; Cho, Hm Hak
2018-06-01
Organizations are pursuing complex and diverse aims to generate higher profits. Many workers experience high work intensity, such as workload and work pressure, in this organizational environment. In particular, psychological burden is a commonly used term in workplaces in the Republic of Korea. This study focused on defining psychological burden from the perspective of occupational safety and health and on developing a scale to measure it. The 48 preliminary questionnaire items for psychological burden were prepared through a focus group interview with 16 workers, drawing on the Copenhagen Psychosocial Questionnaire II and the Mindful Awareness Attention Scale. The preliminary items were surveyed with 572 workers, and exploratory factor analysis, confirmatory factor analysis, and correlation analysis were conducted for the new scale. As a result of the exploratory factor analysis, five factors were extracted: organizational activity, human error, safety and health workload, work attitude, and negative self-management. These factors showed significant correlations and reliability, and the stability and validity of the model were confirmed using confirmatory factor analysis. The developed scale for psychological burden can measure workers' psychological burden in relation to safety and health. Despite some limitations, this study has applicability in the workplace, given the relatively small-sized questionnaire.
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
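Linear error analysis of the kind ADEAS performs can be summarized as propagating sensor noise and a priori uncertainty through a linearized measurement model into an attitude-error covariance. The sketch below shows that covariance computation for a generic batch least-squares estimator; the measurement partials, noise levels, and a priori values are invented placeholders rather than ADEAS sensor models.

```python
import numpy as np

# Hypothetical linearized measurement model: m measurements, n = 3 attitude
# error states (roll, pitch, yaw). H holds the measurement partials, R the
# sensor noise covariance, P0 the a priori attitude uncertainty.
rng = np.random.default_rng(3)
m, n = 12, 3
H = rng.normal(size=(m, n))                 # placeholder partials dh/dx
R = np.diag(np.full(m, 0.01 ** 2))          # 0.01-deg (1-sigma) sensor noise
P0 = np.diag(np.full(n, 0.5 ** 2))          # 0.5-deg a priori uncertainty

# Batch least-squares error covariance with a priori information:
# P = (H^T R^-1 H + P0^-1)^-1
info = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P0)
P = np.linalg.inv(info)

sigmas = np.sqrt(np.diag(P))
for axis, s in zip(("roll", "pitch", "yaw"), sigmas):
    print(f"{axis:5s} 1-sigma attitude error: {s:.4f} deg")
```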
Adams, Elizabeth J.; Jordan, Thomas J.; Clark, Catharine H.; Nisbet, Andrew
2013-01-01
Quality assurance (QA) for intensity- and volumetric-modulated radiotherapy (IMRT and VMAT) has evolved substantially. In recent years, various commercial 2D and 3D ionization chamber or diode detector arrays have become available, allowing absolute verification with near real-time results and streamlined QA. However, detector arrays are limited by their resolution, giving rise to concerns about their sensitivity to errors. Understanding the limitations of these devices is therefore critical. In this study, the sensitivity and resolution of the PTW 2D-ARRAY seven29 and OCTAVIUS II phantom combination was comprehensively characterized for use in dynamic sliding-window IMRT and RapidArc verification. Measurement comparisons were made between a single-acquisition technique and a multiple merged-acquisition technique intended to improve the effective resolution of the 2D-ARRAY, as well as against GAFCHROMIC EBT2 film and electronic portal imaging dosimetry (EPID). The sensitivity and resolution of the 2D-ARRAY were tested using two modulated test fields delivered at a gantry angle of 0°. Deliberate multileaf collimator (MLC) errors of 1, 2, and 5 mm and collimator rotation errors were inserted into IMRT and RapidArc plans for pelvis and head & neck sites to test sensitivity to errors. The radiobiological impact of these errors was assessed to determine the gamma index passing criteria to be used with the 2D-ARRAY to detect clinically relevant errors. For gamma index distributions, it was found that the 2D-ARRAY in single-acquisition mode was comparable to multiple-acquisition modes, as well as to film and EPID. It was found that the commonly used gamma index criteria of 3% dose difference or 3 mm distance to agreement may potentially mask clinically relevant errors. Gamma index criteria of 3%/2 mm with a passing threshold of 98%, or 2%/2 mm with a passing threshold of 95%, were found to be more sensitive. We suggest that the gamma index passing thresholds may be used for guidance, but should also be combined with a visual inspection of the gamma index distribution and calculation of the dose difference to assess whether there may be a clinical impact in failed regions. PACS numbers: 87.55.Qr, 87.56.Fc PMID:24257288
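The gamma index used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion into a single pass/fail value per point. A minimal one-dimensional sketch is given below; clinical array QA operates on 2D/3D dose grids with vendor software, so the profiles and criteria here are purely illustrative.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit, dta_mm, global_norm=True):
    """1D gamma index of an evaluated profile against a reference profile.

    dose_crit is a fraction (e.g. 0.03 for 3%); dta_mm is in millimetres.
    """
    norm = d_ref.max() if global_norm else 1.0
    dd = (d_eval[None, :] - d_ref[:, None]) / (dose_crit * norm)   # dose-difference term
    dx = (x_eval[None, :] - x_ref[:, None]) / dta_mm               # distance term
    return np.sqrt(dd ** 2 + dx ** 2).min(axis=1)                  # per reference point

# Illustrative profiles: evaluated profile shifted by 1 mm and scaled by 1%.
x = np.linspace(-50, 50, 501)                       # positions in mm
ref = 2.0 * np.exp(-0.5 * (x / 20.0) ** 2)          # reference dose (Gy)
ev = 1.01 * 2.0 * np.exp(-0.5 * ((x - 1.0) / 20.0) ** 2)

for crit_d, crit_r in [(0.03, 3.0), (0.03, 2.0), (0.02, 2.0)]:
    g = gamma_1d(x, ref, x, ev, crit_d, crit_r)
    passing = 100.0 * np.mean(g <= 1.0)
    print(f"{crit_d*100:.0f}%/{crit_r:.0f}mm passing rate: {passing:.1f}%")
```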
Barringer, J.L.; Johnsson, P.A.
1996-01-01
Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
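Gran's technique linearizes the titration curve so that the equivalence volume can be found by extrapolating a straight-line Gran function to zero. The sketch below carries out the alkalinity-side calculation for a strong-acid titration; the sample volume, acid normality, and pH readings are fabricated to be self-consistent, and the acidity-side functions are handled analogously.

```python
import numpy as np

# Fabricated titration data: V0 mL of sample titrated with standardized acid.
V0 = 50.0                  # sample volume (mL)
N_acid = 0.02              # acid normality (eq/L)
V = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])          # titrant added (mL), past the endpoint
pH = np.array([3.72, 3.42, 3.25, 3.13, 3.03, 2.96])   # measured pH at each point

# Gran function for the alkalinity endpoint: F = (V0 + V) * 10**(-pH).
# Past the equivalence point, F grows linearly with V; its x-intercept is Ve.
F = (V0 + V) * 10.0 ** (-pH)
slope, intercept = np.polyfit(V, F, 1)
Ve = -intercept / slope                                 # equivalence volume (mL)

alkalinity_eq_per_L = Ve * N_acid / V0                  # equivalents per litre of sample
print(f"equivalence volume = {Ve:.2f} mL")
print(f"alkalinity = {alkalinity_eq_per_L * 1e6:.0f} microequivalents/L")
```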
Effect of the menstrual cycle on voice quality.
Silverman, E M; Zimmer, C H
1978-01-01
The question addressed was whether most young women with no vocal training exhibit premenstrual hoarseness. Spectral (acoustical) analyses of the sustained productions of three vowels produced by 20 undergraduates at ovulation and at premenstruation were rated for degree of hoarseness. Statistical analysis of the data indicated that the typical subject was no more hoarse at premenstruation than at ovulation. To determine whether this finding represented a genuine characteristic of women's voices or a Type II statistical error, a systematic replication was undertaken with another sample of 27 undergraduates. The finding replicated that of the original investigation, suggesting that premenstrual hoarseness is a rarely occurring condition among young women with no vocal training. The apparent differential effect of the menstrual cycle on trained as opposed to untrained voices deserves systematic investigation.
Yohay Carmel; Curtis Flather; Denis Dean
2006-01-01
This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...
Hartmann Testing of X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Biskasch, Michael; Zhang, William W.
2013-01-01
Hartmann testing of x-ray telescopes is a simple test method to retrieve and analyze alignment errors and low-order circumferential errors of x-ray telescopes and their components. A narrow slit is scanned along the circumference of the telescope in front of the mirror and the centroids of the images are calculated. From the centroid data, alignment errors, radius variation errors, and cone-angle variation errors can be calculated. Mean cone angle, mean radial height (average radius), and the focal length of the telescope can also be estimated if the centroid data is measured at multiple focal plane locations. In this paper we present the basic equations that are used in the analysis process. These equations can be applied to full circumference or segmented x-ray telescopes. We use the Optical Surface Analysis Code (OSAC) to model a segmented x-ray telescope and show that the derived equations and accompanying analysis retrieves the alignment errors and low order circumferential errors accurately.
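The centroid analysis described above amounts to decomposing the centroid motion versus slit azimuth into low-order circumferential harmonics: a constant term plus one- and two-cycle terms that map onto alignment, radius-variation, and cone-angle-variation errors. The sketch below fits such harmonics to synthetic centroid data by linear least squares; the geometry, coefficients, and noise are invented, and the OSAC-based equations of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# Slit azimuth positions around the telescope circumference (degrees -> radians).
theta = np.radians(np.arange(0, 360, 15))

# Synthetic radial centroid displacement (microns): constant + 1-cycle
# (alignment/decenter) + 2-cycle (low-order circumferential figure) terms.
true = {"a0": 5.0, "a1": 12.0, "b1": -7.0, "a2": 3.0, "b2": 1.5}
centroid = (true["a0"]
            + true["a1"] * np.cos(theta) + true["b1"] * np.sin(theta)
            + true["a2"] * np.cos(2 * theta) + true["b2"] * np.sin(2 * theta)
            + rng.normal(scale=0.5, size=theta.size))

# Linear least-squares fit of the harmonic model to the centroid data.
A = np.column_stack([np.ones_like(theta),
                     np.cos(theta), np.sin(theta),
                     np.cos(2 * theta), np.sin(2 * theta)])
coef, *_ = np.linalg.lstsq(A, centroid, rcond=None)

for name, est in zip(true, coef):
    print(f"{name}: true={true[name]:+6.2f}  fitted={est:+6.2f}")
```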