Sample records for type II errors

  1. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
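    The trade-off described above can be made concrete with a quick power calculation. The sketch below is not from the article; the effect size and group size are illustrative assumptions, but it shows how lowering the per-voxel P-value threshold drives up the Type II error rate for a fixed design.

    ```python
    # Hedged sketch: Type II error of a two-sample t-test as the alpha threshold
    # is tightened. Effect size and sample size are assumed, not from the paper.
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    effect_size = 0.5    # assumed Cohen's d
    n_per_group = 20     # assumed subjects per group

    for alpha in (0.05, 0.005, 0.001, 0.0001):
        power = power_calc.power(effect_size=effect_size, nobs1=n_per_group, alpha=alpha)
        print(f"alpha = {alpha:<7}  power = {power:.3f}  Type II error = {1 - power:.3f}")
    ```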

  2. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.

  3. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
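    A minimal sketch of the point above, using assumed values (a two-sample t test, Cohen's d = 0.4, alpha = 0.05) rather than anything from the report: the Type II error probability falls off roughly exponentially as the per-group sample size doubles.

    ```python
    # Hedged sketch: beta shrinks rapidly with n for a fixed effect size and alpha.
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    for n in (10, 20, 40, 80, 160):
        beta = 1 - power_calc.power(effect_size=0.4, nobs1=n, alpha=0.05)
        print(f"n per group = {n:3d}   Type II error ~ {beta:.4f}")
    ```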

  4. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
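    The sketch below only illustrates the likelihood-ratio idea under assumed values (a normal signal of known size at each voxel and an evidence threshold of k = 8); it is not the authors' implementation. With a fixed likelihood-ratio cutoff, both error rates shrink as the number of observations per voxel grows.

    ```python
    # Hedged sketch: fixed likelihood-ratio threshold, both error rates -> 0 with n.
    import numpy as np

    rng = np.random.default_rng(0)
    k, mu1, sigma, reps = 8.0, 0.5, 1.0, 5000   # assumed threshold, effect, noise SD

    def log_lr(x):
        # log likelihood ratio of N(mu1, sigma^2) against N(0, sigma^2), i.i.d. data
        return np.sum((x * mu1 - mu1**2 / 2) / sigma**2, axis=1)

    for n in (10, 20, 40, 80):
        null_data = rng.normal(0.0, sigma, size=(reps, n))
        alt_data = rng.normal(mu1, sigma, size=(reps, n))
        type_i = np.mean(log_lr(null_data) >= np.log(k))   # strong evidence under H0
        type_ii = np.mean(log_lr(alt_data) < np.log(k))    # weak evidence under H1
        print(f"n = {n:3d}   Type I ~ {type_i:.4f}   Type II ~ {type_ii:.4f}")
    ```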

  5. Comparison of the efficacy and technical accuracy of different rectangular collimators for intraoral radiography.

    PubMed

    Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D

    2009-07-01

    The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series excluding central incisor views was taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimator were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III, respectively. Type I is currently used for teaching and patient care at our institution and was considered as the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical difference (P

  6. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
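    As a rough illustration of the modelling pipeline described above (synthetic data, an L1-penalized logistic regression standing in for LASSO selection, and an assumed labelling of which misclassification counts as Type I), the error rates can be read off a cross-validated confusion matrix:

    ```python
    # Hedged sketch: L1-based feature selection + SVM, 5-fold CV, Type I/II rates.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for 48 GCD vs 124 non-GCD companies (y = 1 marks GCD).
    X, y = make_classification(n_samples=172, n_features=30, weights=[0.72, 0.28],
                               random_state=0)
    selector = SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
    model = make_pipeline(StandardScaler(), selector, SVC())
    pred = cross_val_predict(model, X, y, cv=5)

    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print("accuracy     :", (tp + tn) / len(y))
    print("Type I error :", fn / (fn + tp))   # assumed: GCD firm predicted non-GCD
    print("Type II error:", fp / (fp + tn))   # assumed: non-GCD firm predicted GCD
    ```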

  7. Planned versus Unplanned Contrasts: Exactly Why Planned Contrasts Tend To Have More Power against Type II Error.

    ERIC Educational Resources Information Center

    Wang, Lin

    The literature is reviewed regarding the difference between planned contrasts, ANOVA, and unplanned contrasts. The relationship between statistical power of a test method and Type I and Type II error rates is first explored to provide a framework for the discussion. The concepts and formulation of contrast, orthogonal and non-orthogonal contrasts are…

  8. Statistical aspects of the TNK-S2B trial of tenecteplase versus alteplase in acute ischemic stroke: an efficient, dose-adaptive, seamless phase II/III design.

    PubMed

    Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke

    2011-08-01

    TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. (2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.

  9. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
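    For readers unfamiliar with how these percentages are defined in the ground-filtering literature, the sketch below shows the usual bookkeeping (the counts are invented, not the paper's): Type I error is the fraction of true ground points rejected, and Type II error is the fraction of non-ground points accepted as ground.

    ```python
    # Hedged sketch: error definitions commonly used in LIDAR ground filtering.
    def filter_errors(ground_rejected, nonground_accepted, n_ground, n_nonground):
        type_i = ground_rejected / n_ground            # ground labelled as non-ground
        type_ii = nonground_accepted / n_nonground     # non-ground labelled as ground
        total = (ground_rejected + nonground_accepted) / (n_ground + n_nonground)
        return type_i, type_ii, total

    t1, t2, tot = filter_errors(410, 151, 10_000, 1_000)   # illustrative counts
    print(f"Type I {t1:.2%}   Type II {t2:.2%}   Total {tot:.2%}")
    ```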

  10. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  11. Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets

    NASA Astrophysics Data System (ADS)

    Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua

    2017-09-01

    In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough which leads to the degradation of the error-correction performance, the new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of the zero matrices with weight of 0, the circulant permutation matrices (CPMs) with weight of 1 and the circulant matrices with weight of 2 (W2CMs). The introduction of W2CMs in parity check matrices makes it possible to achieve the larger minimum distance which can improve the error-correction performance of the codes. The Tanner graphs of these codes have no girth-4, thus they have the excellent decoding convergence characteristics. In addition, because the parity check matrices have the quasi-dual diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes can achieve better error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.

  12. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS)--a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could result in allowing the transfer of a Slurry Mix Evaporator (SME) batch to the melter, which is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived--glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B]'s)--perhaps not 100% effective based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned in which no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which did cause some initial concerns regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review of glasses within the 1428 "acceptable" glasses defining the AGCR includes glass systems of interest to support the accelerated mission.
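    A minimal sketch of how the final partition rule could be applied and scored, using made-up glass records rather than ComPro data; only the three oxide cut-offs and the 10 g/L limit are taken from the record above.

    ```python
    # Hedged sketch: apply the reported oxide cut-offs and count Type I/II errors
    # against the 10 g/L PCT acceptability limit (data below are invented).
    import pandas as pd

    glasses = pd.DataFrame({
        "Al2O3": [4.1, 3.2, 5.0, 3.9],   # wt%
        "CaO":   [0.7, 1.1, 0.5, 0.9],
        "MgO":   [1.2, 0.8, 4.0, 2.5],
        "NL_B":  [3.0, 14.0, 6.0, 6.0],  # measured PCT response, g/L
    })

    predicted_ok = ((glasses["Al2O3"] > 3.75) & (glasses["CaO"] >= 0.616)
                    & (glasses["MgO"] < 3.521))
    actually_ok = glasses["NL_B"] < 10.0

    type_i = int((actually_ok & ~predicted_ok).sum())   # durable glass rejected
    type_ii = int((~actually_ok & predicted_ok).sum())  # non-durable glass accepted
    print(f"Type I errors: {type_i}, Type II errors: {type_ii}")
    ```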

  13. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
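    The contrast between an ordinary least-squares fit and a robust M-estimator can be sketched as below on simulated allelic-dosage data with heavy-tailed noise; the Huber weight function is one common choice, not necessarily the one used in the paper.

    ```python
    # Hedged sketch: OLS vs robust regression for a single simulated cis-eQTL.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 300
    dosage = rng.binomial(2, 0.3, size=n).astype(float)        # genotype coded 0/1/2
    expression = 0.25 * dosage + rng.standard_t(df=3, size=n)  # heavy-tailed noise
    X = sm.add_constant(dosage)

    ols_fit = sm.OLS(expression, X).fit()
    rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
    print("OLS   beta = %.3f  p = %.3g" % (ols_fit.params[1], ols_fit.pvalues[1]))
    print("Huber beta = %.3f  p = %.3g" % (rlm_fit.params[1], rlm_fit.pvalues[1]))
    ```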

  14. Current Assessment and Classification of Suicidal Phenomena using the FDA 2012 Draft Guidance Document on Suicide Assessment: A Critical Review.

    PubMed

    Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett

    2014-09-01

    Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.

  15. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
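    A minimal sketch of the optimal-alpha idea under simple assumptions (a two-sample t test with fixed n and effect size, and equal weighting of the two error probabilities; the paper allows other weightings and cost ratios):

    ```python
    # Hedged sketch: choose alpha to minimize the average of Type I and II errors.
    import numpy as np
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()
    alphas = np.linspace(1e-4, 0.5, 500)
    beta = np.array([1 - power_calc.power(effect_size=0.5, nobs1=30, alpha=a)
                     for a in alphas])
    combined = (alphas + beta) / 2.0          # assumed equal weighting of errors
    best = alphas[np.argmin(combined)]
    print(f"optimal alpha ~ {best:.3f}   combined error ~ {combined.min():.3f}")
    ```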

  16. Rational Clinical Experiment: Assessing Prior Probability and Its Impact on the Success of Phase II Clinical Trials

    PubMed Central

    Halperin, Daniel M.; Lee, J. Jack; Dagohoy, Cecile Gonzales; Yao, James C.

    2015-01-01

    Purpose Despite a robust clinical trial enterprise and encouraging phase II results, the vast minority of oncologic drugs in development receive regulatory approval. In addition, clinicians occasionally make therapeutic decisions based on phase II data. Therefore, clinicians, investigators, and regulatory agencies require improved understanding of the implications of positive phase II studies. We hypothesized that prior probability of eventual drug approval was significantly different across GI cancers, with substantial ramifications for the predictive value of phase II studies. Methods We conducted a systematic search of phase II studies conducted between 1999 and 2004 and compared studies against US Food and Drug Administration and National Cancer Institute databases of approved indications for drugs tested in those studies. Results In all, 317 phase II trials were identified and followed for a median of 12.5 years. Following completion of phase III studies, eventual new drug application approval rates varied from 0% (zero of 45) in pancreatic adenocarcinoma to 34.8% (24 of 69) for colon adenocarcinoma. The proportion of drugs eventually approved was correlated with the disease under study (P < .001). The median type I error for all published trials was 0.05, and the median type II error was 0.1, with minimal variation. By using the observed median type I error for each disease, phase II studies have positive predictive values ranging from less than 1% to 90%, depending on primary site of the cancer. Conclusion Phase II trials in different GI malignancies have distinct prior probabilities of drug approval, yielding quantitatively and qualitatively different predictive values with similar statistical designs. Incorporation of prior probability into trial design may allow for more effective design and interpretation of phase II studies. PMID:26261263
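    The arithmetic linking prior probability, the reported error rates, and the predictive value of a positive trial is simple Bayes' rule; the sketch below uses the abstract's median alpha = 0.05 and power = 0.9, with illustrative priors rather than the paper's disease-specific estimates.

    ```python
    # Hedged sketch: positive predictive value of a "positive" phase II trial.
    def ppv(prior, alpha=0.05, power=0.90):
        return prior * power / (prior * power + (1 - prior) * alpha)

    for prior in (0.01, 0.10, 0.35):   # illustrative prior probabilities of approval
        print(f"prior = {prior:.2f}  ->  PPV ~ {ppv(prior):.2f}")
    ```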

  17. Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy

    ClinicalTrials.gov

    2017-08-31

    Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency

  18. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D 2 ) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
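    As a rough sketch of the information-size calculation referred to above (the formula shown is the standard sample-size expression for a continuous outcome, inflated by 1/(1 - D^2); the numbers are illustrative assumptions, not from the article):

    ```python
    # Hedged sketch: diversity-adjusted required information size for a meta-analysis.
    from scipy.stats import norm

    alpha, beta_risk = 0.05, 0.10    # conventional Type I and Type II error risks
    sigma, delta = 10.0, 3.0         # assumed SD and minimally relevant difference
    D2 = 0.25                        # assumed diversity (D^2) across trials

    z_sum = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta_risk)
    ris_fixed = 4 * sigma**2 * z_sum**2 / delta**2   # total participants, fixed-effect
    ris_adjusted = ris_fixed / (1 - D2)              # inflate for heterogeneity
    print(f"required information size: {ris_fixed:.0f} (unadjusted), "
          f"{ris_adjusted:.0f} (diversity-adjusted)")
    ```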

  19. Open Label Extension of ISIS 301012 (Mipomersen) to Treat Familial Hypercholesterolemia

    ClinicalTrials.gov

    2016-08-01

    Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders

  20. Sample Size Determination for Rasch Model Tests

    ERIC Educational Resources Information Center

    Draxler, Clemens

    2010-01-01

    This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…

  1. Biomarker for Glycogen Storage Diseases

    ClinicalTrials.gov

    2017-07-03

    Fructose Metabolism, Inborn Errors; Glycogen Storage Disease; Glycogen Storage Disease Type I; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Glycogen Storage Disease Type VI; Glycogen Storage Disease Type VII; Glycogen Storage Disease Type VIII

  2. Exploring effective multiplicity in multichannel functional near-infrared spectroscopy using eigenvalues of correlation matrices

    PubMed Central

    Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju

    2015-01-01

    Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
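    One widely used estimator of effective multiplicity from correlation-matrix eigenvalues is sketched below on simulated channel data; the specific (Nyholt-style) formula is an assumption here, since the record does not state which estimator the authors adopted.

    ```python
    # Hedged sketch: effective number of independent tests from eigenvalues of a
    # channel-by-channel correlation matrix (Nyholt-style estimator, simulated data).
    import numpy as np

    rng = np.random.default_rng(2)
    signals = rng.standard_normal((200, 44))       # 200 time samples x 44 channels
    signals += 0.6 * signals[:, [0]]               # induce cross-channel correlation

    corr = np.corrcoef(signals, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    m = len(eigvals)
    m_eff = 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)
    print(f"nominal channels: {m}   effective multiplicity ~ {m_eff:.1f}")
    print(f"per-channel alpha ~ {0.05 / m_eff:.4f}   (Bonferroni would use {0.05 / m:.4f})")
    ```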

  3. Study to Assess the Safety and Efficacy of ISIS 301012 (Mipomersen) in Homozygous Familial Hypercholesterolemia

    ClinicalTrials.gov

    2016-08-01

    Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders

  4. When Is a Failure to Replicate Not a Type II Error?

    ERIC Educational Resources Information Center

    Vasconcelos, Marco; Urcuioli, Peter J.; Lionello-DeNolf, Karen M.

    2007-01-01

    Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our…

  5. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948

  7. Strong Converse Exponents for a Quantum Channel Discrimination Problem and Quantum-Feedback-Assisted Communication

    NASA Astrophysics Data System (ADS)

    Cooney, Tom; Mosonyi, Milán; Wilde, Mark M.

    2016-06-01

    This paper studies the difficulty of discriminating between an arbitrary quantum channel and a "replacer" channel that discards its input and replaces it with a fixed state. The results obtained here generalize those known in the theory of quantum hypothesis testing for binary state discrimination. We show that, in this particular setting, the most general adaptive discrimination strategies provide no asymptotic advantage over non-adaptive tensor-power strategies. This conclusion follows by proving a quantum Stein's lemma for this channel discrimination setting, showing that a constant bound on the Type I error leads to the Type II error decreasing to zero exponentially quickly at a rate determined by the maximum relative entropy registered between the channels. The strong converse part of the lemma states that any attempt to make the Type II error decay to zero at a rate faster than the channel relative entropy implies that the Type I error necessarily converges to one. We then refine this latter result by identifying the optimal strong converse exponent for this task. As a consequence of these results, we can establish a strong converse theorem for the quantum-feedback-assisted capacity of a channel, sharpening a result due to Bowen. Furthermore, our channel discrimination result demonstrates the asymptotic optimality of a non-adaptive tensor-power strategy in the setting of quantum illumination, as was used in prior work on the topic. The sandwiched Rényi relative entropy is a key tool in our analysis. Finally, by combining our results with recent results of Hayashi and Tomamichel, we find a novel operational interpretation of the mutual information of a quantum channel N as the optimal Type II error exponent when discriminating between a large number of independent instances of N and an arbitrary "worst-case" replacer channel chosen from the set of all replacer channels.
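    For orientation, the binary-state result that this channel setting generalizes can be written as follows (the standard quantum Stein's lemma for discriminating states rho and sigma; the channel version in the paper replaces the state relative entropy with a channel quantity):

    ```latex
    % Quantum Stein's lemma (state discrimination): with the Type I error held
    % below any constant epsilon in (0,1), the optimal Type II error beta_n decays
    % exponentially at the relative entropy rate.
    \lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\varepsilon)
      = D(\rho \,\|\, \sigma)
      := \operatorname{Tr}\!\left[\rho \left(\log \rho - \log \sigma\right)\right].
    ```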

  8. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and rejection of the null hypothesis ( H(0) ). The measures (for example probability of trial early termination, expected sample size, etc.) of the design properties under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with predictive probability of trial success outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.

  9. OUTCOMES AFTER LASER VERSUS COMBINED LASER AND BEVACIZUMAB TREATMENT FOR TYPE 1 RETINOPATHY OF PREMATURITY IN ZONE I.

    PubMed

    Yoon, Je Moon; Shin, Dong Hoon; Kim, Sang Jin; Ham, Don-Il; Kang, Se Woong; Chang, Yun Sil; Park, Won Soon

    2017-01-01

    To investigate the anatomical and refractive outcomes in patients with Type 1 retinopathy of prematurity in Zone I. The medical records of 101 eyes of 51 consecutive infants with Type 1 retinopathy of prematurity in Zone I were analyzed. Infants were treated by conventional laser photocoagulation (Group I), combined intravitreal bevacizumab injection and Zone I sparing laser (Group II), or intravitreal bevacizumab with deferred laser treatment (Group III). The proportion of unfavorable anatomical outcomes including retinal fold, disc dragging, retrolental tissue obscuring the view of the posterior pole, retinal detachment, and early refractive errors were compared among the three groups. The mean gestational age at birth and the birth weight of all 51 infants were 24.3 ± 1.1 weeks and 646 ± 143 g, respectively. In Group I, an unfavorable anatomical outcome was observed in 10 of 44 eyes (22.7%). In contrast, in Groups II and III, all eyes showed favorable anatomical outcomes without reactivation or retreatment. The refractive error was less myopic in Group III than in Groups I and II (spherical equivalent of -4.62 ± 4.00 D in Group I, -5.53 ± 2.21 D in Group II, and -1.40 ± 2.19 D in Group III; P < 0.001). In Type 1 retinopathy of prematurity in Zone I, intravitreal bevacizumab with concomitant or deferred laser therapy yielded a better anatomical outcome than conventional laser therapy alone. Moreover, intravitreal bevacizumab with deferred laser treatment resulted in less myopic refractive error.

  10. An Open-label Extension Study to Assess the Long-term Safety and Efficacy of ISIS 301012 (Mipomersen) in Patients With Familial Hypercholesterolemia or Severe-Hypercholesterolemia

    ClinicalTrials.gov

    2016-08-01

    Lipid Metabolism, Inborn Errors; Hypercholesterolemia, Autosomal Dominant; Hyperlipidemias; Metabolic Diseases; Hyperlipoproteinemia Type II; Metabolism, Inborn Errors; Genetic Diseases, Inborn; Infant, Newborn, Diseases; Metabolic Disorder; Congenital Abnormalities; Hypercholesterolemia; Hyperlipoproteinemias; Dyslipidemias; Lipid Metabolism Disorders

  11. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
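    A small sketch of the five pattern tests listed above, run on one synthetic input/output scatterplot with SciPy; Levene's test stands in for the variance/interquartile-range comparison, and the exact statistics used in the report may differ.

    ```python
    # Hedged sketch: screening one scatterplot for progressively more general patterns.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 1, 300)                          # sampled input variable
    y = np.sin(3 * x) + rng.normal(0, 0.3, 300)         # model output, nonlinear in x

    bins = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))   # 5 classes of x
    groups = [y[bins == b] for b in range(5)]

    print("linear (Pearson r)      :", stats.pearsonr(x, y))
    print("monotonic (Spearman rho):", stats.spearmanr(x, y))
    print("central tendency (K-W)  :", stats.kruskal(*groups))
    print("variability (Levene)    :", stats.levene(*groups))
    y_class = (y > np.median(y)).astype(int)
    table = np.array([[np.sum((bins == b) & (y_class == c)) for c in (0, 1)]
                      for b in range(5)])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print("randomness (chi-square) :", chi2, p)
    ```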

  12. What does "Diversity" Mean for Public Engagement in Science? A New Metric for Innovation Ecosystem Diversity.

    PubMed

    Özdemir, Vural; Springer, Simon

    2018-03-01

    Diversity is increasingly at stake in early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations and yet, they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important for they influence the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I (false positives) and type II errors (false negatives). Kimball defined "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power, and can adversely impact an entire innovation ecosystem, waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.

  13. Putative Panmixia in Restricted Populations of Trypanosoma cruzi Isolated from Wild Triatoma infestans in Bolivia

    PubMed Central

    Barnabe, Christian; Buitrago, Rosio; Bremond, Philippe; Aliaga, Claudia; Salas, Renata; Vidaurre, Pablo; Herrera, Claudia; Cerqueira, Frédérique; Bosseno, Marie-France; Waleckx, Etienne; Breniere, Simone Frédérique

    2013-01-01

    Trypanosoma cruzi, the causative agent of Chagas disease, is subdivided into six discrete typing units (DTUs; TcI–TcVI) of which TcI is ubiquitous and genetically highly variable. While clonality is the dominant mode of propagation, recombinant events play a significant evolutive role. Recently, foci of wild Triatoma infestans have been described in Bolivia, mainly infected by TcI. Hence, for the first time, we evaluated the level of genetic exchange within TcI natural potentially panmictic populations (single DTU, host, area and sampling time). Seventy-nine TcI stocks from wild T. infestans, belonging to six populations were characterized at eight microsatellite loci. For each population, Hardy-Weinberg equilibrium (HWE), linkage disequilibrium (LD), and presence of repeated multilocus genotypes (MLG) were analyzed by using a total of seven statistics, to test the null hypothesis of panmixia (H0). For three populations, none of the seven statistics allowed to rejecting H0; for another one the low size did not allow us to conclude, and for the two others the tests have given contradictory results. Interestingly, apparent panmixia was only observed in very restricted areas, and was not observed when grouping populations distant of only two kilometers or more. Nevertheless it is worth stressing that for the statistic tests of "HWE", in order to minimize the type I error (i. e. incorrect rejection of a true H0), we used the Bonferroni correction (BC) known to considerably increase the type II error ( i. e. failure to reject a false H0). For the other tests (LD and MLG), we did not use BC and the risk of type II error in these cases was acceptable. Thus, these results should be considered as a good indicator of the existence of panmixia in wild environment but this must be confirmed on larger samples to reduce the risk of type II error. PMID:24312410

  14. Low power and type II errors in recent ophthalmology research.

    PubMed

    Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P

    2016-10-01

    To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out calculations. Finally, the proportion of articles with type II errors was calculated. β = 0.3 was set as the largest acceptable value for the probability of type II errors. In total, 280 articles were screened. Final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5 for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small differences between means varies across journals. It is, therefore, worthwhile for authors to mention the minimum clinically important difference for individual studies. Journals can consider publishing statistical guidelines for authors to use. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as a robust study and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time, and researchers and journals are being more attentive to statistical methodologies incorporated by studies. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
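    The kind of calculation described above can be reproduced with standard power routines; in this sketch, the control mean, pooled SD, and group size are invented, and the detectable difference is set to 20% or 50% of the control mean as in the study.

    ```python
    # Hedged sketch: post-hoc power of an unpaired t test for a difference equal to
    # 20% or 50% of the control group's mean (all numbers below are illustrative).
    from statsmodels.stats.power import TTestIndPower

    control_mean, pooled_sd, n_per_group = 12.0, 6.0, 25   # assumed study values
    power_calc = TTestIndPower()

    for fraction in (0.20, 0.50):
        d = fraction * control_mean / pooled_sd             # standardized difference
        power = power_calc.power(effect_size=d, nobs1=n_per_group, alpha=0.05)
        print(f"difference = {fraction:.0%} of control mean -> "
              f"power = {power:.2f}, Type II error = {1 - power:.2f}")
    ```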

  15. The impact of sample non-normality on ANOVA and alternative methods.

    PubMed

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.

  16. A novel ETFB mutation in a patient with glutaric aciduria type II.

    PubMed

    Sudo, Yosuke; Sasaki, Ayako; Wakabayashi, Takashi; Numakura, Chikahiko; Hayasaka, Kiyoshi

    2015-01-01

    Glutaric aciduria type II (GAII) is a rare inborn error of metabolism clinically classified into a neonatal-onset form with congenital anomalies, a neonatal-onset form without congenital anomalies and a mild and/or late-onset form (MIM #231680). Here, we report on a GAII patient carrying a homozygous novel c.143_145delAGG (p.Glu48del) mutation in the ETFB gene, who presented with a neonatal-onset form with congenital anomalies and rapidly developed cardiomegaly after birth.

  17. A novel ETFB mutation in a patient with glutaric aciduria type II

    PubMed Central

    Sudo, Yosuke; Sasaki, Ayako; Wakabayashi, Takashi; Numakura, Chikahiko; Hayasaka, Kiyoshi

    2015-01-01

    Glutaric aciduria type II (GAII) is a rare inborn error of metabolism clinically classified into a neonatal-onset form with congenital anomalies, a neonatal-onset form without congenital anomalies and a mild and/or late-onset form (MIM #231680). Here, we report on a GAII patient carrying a homozygous novel c.143_145delAGG (p.Glu48del) mutation in the ETFB gene, who presented with a neonatal-onset form with congenital anomalies and rapidly developed cardiomegaly after birth. PMID:27081516

  18. Research on Spectroscopy, Opacity, and Atmospheres

    NASA Technical Reports Server (NTRS)

    Kurucz, Robert L.

    1996-01-01

    I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration.

  19. Investigating Perceptual Biases, Data Reliability, and Data Discovery in a Methodology for Collecting Speech Errors From Audio Recordings.

    PubMed

    Alderete, John; Davies, Monica

    2018-04-01

    This work describes a methodology of collecting speech errors from audio recordings and investigates how some of its assumptions affect data quality and composition. Speech errors of all types (sound, lexical, syntactic, etc.) were collected by eight data collectors from audio recordings of unscripted English speech. Analysis of these errors showed that: (i) different listeners find different errors in the same audio recordings, but (ii) the frequencies of error patterns are similar across listeners; (iii) errors collected "online" using on the spot observational techniques are more likely to be affected by perceptual biases than "offline" errors collected from audio recordings; and (iv) datasets built from audio recordings can be explored and extended in a number of ways that traditional corpus studies cannot be.

  20. Price and cost estimation

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.

    1979-01-01

    The Price and Cost Estimating Program (PACE II) was developed to prepare man-hour and material cost estimates. This versatile and flexible tool significantly reduces computation time and errors, as well as the typing and reproduction time involved in preparing cost estimates.

  1. Social contact patterns can buffer costs of forgetting in the evolution of cooperation.

    PubMed

    Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten

    2018-06-13

    Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact-in which individuals interact primarily with only a subset of their network's members-influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal previously neglected but important roles of contact pattern, type of memory error and the interaction of contact pattern and memory on cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).

  2. When is a failure to replicate not a type II error?

    PubMed

    Vasconcelos, Marco; Urcuioli, Peter J; Lionello-DeNolf, Karen M

    2007-05-01

    Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our conclusion is warranted because (a) the original effect has not been replicated despite multiple attempts to do so and (b) the statement that more extended overtraining may be needed itself suggests that the original effect is not reliable.

  3. When Is a Failure to Replicate Not a Type II Error?

    PubMed Central

    Vasconcelos, Marco; Urcuioli, Peter J; Lionello-DeNolf, Karen M

    2007-01-01

    Zentall and Singer (2007) challenge our conclusion that the work-ethic effect reported by Clement, Feltus, Kaiser, and Zentall (2000) may have been a Type I error by arguing that (a) the effect has been extensively replicated and (b) the amount of overtraining our pigeons received may not have been sufficient to produce it. We believe that our conclusion is warranted because (a) the original effect has not been replicated despite multiple attempts to do so and (b) the statement that more extended overtraining may be needed itself suggests that the original effect is not reliable. PMID:17575905

  4. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442

  5. Why Does a Method That Fails Continue To Be Used: The Answer

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340

  6. Neyman-Pearson classification algorithms and NP receiver operating characteristics.

    PubMed

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-02-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies.
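
    A minimal sketch of the idea behind Neyman-Pearson thresholding follows (an illustration only, not the umbrella algorithm implemented in nproc, which selects an order statistic so that the type I error bound holds with high probability; the data, classifier, and split sizes are assumptions): train any scoring classifier, then set the decision threshold at a high empirical quantile of held-out class-0 scores.

        # Illustrative NP-style thresholding: cap the empirical type I error at alpha
        # using held-out class-0 scores, then measure the resulting type II error.
        # Not the nproc umbrella algorithm; data and classifier are toy assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        alpha = 0.05
        X0 = rng.normal(0.0, 1.0, (2000, 2))          # class 0 (e.g., healthy)
        X1 = rng.normal(1.0, 1.0, (2000, 2))          # class 1 (e.g., diseased)
        X = np.vstack([X0, X1])
        y = np.r_[np.zeros(2000), np.ones(2000)]
        clf = LogisticRegression().fit(X, y)

        # Threshold from fresh class-0 data: empirical (1 - alpha) quantile of scores.
        s0_hold = clf.predict_proba(rng.normal(0.0, 1.0, (1000, 2)))[:, 1]
        threshold = np.quantile(s0_hold, 1 - alpha)

        s0_test = clf.predict_proba(rng.normal(0.0, 1.0, (5000, 2)))[:, 1]
        s1_test = clf.predict_proba(rng.normal(1.0, 1.0, (5000, 2)))[:, 1]
        print("type I error :", np.mean(s0_test > threshold))
        print("type II error:", np.mean(s1_test <= threshold))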

  7. Performance of Modified Test Statistics in Covariance and Correlation Structure Analysis under Conditions of Multivariate Nonnormality.

    ERIC Educational Resources Information Center

    Fouladi, Rachel T.

    2000-01-01

    Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…

  8. Quantum error-correcting code for ternary logic

    NASA Astrophysics Data System (ADS)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information, known as qutrit. A qutrit has three basis states, thus a qubit may be considered as a special case of a qutrit where the coefficient of one of the basis states is zero. Hence both (2 × 2)-dimensional and (3 × 3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2 × 2)-dimensional as well as (3 × 3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
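
    As a purely numerical illustration of the claim that a pairwise bit-swap error can be expanded in shift and phase errors (a generic check using the standard generalized Pauli operators for a qutrit, not the authors' construction), one can expand the |0>-|1> swap in the operator basis X^a Z^b:

        # Numerical check: expand the swap of |0> and |1> (with |2> fixed) in the basis
        # of qutrit shift (X) and phase (Z) operators X^a Z^b.  Illustration only.
        import numpy as np

        w = np.exp(2j * np.pi / 3)
        X = np.array([[0, 0, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=complex)    # shift: |j> -> |j+1 mod 3>
        Z = np.diag([1, w, w**2])                    # phase: |j> -> w^j |j>
        S01 = np.array([[0, 1, 0],
                        [1, 0, 0],
                        [0, 0, 1]], dtype=complex)   # swap |0> <-> |1>, fix |2>

        # The nine operators X^a Z^b (a, b = 0, 1, 2) are trace-orthogonal, so the
        # expansion coefficients are c_ab = Tr((X^a Z^b)^dagger S01) / 3.
        recon = np.zeros((3, 3), dtype=complex)
        for a in range(3):
            for b in range(3):
                P = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                recon += (np.trace(P.conj().T @ S01) / 3) * P
        print("max |reconstruction - swap| =", np.abs(recon - S01).max())  # ~1e-16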

  9. Judgment of line orientation depends on gender, education, and type of error.

    PubMed

    Caparelli-Dáquer, Egas M; Oliveira-Souza, Ricardo; Moreira Filho, Pedro F

    2009-02-01

    Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of incorrect response ("commission errors", or CE). We studied 343 volunteers of roughly comparable ages and with different levels of education. Education and gender were significantly associated with H scores, which were higher in men and in the groups with higher education. In contrast, the differences between men and women on CE depended on education. We concluded that (I) the ability to find the correct responses differs from the ability to avoid the wrong responses amidst an array of possible alternatives, and that (II) education interacts with gender to promote a stable performance on CE earlier in men than in women.

  10. 49 CFR 193.2509 - Emergency procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... plant; (ii) Potential hazards at the plant, including fires; (iii) Communication and emergency control... plant due to operating malfunctions, structural collapse, personnel error, forces of nature, and activities adjacent to the plant. (b) To adequately handle each type of emergency identified under paragraph...

  11. Optimal Sampling to Provide User-Specific Climate Information.

    NASA Astrophysics Data System (ADS)

    Panturat, Suwanna

    The types of weather-related world problems of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the north eastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems and at abstracting from data sets that information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models are used in this study to interpret climate variability in terms of the variables of importance to the user: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art Non Linear Program (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2. Moreover, the number of years required to detect a climate change resulting in a given bushel-per-acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.

  12. Maximizing return on socioeconomic investment in phase II proof-of-concept trials.

    PubMed

    Chen, Cong; Beckman, Robert A

    2014-04-01

    Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
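
    A back-of-the-envelope sketch of why this powering scheme keeps phase II trials small follows (normal approximation for a one-sided two-arm comparison; the standardized effect of 0.3 SD is an arbitrary stand-in for δ, not a value from the paper):

        # Per-arm sample size under a normal approximation for a one-sided two-arm
        # comparison: n = 2 * ((z_{1-alpha} + z_{1-beta}) / effect)^2.
        # The standardized effect delta = 0.3 SD is an arbitrary illustration.
        from scipy.stats import norm

        def n_per_arm(effect, alpha=0.05, power=0.80):
            return 2 * ((norm.ppf(1 - alpha) + norm.ppf(power)) / effect) ** 2

        delta = 0.3
        print("powered against delta     :", round(n_per_arm(delta)))        # ~137 per arm
        print("powered against 1.5*delta :", round(n_per_arm(1.5 * delta)))  # ~61 per arm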

  13. Glutaric acidemia type II: gene structure and mutations of the electron transfer flavoprotein:ubiquinone oxidoreductase (ETF:QO) gene.

    PubMed

    Goodman, Stephen I; Binard, Robert J; Woontner, Michael R; Frerman, Frank E

    2002-01-01

    Glutaric acidemia type II is a human inborn error of metabolism which can be due to defects in either subunit of electron transfer flavoprotein (ETF) or in ETF:ubiquinone oxidoreductase (ETF:QO), but few disease-causing mutations have been described. The ETF:QO gene is located on 4q33, and contains 13 exons. Primers to amplify these exons are presented, together with mutations identified by molecular analysis of 20 ETF:QO-deficient patients. Twenty-one different disease-causing mutations were identified on 36 of the 40 chromosomes.

  14. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…

  15. An infant with glutaric aciduria type IIc diagnosed with a novel mutation.

    PubMed

    Işıkay, Sedat; Yaman, Ayhan; Ceylaner, Serdar

    2017-01-01

    Işıkay S, Yaman A, Ceylaner S. An infant with glutaric aciduria type IIc diagnosed with a novel mutation. Turk J Pediatr 2017; 59: 315-317. Glutaric aciduria type II is a rare inborn error of metabolism. The clinical picture is highly variable, with symptoms ranging from acute metabolic decompensations to chronic, mainly muscular problems, or even asymptomatic cases. Herein we describe a 7-month-old female patient who presented with respiratory failure and was diagnosed with glutaric aciduria type II via whole exome sequencing, which revealed one known and one novel mutation. Her blood and urine analyses were all normal. After the diagnosis, a dramatic and sustained improvement on a low-fat, low-protein, and high-carbohydrate diet supplemented with oral riboflavin and carnitine was observed. In especially hypotonic patients with unknown etiologies, even when blood and urine analyses are normal, glutaric aciduria type II should be kept in mind and genetic tests may be required for the diagnosis.

  16. Errorless Establishment of a Match-to-Sample Form Discrimination in Preschool Children. I. A Modification of Animal Laboratory Procedures for Children, II. A Comparison of Errorless and Trial-and-Error Discrimination. Progress Report.

    ERIC Educational Resources Information Center

    LeBlanc, Judith M.

    A sequence of studies compared two types of discrimination formation: errorless learning and trial-and-error procedures. The subjects were three boys and five girls from a university preschool. The children performed the experimental tasks at a typical match-to-sample apparatus with one sample window above and four match (response) windows below.…

  17. A collaborative vendor-buyer production-inventory systems with imperfect quality items, inspection errors, and stochastic demand under budget capacity constraint: a Karush-Kuhn-Tucker conditions approach

    NASA Astrophysics Data System (ADS)

    Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.

    2017-01-01

    In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and are detected during the screening process. Two types of inspection error are possible: a type I inspection error occurs when a non-defective item is classified as defective, and a type II inspection error occurs when a defective item is classified as non-defective. The demand during the lead time is unknown and follows a normal distribution. The lead time can be controlled by adding a crashing cost. The budget capacity constraint arises from a limit on the purchasing cost. The purposes of this research are: to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Karush-Kuhn-Tucker conditions, and to apply the model. Based on the application results and the sensitivity analysis, the integrated model attains a lower total inventory cost than separate optimization.

  18. A Car Transportation System in Cooperation by Multiple Mobile Robots for Each Wheel: iCART II

    NASA Astrophysics Data System (ADS)

    Kashiwazaki, Koshi; Yonezawa, Naoaki; Kosuge, Kazuhiro; Sugahara, Yusuke; Hirata, Yasuhisa; Endo, Mitsuru; Kanbayashi, Takashi; Shinozuka, Hiroyuki; Suzuki, Koki; Ono, Yuki

    The authors proposed a car transportation system, iCART (intelligent Cooperative Autonomous Robot Transporters), for automation of mechanical parking systems by two mobile robots. However, it was difficult to downsize the mobile robot because its length must be at least the wheelbase of a car. This paper proposes a new car transportation system, iCART II (iCART - type II), based on the “a-robot-for-a-wheel” concept. A prototype system, MRWheel (a Mobile Robot for a Wheel), is designed and is less than half the size of the conventional robot. First, a method for lifting up a wheel by MRWheel is described. In general, it is very difficult for mobile robots such as MRWheel to move to desired positions without motion errors caused by slipping, etc. Therefore, we propose a follower's motion error estimation algorithm based on the internal force applied to each follower, extending a conventional leader-follower type decentralized control algorithm for cooperative object transportation. The proposed algorithm enables followers to estimate their motion errors and enables the robots to transport a car to a desired position. In addition, we analyze and prove the stability and convergence of the resultant system with the proposed algorithm. In order to extract only the internal force from the force applied to each robot, we also propose a model-based external force compensation method. Finally, the proposed methods are applied to the car transportation system, and the experimental results confirm their validity.

  19. The High Cost of Complexity in Experimental Design and Data Analysis: Type I and Type II Error Rates in Multiway ANOVA.

    ERIC Educational Resources Information Center

    Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.

    2002-01-01

    Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…

  20. An Argument Framework for the Application of Null Hypothesis Statistical Testing in Support of Research

    ERIC Educational Resources Information Center

    LeMire, Steven D.

    2010-01-01

    This paper proposes an argument framework for the teaching of null hypothesis statistical testing and its application in support of research. Elements of the Toulmin (1958) model of argument are used to illustrate the use of p values and Type I and Type II error rates in support of claims about statistical parameters and subject matter research…

  1. Evaluating causes of error in landmark-based data collection using scanners

    PubMed Central

    Shearer, Brian M.; Cooke, Siobhán B.; Halenar, Lauren B.; Reber, Samantha L.; Plummer, Jeannette E.; Delson, Eric

    2017-01-01

    In this study, we assess the precision, accuracy, and repeatability of craniodental landmarks (Types I, II, and III, plus curves of semilandmarks) on a single macaque cranium digitally reconstructed with three different surface scanners and a microCT scanner. Nine researchers with varying degrees of osteological and geometric morphometric knowledge landmarked ten iterations of each scan (40 total) to test the effects of scan quality, researcher experience, and landmark type on levels of intra- and interobserver error. Two researchers additionally landmarked ten specimens from seven different macaque species using the same landmark protocol to test the effects of the previously listed variables relative to species-level morphological differences (i.e., observer variance versus real biological variance). Error rates within and among researchers by scan type were calculated to determine whether or not data collected by different individuals or on different digitally rendered crania are consistent enough to be used in a single dataset. Results indicate that scan type does not impact rate of intra- or interobserver error. Interobserver error is far greater than intraobserver error among all individuals, and is similar in variance to that found among different macaque species. Additionally, experience with osteology and morphometrics both positively contribute to precision in multiple landmarking sessions, even where less experienced researchers have been trained in point acquisition. Individual training increases precision (although not necessarily accuracy), and is highly recommended in any situation where multiple researchers will be collecting data for a single project. PMID:29099867

  2. Special tinted contact lens on colour-defects.

    PubMed

    Mutilab, H A; Sharanjeet-Kaur; Keu, L K; Choo, P F

    2012-01-01

    The objective of this study was to determine the visual function of colour-deficient subjects when wearing special red tint contact lenses. A total of 17 subjects with congenital colour vision deficiency (14 deutans and 3 protans) voluntarily participated in this study. The average age of the subjects was 23.00 ± 4.06 years. Visual functions tested were visual acuity (LogMAR), contrast sensitivity (FACT Chart) and stereopsis (TNO and Howard-Dolman tests). Two types of special red tint lenses were used in this study: Type I (light red) and Type II (dark red). The protans and deutans showed no significant changes in visual acuity and contrast sensitivity when wearing either type of contact lens. Stereopsis testing using the Howard-Dolman test gave no significant changes, but significant differences were seen using the TNO test. Stereopsis using the TNO test was significantly poorer with the red tinted contact lenses compared to without, for both protans and deutans. Testing binocularly with Ishihara plates showed that 88% (n=15) of patients passed the test with Type I and Type II contact lenses. When the D15 test was done, 3 patients (17.6%) were 'normal' when using the Type I contact lenses and 2 patients (11.8%) were 'normal' when using the Type II contact lenses. However, with the FM100Hue test, most patients showed deutan responses. Total error scores (TES) were found to be higher with Type I and Type II contact lenses compared to without. The Type I and II special tinted contact lenses used in this study did not cause a reduction of visual acuity or contrast sensitivity for the colour defects. Stereopsis was also not reduced with the Type I and Type II contact lenses for the colour defects, except when tested with the TNO test. Colour vision defects became difficult to detect using the Ishihara plates, but the FM100Hue test did not show any improvement with the Type I and Type II contact lenses.

  3. Pollen flow in the wildservice tree, Sorbus torminalis (L.) Crantz. I. Evaluating the paternity analysis procedure in continuous populations.

    PubMed

    Oddou-Muratorio, S; Houot, M-L; Demesure-Musch, B; Austerlitz, F

    2003-12-01

    The joint development of polymorphic molecular markers and paternity analysis methods provides new approaches to investigate ongoing patterns of pollen flow in natural plant populations. However, paternity studies are hindered by false paternity assignment and the nondetection of true fathers. To gauge the risk of these two types of errors, we performed a simulation study to investigate the impact on paternity analysis of: (i) the assumed values for the size of the breeding male population (NBMP), and (ii) the rate of scoring error in genotype assessment. Our simulations were based on microsatellite data obtained from a natural population of the entomophilous wild service tree, Sorbus torminalis (L.) Crantz. We show that an accurate estimate of NBMP is required to minimize both types of errors, and we assess the reliability of a technique used to estimate NBMP based on parent-offspring genetic data. We then show that scoring errors in genotype assessment only slightly affect the assessment of paternity relationships, and conclude that it is generally better to neglect the scoring error rate in paternity analyses within a nonisolated population.

  4. 76 FR 39757 - Filing Procedures

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-06

    ... an optical character recognition process, such a document may contain recognition errors. CAUTION... network speed e-filing of these documents may be difficult. Pursuant to section II(C) above, the Secretary... optical scan format or a typed ``electronic signature,'' e.g., ``/s/Jane Doe.'' (3) In the case of a...

  5. Crowned spur gears - Methods for generation and Tooth Contact Analysis. II - Generation of the pinion tooth surface by a surface of revolution

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Handschuh, R. F.; Zhang, J.

    1988-01-01

    A method for generation of crowned pinion tooth surfaces using a surface of revolution is developed. The crowned pinion meshes with a regular involute gear and has a prescribed parabolic type of transmission errors when the gears operate in the aligned mode. When the gears are misaligned the transmission error remains parabolic with the maximum level still remaining very small (less than 0.34 arc second for the numerical examples). Tooth Contact Analysis (TCA) is used to simulate the conditions of meshing, determine the transmission error, and the bearing contact.

  6. Determining the sample size for co-dominant molecular marker-assisted linkage detection for a monogenic qualitative trait by controlling the type-I and type-II errors in a segregating F2 population.

    PubMed

    Hühn, M; Piepho, H P

    2003-03-01

    Tests for linkage are usually performed using the lod score method. A critical question in linkage analyses is the choice of sample size. The appropriate sample size depends on the desired type-I error and power of the test. This paper investigates the exact type-I error and power of the lod score method in a segregating F(2) population with co-dominant markers and a qualitative monogenic dominant-recessive trait. For illustration, a disease-resistance trait is considered, where the susceptible allele is recessive. A procedure is suggested for finding the appropriate sample size. It is shown that recessive plants have about twice the information content of dominant plants, so the former should be preferred for linkage detection. In some cases the exact alpha-values for a given nominal alpha may be rather small due to the discrete nature of the sampling distribution in small samples. We show that a gain in power is possible by using exact methods.

  7. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška.

  8. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the results published in medical papers come from a suitable design and whether their conclusions are supported by the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance and power of the test. To decide which mathematical formula to use, we must define the type of study: a prevalence study, a study of means, or a comparative study. In this paper we explain some basic topics of statistics and describe four simple examples of sample size estimation.
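
    For the common case of comparing two means, the textbook expression (a standard formula, not one quoted from this paper) that ties these quantities together is, per group,

        n = \frac{2\,\sigma^{2}\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{\delta^{2}},

    where \sigma^{2} is the variance, \delta the smallest difference in means worth detecting, \alpha the type I error, and 1-\beta the desired power.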

  9. A robust algorithm for automated target recognition using precomputed radar cross sections

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2004-09-01

    Passive radar is an emerging technology that offers a number of unique benefits, including covert operation. Many such systems are already capable of detecting and tracking aircraft. The goal of this work is to develop a robust algorithm for adding automated target recognition (ATR) capabilities to existing passive radar systems. In previous papers, we proposed conducting ATR by comparing the precomputed RCS of known targets to that of detected targets. To make the precomputed RCS as accurate as possible, a coordinated flight model is used to estimate aircraft orientation. Once the aircraft's position and orientation are known, it is possible to determine the incident and observed angles on the aircraft, relative to the transmitter and receiver. This makes it possible to extract the appropriate radar cross section (RCS) from our simulated database. This RCS is then scaled to account for propagation losses and the receiver's antenna gain. A Rician likelihood model compares these expected signals from different targets to the received target profile. We have previously employed Monte Carlo runs to gauge the probability of error in the ATR algorithm; however, generation of a statistically significant set of Monte Carlo runs is computationally intensive. As an alternative to Monte Carlo runs, we derive the relative entropy (also known as the Kullback-Leibler distance) between two Rician distributions. Since the probability of Type II error in our hypothesis testing problem can be expressed as a function of the relative entropy via Stein's Lemma, this provides us with a computationally efficient method for determining an upper bound on our algorithm's performance. It also provides great insight into the types of classification errors we can expect from our algorithm. This paper compares the numerically approximated probability of Type II error with the results obtained from a set of Monte Carlo runs.
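
    The Chernoff-Stein lemma invoked here states, in its standard textbook form (the notation below is generic, not taken from the paper), that with the type I error held below any fixed level, the smallest achievable type II error decays exponentially at a rate set by the relative entropy between the null distribution P_0 and the alternative P_1:

        \lim_{n \to \infty} \frac{1}{n} \log \beta_n = -D(P_0 \| P_1), \quad \text{so that} \quad \beta_n \approx e^{-n D(P_0 \| P_1)},

    which is why the relative entropy between two Rician likelihoods yields a bound on the misclassification probability without resorting to Monte Carlo runs.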

  10. Modelling Drug Administration Regimes for Asthma: A Romanian Experience

    ERIC Educational Resources Information Center

    Andras, Szilard; Szilagyi, Judit

    2010-01-01

    In this article, we present a modelling activity that was part of the project DQME II (Developing Quality in Mathematics Education, for more details see http://www.dqime.uni-dortmund.de) and some general observations regarding the maladjustments and rational errors arising in this type of activity.

  11. Race Differences and Type II Errors: A Comment on Borkowski and Krause.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    1985-01-01

    Borkowski and Krause (1983) concluded that the locus of black-white intelligence differences lies in metaprocesses not elementary cognitive processes. However, some variables were difference scores with unacceptably low reliability. Magnitude comparisons of racial differences give a different picture of results; comparable differences in measures…

  12. Sample-size needs for forestry herbicide trials

    Treesearch

    S.M. Zedaker; T.G. Gregoire; James H. Miller

    1994-01-01

    Forest herbicide experiments are increasingly being designed to evaluate smaller treatment differences when comparing existing effective treatments, tank mix ratios, surfactants, and new low-rate products. The ability to detect small differences in efficacy is dependent upon the relationship among sample size, type I and II error probabilities, and the coefficients of...

  13. Sensitivity and accuracy of high-throughput metabarcoding methods used to describe aquatic communities for early detection of invasive fish species

    EPA Science Inventory

    For early detection biomonitoring of aquatic invasive species, sensitivity to rare individuals and accurate, high-resolution taxonomic classification are critical to minimize Type I and II detection errors. Given the great expense and effort associated with morphological identifi...

  14. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized in terms of both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and to what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  15. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized in terms of both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and to what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models.

  16. Elemental analysis of glass by laser ablation inductively coupled plasma optical emission spectrometry (LA-ICP-OES).

    PubMed

    Schenk, Emily R; Almirall, José R

    2012-04-10

    The elemental analysis of glass evidence has been established as a powerful discrimination tool for forensic analysts. Laser ablation inductively coupled plasma optical emission spectrometry (LA-ICP-OES) has been compared to laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and energy dispersive micro X-ray fluorescence spectroscopy (μXRF/EDS) as competing instrumentation for the elemental analysis of glass. The development of a method for the forensic analysis of glass coupling laser ablation to ICP-OES is presented for the first time. LA-ICP-OES has demonstrated comparable analytical performance to LA-ICP-MS based on the use of the element menu, Al (Al I 396.15 nm), Ba (Ba II 455.40 nm), Ca (Ca II 315.88 nm), Fe (Fe II 238.20 nm), Li (Li I 670.78 nm), Mg (Mg I 285.21 nm), Sr (Sr II 407.77 nm), Ti (Ti II 368.51 nm), and Zr (Zr II 343.82 nm). The relevant figures of merit, such as precision, accuracy and sensitivity, are presented and compared to LA-ICP-MS. A set of 41 glass samples was used to assess the discrimination power of the LA-ICP-OES method in comparison to other elemental analysis techniques. This sample set consisted of several vehicle glass samples that originated from the same source (inside and outside windshield panes) and several glass samples that originated from different vehicles. Different match criteria were used and compared to determine the potential for Type I and Type II errors. It was determined that broader match criteria are more applicable to the forensic comparison of glass because they can reduce the effect that micro-heterogeneity inherent in the glass fragments and a less-than-ideal sampling strategy can have on the interpretation of the results. Based on the test set reported here, a plus or minus four standard deviation (± 4s) match criterion yielded the lowest possibility of Type I and Type II errors. The developed LA-ICP-OES method has been shown to perform similarly to LA-ICP-MS in the discrimination among different sources of glass while offering the advantages of a lower cost of acquisition and operation of analytical instrumentation, making ICP-OES a possible alternative elemental analysis method for the forensic laboratory. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
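
    A simplified sketch of an interval-overlap match criterion of the kind compared above follows (the element menu is taken from the abstract, but the replicate counts, simulated concentrations, and the exact comparison rule are assumptions, not the study's protocol): a questioned fragment is declared a match only if, for every element, its mean falls within the known-source mean ± 4 standard deviations.

        # Simplified +/- 4s interval match criterion for glass comparison (illustration
        # only; the study's actual protocol may treat replicates differently).
        import numpy as np

        ELEMENTS = ["Al", "Ba", "Ca", "Fe", "Li", "Mg", "Sr", "Ti", "Zr"]

        def matches(known_reps, questioned_reps, k=4.0):
            """Arrays of shape (n_replicates, n_elements); True if every element's
            questioned mean lies within the known mean +/- k standard deviations."""
            known_mean = known_reps.mean(axis=0)
            known_sd = known_reps.std(axis=0, ddof=1)
            q_mean = questioned_reps.mean(axis=0)
            return bool(np.all(np.abs(q_mean - known_mean) <= k * known_sd))

        rng = np.random.default_rng(2)
        known = rng.normal(100.0, 2.0, (5, len(ELEMENTS)))        # 5 replicate runs
        same_source = rng.normal(100.0, 2.0, (3, len(ELEMENTS)))
        diff_source = rng.normal(125.0, 2.0, (3, len(ELEMENTS)))
        print("same source ->", matches(known, same_source))      # usually True
        print("different source ->", matches(known, diff_source)) # usually False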

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bahrdt, J.; Frentrup, W.; Gaupp, A.

    BESSY plans to build a SASE-FEL facility for the energy range from 20 eV to 1000 eV. The energy range will be covered by three APPLE II type undulators with a magnetic length of about 60 m each. This paper summarizes the basic parameters of the FEL-undulators. The magnetic design will be presented. A modified APPLE II design will be discussed which provides higher fields at the expense of reduced horizontal access. GENESIS simulations give an estimate on the tolerances for the beam wander and for gap errors.

  18. An Evaluation of Northern Hemisphere Merged Cloud Analyses from the United States Air Force Cloud Depiction Forecasting System II

    DTIC Science & Technology

    2013-03-01

    layering and typing to provide a vertical stratification of the cloud-filled pixels detected in Level 2. Level 3 output is remapped to the standard AFWA... analyses are compared to one another to see if the most recent analysis also has the lowest estimated error. Optimum interpolation (OI) occurs when...

  19. Determining Type I and Type II Errors when Applying Information Theoretic Change Detection Metrics for Data Association and Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.

    Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification to aid in our ability to correlate data to specific RSOs. In this work, we will focus our attention on the KL Divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible and this information change is bounded by ɛ. Choosing an appropriate value for ɛ for both convergence and change detection is a function of one's risk tolerance. Small ɛ for change detection increases alarm rates while larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I α and Type II β errors when we are forced to make a decision on whether new evidence represents a true change in characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or are even semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred. In this work, we will explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information theoretic content.
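
    For context on how decision thresholds can be tied directly to α and β, Wald's classical sequential probability ratio test (a standard construction, shown here only to make the α/β dependence concrete; the Gaussian hypotheses are arbitrary stand-ins for the paper's evidence models) uses the log-thresholds A = log((1-β)/α) and B = log(β/(1-α)):

        # Wald SPRT sketch: accumulate log-likelihood ratios until a threshold is
        # crossed; the thresholds follow directly from the target alpha and beta.
        # Gaussian H0/H1 are illustrative stand-ins, not the paper's evidence models.
        import numpy as np
        from scipy.stats import norm

        alpha, beta = 0.01, 0.05
        A = np.log((1 - beta) / alpha)   # accept H1 when the cumulative LLR >= A
        B = np.log(beta / (1 - alpha))   # accept H0 when the cumulative LLR <= B

        def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0):
            llr = 0.0
            for i, x in enumerate(samples, start=1):
                llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
                if llr >= A:
                    return "change detected (H1)", i
                if llr <= B:
                    return "no change (H0)", i
            return "undecided", len(samples)

        rng = np.random.default_rng(3)
        print(sprt(rng.normal(1.0, 1.0, 100)))   # data drawn under H1
        print(sprt(rng.normal(0.0, 1.0, 100)))   # data drawn under H0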

  20. Bulk Fermi Surfaces of the Dirac Type-II Semimetallic Candidates MAl3 (Where M = V, Nb, and Ta)

    NASA Astrophysics Data System (ADS)

    Chen, K.-W.; Lian, X.; Lai, Y.; Aryal, N.; Chiu, Y.-C.; Lan, W.; Graf, D.; Manousakis, E.; Baumbach, R. E.; Balicas, L.

    2018-05-01

    We report a de Haas-van Alphen (dHvA) effect study on the Dirac type-II semimetallic candidates MAl3 (where M = V, Nb, and Ta). The angular dependence of their Fermi surface (FS) cross-sectional areas reveals a remarkably good agreement with our first-principles calculations. Therefore, dHvA supports the existence of tilted Dirac cones with Dirac type-II nodes located at 100, 230, and 250 meV above the Fermi level ɛF for VAl3, NbAl3, and TaAl3, respectively, in agreement with the prediction of broken Lorentz invariance in these compounds. However, for all three compounds we find that the cyclotron orbits on their FSs, including an orbit nearly enclosing the Dirac type-II node, yield trivial Berry phases. We explain this via an analysis of the Berry phase where the position of this orbit, relative to the Dirac node, is adjusted within the error implied by the small disagreement between our calculations and the experiments. We suggest that a very small amount of doping could displace ɛF to produce topologically nontrivial orbits encircling their Dirac node(s).

  1. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  2. Five-Point Likert Items: t Test versus Mann-Whitney-Wilcoxon

    ERIC Educational Resources Information Center

    de Winter, Joost C. F.; Dodou, Dimitra

    2010-01-01

    Likert questionnaires are widely used in survey research, but it is unclear whether the item data should be investigated by means of parametric or nonparametric procedures. This study compared the Type I and II error rates of the "t" test versus the Mann-Whitney-Wilcoxon (MWW) for five-point Likert items. Fourteen population…
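
    A minimal sketch of this kind of comparison follows (the two response distributions, group size, and replication count are invented for illustration and are not the study's fourteen populations):

        # Rejection rates of the t test vs. Mann-Whitney-Wilcoxon on 5-point Likert
        # items; the two response distributions and n are illustrative assumptions.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        levels = np.arange(1, 6)
        p_a = [0.10, 0.20, 0.40, 0.20, 0.10]   # group A response probabilities
        p_b = [0.05, 0.15, 0.35, 0.25, 0.20]   # group B, shifted toward higher ratings
        n, reps, alpha = 30, 2000, 0.05

        t_rej = mww_rej = 0
        for _ in range(reps):
            a = rng.choice(levels, size=n, p=p_a)
            b = rng.choice(levels, size=n, p=p_b)
            t_rej += stats.ttest_ind(a, b).pvalue < alpha
            mww_rej += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
        print("rejection rate, t test:", t_rej / reps)
        print("rejection rate, MWW   :", mww_rej / reps)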

  3. Research Quality: Critique of Quantitative Articles in the "Journal of Counseling & Development"

    ERIC Educational Resources Information Center

    Wester, Kelly L.; Borders, L. DiAnne; Boul, Steven; Horton, Evette

    2013-01-01

    The purpose of this study was to examine the quality of quantitative articles published in the "Journal of Counseling & Development." Quality concerns arose in regard to omissions of psychometric information of instruments, effect sizes, and statistical power. Type I and II errors were found. Strengths included stated research…

  4. 45 CFR 286.205 - How will we determine if a Tribe fails to meet the minimum work participation rate(s)?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., financial records, and automated data systems; (ii) The data are free from computational errors and are... records, financial records, and automated data systems; (ii) The data are free from computational errors... records, and automated data systems; (ii) The data are free from computational errors and are internally...

  5. Image defects from surface and alignment errors in grazing incidence telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximate first-order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes to the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and from the OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  6. Theoretical study of the accuracy of the elution by characteristic points method for bi-langmuir isotherms.

    PubMed

    Ravald, L; Fornstedt, T

    2001-01-26

    The bi-Langmuir equation has recently been proven essential for describing chiral chromatographic surfaces, and we therefore investigated the accuracy of the elution by characteristic points (ECP) method for estimating bi-Langmuir isotherm parameters. The ECP calculations were performed on elution profiles generated by the equilibrium-dispersive model of chromatography for five different sets of bi-Langmuir parameters. The ECP method generates two different errors: (i) the error of the ECP-calculated isotherm and (ii) the model error of the fitting to the ECP isotherm. Both errors decreased with increasing column efficiency. Moreover, the model error was strongly affected by the weight of the bi-Langmuir function fitted. For some bi-Langmuir compositions the error of the ECP-calculated isotherm is too large even at high column efficiencies. Guidelines are given on surface types to be avoided and on the column efficiencies and loading factors required for adequate parameter estimation with ECP.

  7. Medicine and aviation: a review of the comparison.

    PubMed

    Randell, R

    2003-01-01

    This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It compares the notion of error in health care with that in aviation, drawing on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.

  8. Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.

    PubMed

    Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P

    2014-01-01

    Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
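
    A toy version of the Monte Carlo exercise described above (the plot dimensions mirror the abstract, but the hot-spot extent, its location, and the sample counts are assumptions): estimate how often simple random sampling misses a contiguous hot spot, i.e. the type II error, as the number of individual samples grows.

        # Probability of missing a hot spot (a type II error) under simple random
        # sampling of a 15 m x 30 m plot; hot-spot extent and location are assumptions.
        import numpy as np

        rng = np.random.default_rng(5)
        plot_w, plot_l = 15.0, 30.0      # plot dimensions in metres (from the abstract)
        hot_w, hot_l = 5.0, 5.0          # assumed hot-spot extent in the northwest corner
        reps = 20000

        def miss_probability(n_samples):
            x = rng.uniform(0, plot_w, (reps, n_samples))
            y = rng.uniform(0, plot_l, (reps, n_samples))
            in_hot = (x < hot_w) & (y > plot_l - hot_l)
            return np.mean(~in_hot.any(axis=1))

        for n in (5, 10, 20, 40):
            print(f"{n:2d} samples -> P(miss hot spot) = {miss_probability(n):.3f}")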

  9. Predicting forest insect flight activity: A Bayesian network approach

    PubMed Central

    Pawson, Stephen M.; Marcot, Bruce G.; Woodberry, Owen G.

    2017-01-01

    Daily flight activity patterns of forest insects are influenced by temporal and meteorological conditions. Temperature and time of day are frequently cited as key drivers of activity; however, complex interactions between multiple contributing factors have also been proposed. Here, we report individual Bayesian network models to assess the probability of flight activity of three exotic insects, Hylurgus ligniperda, Hylastes ater, and Arhopalus ferus in a managed plantation forest context. Models were built from 7,144 individual hours of insect sampling, temperature, wind speed, relative humidity, photon flux density, and temporal data. Discretized meteorological and temporal variables were used to build tree-augmented naïve Bayes networks. Calibration results suggested that the H. ater and A. ferus Bayesian network models had the best fit for low Type I and overall errors, and H. ligniperda had the best fit for low Type II errors. Maximum hourly temperature and time since sunrise had the largest influence on H. ligniperda flight activity predictions, whereas time of day and year had the greatest influence on H. ater and A. ferus activity. The Type II model error for the prediction of no flight activity is reduced by increasing the model's predictive threshold. Improvements in model performance can be made by further sampling, increasing the sensitivity of the flight intercept traps, and replicating sampling in other regions. Predicting insect flight informs an assessment of the potential phytosanitary risks of wood exports. Quantifying this risk allows mitigation treatments to be targeted to prevent the spread of invasive species via international trade pathways. PMID:28953904

  10. Quantum cryptography with perfect multiphoton entanglement.

    PubMed

    Luo, Yuhui; Chan, Kam Tai

    2005-05-01

    Multiphoton entanglement in the same polarization has been shown theoretically to be obtainable by type-I spontaneous parametric downconversion (SPDC), which can generate bright pulses more easily than type-II SPDC. A new quantum cryptographic protocol utilizing polarization pairs with the detected type-I entangled multiphotons is proposed for quantum key distribution. We calculate the information capacity versus photon number corresponding to polarization after considering the transmission loss inside the optical fiber, the detector efficiency, and intercept-resend attacks at the level of channel error. The result compares favorably with all other schemes employing entanglement.

  11. A Prospective Multicenter Evaluation of the Accuracy of a Novel Implanted Continuous Glucose Sensor: PRECISE II.

    PubMed

    Christiansen, Mark P; Klaff, Leslie J; Brazg, Ronald; Chang, Anna R; Levy, Carol J; Lam, David; Denham, Douglas S; Atiee, George; Bode, Bruce W; Walters, Steven J; Kelley, Lynne; Bailey, Timothy S

    2018-03-01

    Persistent use of real-time continuous glucose monitoring (CGM) improves diabetes control in individuals with type 1 diabetes (T1D) and type 2 diabetes (T2D). PRECISE II was a nonrandomized, blinded, prospective, single-arm, multicenter study that evaluated the accuracy and safety of the implantable Eversense CGM system among adult participants with T1D and T2D (NCT02647905). The primary endpoint was the mean absolute relative difference (MARD) between paired Eversense and Yellow Springs Instrument (YSI) reference measurements through 90 days postinsertion for reference glucose values from 40 to 400 mg/dL. Additional endpoints included Clarke Error Grid analysis and sensor longevity. The primary safety endpoint was the incidence of device-related or sensor insertion/removal procedure-related serious adverse events (SAEs) through 90 days postinsertion. Ninety participants received the CGM system. The overall MARD value against reference glucose values was 8.8% (95% confidence interval: 8.1%-9.3%), which was significantly lower than the prespecified 20% performance goal for accuracy (P < 0.0001). Ninety-three percent of CGM values were within 20/20% of reference values over the total glucose range of 40-400 mg/dL. Clarke Error Grid analysis showed 99.3% of samples in the clinically acceptable error zones A (92.8%) and B (6.5%). Ninety-one percent of sensors were functional through day 90. One related SAE (1.1%) occurred during the study for removal of a sensor. The PRECISE II trial demonstrated that the Eversense CGM system provided accurate glucose readings through the intended 90-day sensor life with a favorable safety profile.

  12. VizieR Online Data Catalog: Empirical calibration of the near-IR Ca triplet (Cenarro+ 2001)

    NASA Astrophysics Data System (ADS)

    Cenarro, A. J.; Cardiel, N.; Gorgas, J.; Peletier, R. F.; Vazdekis, A.; Prada, F.

    2001-09-01

    File table contains details of the new near-IR stellar library observed to calibrate the Ca II triplet. It includes the indices CaT*, CaT and PaT measured over the final spectra as well as their corresponding errors. The Henry Draper Catalogue number, other names (mainly HR and BD numbers), coordinates, spectral type, luminosity class, apparent magnitude and atmospheric parameters (as derived in Paper II; Cenarro et al., 2001MNRAS.326..981C) are also given. (1 data file).

  13. Object motion computation for the initiation of smooth pursuit eye movements in humans.

    PubMed

    Wallace, Julian M; Stone, Leland S; Masson, Guillaume S

    2005-04-01

    Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements are extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated with the introduction of more 2D information, to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.

  14. Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2012-09-01

    It is well known that when data are nonnormally distributed, a test of the significance of Pearson's r may inflate Type I error rates and reduce power. Statistics textbooks and the simulation literature provide several alternatives to Pearson's correlation. However, the relative performance of these alternatives has been unclear. Two simulation studies were conducted to compare 12 methods, including Pearson, Spearman's rank-order, transformation, and resampling approaches. With most sample sizes (n ≥ 20), Type I and Type II error rates were minimized by transforming the data to a normal shape prior to assessing the Pearson correlation. Among transformation approaches, a general purpose rank-based inverse normal transformation (i.e., transformation to rankit scores) was most beneficial. However, when samples were both small (n ≤ 10) and extremely nonnormal, the permutation test often outperformed other alternatives, including various bootstrap tests.
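
    A minimal sketch (not the authors' code) of the transformation approach the study found most beneficial for n ≥ 20: convert each variable to rankit scores with a rank-based inverse normal transformation and then run an ordinary Pearson test. The simulated skewed data below are purely illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def rankit(x):
        """Rank-based inverse normal transformation: Phi^-1((rank - 0.5) / n)."""
        ranks = stats.rankdata(x)
        return stats.norm.ppf((ranks - 0.5) / len(x))

    rng = np.random.default_rng(1)
    x = rng.exponential(size=50)              # skewed, nonnormal variable
    y = 0.5 * x + rng.exponential(size=50)    # related, also skewed

    r_raw, p_raw = stats.pearsonr(x, y)                    # Pearson on raw data
    r_rin, p_rin = stats.pearsonr(rankit(x), rankit(y))    # Pearson after RIN transform
    print(f"raw r = {r_raw:.2f} (p = {p_raw:.3g}); RIN r = {r_rin:.2f} (p = {p_rin:.3g})")
    ```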

  15. Oral findings in patients with mucolipidosis type III.

    PubMed

    Cavalcante, Weber Céo; Santos, Luciano Cincurá Silva; Dos Santos, Josiane Nascimento; de Vasconcellos, Sara Juliana de Abreu; de Azevedo, Roberto Almeida; Dos Santos, Jean Nunes

    2012-01-01

    Mucolipidosis type III is a rare autosomal recessive disorder that belongs to a group of storage diseases resulting from an inborn error of lysosomal enzyme metabolism. It is characterized by the gradual onset of signs and symptoms affecting physical and mental development, as well as visual, cardiac, skeletal, and joint changes. Although oral findings associated with mucolipidosis type II have been extensively reported, there is a shortage of information on mucolipidosis type III. This paper presents radiological and histological findings of multiple radiolucent lesions associated with impacted teeth in the jaw of a 16-year-old adolescent with mucolipidosis type III.

  16. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  17. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    PubMed

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication classes included anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors resulted from drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, medical, and regulatory communities is needed to advance patient safety and reduce medication errors.

  18. [Can the scattering of differences from the target refraction be avoided?].

    PubMed

    Janknecht, P

    2008-10-01

    We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Both lens formulae were partially differentiated and Gauss error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within ±1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within ±1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within ±1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = √[(ΔL × (−4.206))² + (ΔVK × 0.9496)² + (ΔDC × (−1.4950))²], where ΔL is the error in measuring axial length, ΔVK the error in measuring anterior chamber depth, and ΔDC the error in measuring corneal power; the propagated error of the SRK-II formula is ΔP = √[(ΔL × (−2.5))² + (ΔDC × (−0.9))²]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
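
    For concreteness, the two propagated-error expressions above can be evaluated directly; the sketch below is not from the paper, the coefficients are those quoted in the abstract, and the example measurement uncertainties are assumptions.

    ```python
    from math import sqrt

    def haigis_propagated_error(d_len, d_acd, d_k):
        """Propagated refraction error (D) for the Haigis formula, using the quoted coefficients."""
        return sqrt((d_len * -4.206) ** 2 + (d_acd * 0.9496) ** 2 + (d_k * -1.4950) ** 2)

    def srk2_propagated_error(d_len, d_k):
        """Propagated refraction error (D) for the SRK-II formula, using the quoted coefficients."""
        return sqrt((d_len * -2.5) ** 2 + (d_k * -0.9) ** 2)

    # Illustrative uncertainties: 0.1 mm axial length, 0.1 mm anterior chamber depth, 0.25 D corneal power.
    print(haigis_propagated_error(0.1, 0.1, 0.25))   # ≈ 0.57 D
    print(srk2_propagated_error(0.1, 0.25))          # ≈ 0.34 D
    ```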

  19. Outcomes after Intravitreal Bevacizumab versus Laser Photocoagulation for Retinopathy of Prematurity: A 5-Year Retrospective Analysis

    PubMed Central

    Hwang, Christopher K.; Hubbard, G. Baker; Hutchinson, Amy K.; Lambert, Scott R.

    2014-01-01

    Purpose To determine the relative effectiveness, major complications, and refractive errors associated with intravitreal bevacizumab (IVB) versus panretinal photocoagulation (PRP) to treat Type 1 retinopathy of prematurity (ROP). Subjects Consecutive infants with Type 1 ROP who received either IVB or PRP between January 2008 and December 2012 and had at least six months of follow-up. Design Retrospective case series. Methods The data from infants treated with either IVB or PRP for Type 1 ROP between January 2008 and December 2012 were recorded from two medical centers in Atlanta, Georgia. Main Outcome Measures Recurrence rate, complication rate, refractive error. Results A total of 54 eyes (28 patients) with Type 1 ROP were evaluated: 22 eyes (11 patients) received IVB, and 32 eyes (17 patients) received PRP. Among the 22 eyes treated with IVB, 16 eyes had Zone I ROP and 6 eyes had posterior Zone II ROP. The numbers of Zone I and Zone II ROP eyes treated with PRP were 5 and 27, respectively. Mean gestational age, birth weight, postmenstrual age at the initial treatment, and follow-up period for the infants receiving IVB were 24.2 weeks, 668.1 grams, 35.1 weeks, and 21.7 weeks, respectively, and for the infants receiving PRP were 24.8 weeks, 701.4 grams, 36.1 weeks, and 34.5 weeks, respectively. ROP recurred in 3/22 (14%) IVB-treated eyes and in 1/32 (3%) PRP-treated eyes. None of the IVB-treated eyes progressed to retinal detachment or developed macular ectopia. Among PRP-treated eyes, only one went on to retinal detachment and five developed macular ectopia. Mean spherical equivalent and postgestational age at the last refraction for IVB-treated eyes were −2.4 D and 22.4 months, respectively, and for PRP-treated eyes were −5.3 D and 37.1 months, respectively. Mean spherical equivalents for Zone I ROP eyes treated with IVB and PRP were −3.7 D and −10.1 D, respectively, and for Zone II ROP eyes were 0.6 D and −4.7 D, respectively. Conclusions Both IVB and PRP are effective treatment options for Type 1 ROP with low complication rates. Zone I ROP was associated with high minus refractive errors in eyes treated with either IVB or PRP. PMID:25687024

  20. Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.

    PubMed

    Cole, Steve W; Galic, Zoran; Zack, Jerome A

    2003-09-22

    Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus

  1. Gaussian Hypothesis Testing and Quantum Illumination.

    PubMed

    Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario

    2017-09-22

    Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.

  2. Merotelic kinetochore attachment in oocyte meiosis II causes sister chromatids segregation errors in aged mice.

    PubMed

    Cheng, Jin-Mei; Li, Jian; Tang, Ji-Xin; Hao, Xiao-Xia; Wang, Zhi-Peng; Sun, Tie-Cheng; Wang, Xiu-Xia; Zhang, Yan; Chen, Su-Ren; Liu, Yi-Xun

    2017-08-03

    Mammalian oocyte chromosomes undergo 2 meiotic divisions to generate haploid gametes. The frequency of chromosome segregation errors during meiosis I increases with age. However, little attention has been paid to the question of how aging affects sister chromatid segregation during oocyte meiosis II. More importantly, how aneuploid metaphase II (MII) oocytes from aged mice evade the spindle assembly checkpoint (SAC) mechanism to complete the rest of meiosis II and form aneuploid embryos remains unknown. Here, we report that MII oocytes from naturally aged mice exhibited substantial errors in chromosome arrangement and configuration compared with young MII oocytes. Interestingly, these errors in aged oocytes had no impact on anaphase II onset and completion, as well as 2-cell formation after parthenogenetic activation. Further study found that merotelic kinetochore attachment occurred more frequently and could stabilize the kinetochore-microtubule interaction to ensure SAC inactivation and anaphase II onset in aged MII oocytes. This orientation could persist largely during anaphase II in aged oocytes, leading to severe chromosome lagging and trailing as well as delay of anaphase II completion. Therefore, merotelic kinetochore attachment in oocyte meiosis II exacerbates age-related genetic instability and is a key source of age-dependent embryo aneuploidy and dysplasia.

  3. Detecting change in advance tree regeneration using forest inventory data: the implications of type II error

    Treesearch

    James A. Westfall; William H. McWilliams

    2012-01-01

    Achieving adequate and desirable forest regeneration is necessary for maintaining native tree species and forest composition. Advance tree seedling and sapling regeneration is the basis of the next stand and serves as an indicator of future composition. The Pennsylvania Regeneration Study was implemented statewide to monitor regeneration on a subset of Forest Inventory...

  4. Medication errors: a prospective cohort study of hand-written and computerised physician order entry in the intensive care unit.

    PubMed

    Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff

    2005-10-05

    The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bedded general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, during the course of his normal chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.

  5. Optimal cost-effective designs of Phase II proof of concept trials and associated go-no go decisions.

    PubMed

    Chen, Cong; Beckman, Robert A

    2009-01-01

    This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.

  6. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 – Type II error).

  7. Comparison of Refractive Error Changes in Retinopathy of Prematurity Patients Treated with Diode and Red Lasers.

    PubMed

    Roohipoor, Ramak; Karkhaneh, Reza; Riazi Esfahani, Mohammad; Alipour, Fateme; Haghighat, Mahtab; Ebrahimiadib, Nazanin; Zarei, Mohammad; Mehrdad, Ramin

    2016-01-01

    To compare refractive error changes in retinopathy of prematurity (ROP) patients treated with diode and red lasers. A randomized double-masked clinical trial was performed, and infants with threshold or prethreshold type 1 ROP were assigned to red or diode laser groups. Gestational age, birth weight, pretreatment cycloplegic refraction, time of treatment, disease stage, zone and disease severity were recorded. Patients received either red or diode laser treatment and were regularly followed up for retina assessment and refraction. The information at month 12 of corrected age was considered for comparison. One hundred and fifty eyes of 75 infants were enrolled in the study. Seventy-four eyes received diode and 76 red laser therapy. The mean gestational age and birth weight of the infants were 28.6 ± 3.2 weeks and 1,441 ± 491 g, respectively. The mean baseline refractive error was +2.3 ± 1.7 dpt. Posttreatment refraction showed a significant myopic shift (mean 2.6 ± 2.0 dpt) with significant difference between the two groups (p < 0.001). There was a greater myopic shift among children with zone I and diode laser treatment (mean 6.00 dpt) and a lesser shift among children with zone II and red laser treatment (mean 1.12 dpt). The linear regression model, using the generalized estimating equation method, showed that the type of laser used has a significant effect on myopic shift even after adjustment for other variables. Myopic shift in laser-treated ROP patients is related to the type of laser used and the involved zone. Red laser seems to cause less myopic shift than diode laser, and those with zone I involvement have a greater myopic shift than those with ROP in zone II. © 2016 S. Karger AG, Basel.

  8. VizieR Online Data Catalog: JMMC Stellar Diameters Catalogue - JSDC. Version 2 (Bourges+, 2017)

    NASA Astrophysics Data System (ADS)

    Bourges, L.; Mella, G.; Lafrasse, S.; Duvert, G.; Chelli, A.; Le Bouquin, J.-B.; Delfosse, X.; Chesneau, O.

    2017-01-01

    The JMMC (Jean-Marie Mariotti Center) Calibrator Workgroup has long developed methods to estimate the angular diameter of stars, and provides this expertise in the SearchCal tool (http://www.jmmc.fr/searchcal). SearchCal creates a dynamical catalogue of stars suitable to calibrate Optical Long-Baseline Interferometry (OLBI) observations from on-line queries of CDS catalogues, according to observational parameters. In essence, SearchCal is limited only by the completeness of the stellar catalogues it uses, and in particular is not limited in magnitude. SearchCal being an application centered on the somewhat restricted OLBI observational purposes, it appeared useful to make our angular diameter estimates available for other purposes through a CDS-based catalog, the JMMC Stellar Diameters Catalogue (JSDC, II/300). This second version of the catalog represents a tenfold improvement both in terms of the number of objects and the precision of the estimates. This is due to a new algorithm using reddening-free quantities -- the pseudomagnitudes, allied to a new database of all the measured stellar angular diameters -- the JMDC (II/345/jmdc), and a rigorous error propagation at all steps of the processing. All this is described in the associated publication by Chelli et al. (2016A&A...589A.112C). The catalog reports the Limb-Darkened Diameter (LDD) and error for 465877 stars, as well as their BVRIJHKLMN magnitudes, Uniform Disk Diameters (UDD) in these same photometric bands, Spectral Type, and two supplementary quality indicators: - the mean-diameter chi-square (see Appendix A.2 of Chelli et al., 2016A&A...589A.112C). - a flag indicating some degree of caution in choosing this star as an OLBI calibrator: known spectroscopic binaries, Algol type stars, etc, see Note (1). The conversion from LDD to UDD in each spectral band is made using mainly the coefficients from J/A+A/556/A86/table16 and J/A+A/554/A98/table16 when possible (compatible spectral types) and following the prescriptions of the JMMC report at http://www.mariotti.fr/doc/approved/JMMC-MEM-2610-0001.pdf in all other cases. The errors on UDD values are omitted as they are similar to the LDD error. Instead of using this catalog to find a suitable OLBI calibrator, the reader is invited to use the SearchCal tool at JMMC (http://www.jmmc.fr/searchcal), which permits a refined search, gives access to other possible calibrators (faint stars not in the Tycho catalog), and benefits from the maintenance of JMMC and CDS databases. This catalog replaces the previous JSDC (II/300/jsdc). Almost all stars in II/300/jsdc are found in II/346 with a consistent diameter, with the exception of 1935 stars whose estimated diameter differs by more than 2 sigmas between the two catalogs. The associated file JSDCv2v1 dis.vot (jsdc dis.dat) summarizes this difference. (5 data files).

  9. Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianwei; Zhang, Yong

    2014-12-07

    We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.

  10. Evaluating methods of correcting for multiple comparisons implemented in SPM12 in social neuroscience fMRI studies: an example from moral psychology.

    PubMed

    Han, Hyemin; Glenn, Andrea L

    2018-06-01

    In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections that allow for more sensitivity may be beneficial and also result in more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too few regions or encompassing too many additional regions) than either clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
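
    Two of the corrections compared in the study are simple enough to sketch directly on a vector of voxel p-values; voxelwise familywise correction based on Random Field Theory and clusterwise thresholds depend on the image geometry and smoothness and are not reproduced here. This is an illustration on simulated null p-values, not the authors' SPM12 pipeline.

    ```python
    import numpy as np

    def bonferroni(p, alpha=0.05):
        """Reject voxels whose p-value survives the Bonferroni-adjusted threshold."""
        return p < alpha / p.size

    def benjamini_hochberg(p, q=0.05):
        """Benjamini-Hochberg FDR: reject the k smallest p-values, where k is the largest i with p_(i) <= q*i/m."""
        order = np.argsort(p)
        thresholds = q * np.arange(1, p.size + 1) / p.size
        passed = p[order] <= thresholds
        k = passed.nonzero()[0].max() + 1 if passed.any() else 0
        reject = np.zeros(p.size, dtype=bool)
        reject[order[:k]] = True
        return reject

    p_values = np.random.default_rng(2).uniform(size=10_000)   # simulated null voxels
    print(bonferroni(p_values).sum(), benjamini_hochberg(p_values).sum())
    ```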

  11. An introduction to multiplicity issues in clinical trials: the what, why, when and how.

    PubMed

    Li, Guowei; Taljaard, Monica; Van den Heuvel, Edwin R; Levine, Mitchell Ah; Cook, Deborah J; Wells, George A; Devereaux, Philip J; Thabane, Lehana

    2017-04-01

    In clinical trials it is not uncommon to face a multiple testing problem which can have an impact on both type I and type II error rates, leading to inappropriate interpretation of trial results. Multiplicity issues may need to be considered at the design, analysis and interpretation stages of a trial. The proportion of trial reports not adequately correcting for multiple testing remains substantial. The purpose of this article is to provide an introduction to multiple testing issues in clinical trials, and to reduce confusion around the need for multiplicity adjustments. We use a tutorial, question-and-answer approach to address the key issues of why, when and how to consider multiplicity adjustments in trials. We summarize the relevant circumstances under which multiplicity adjustments ought to be considered, as well as options for carrying out multiplicity adjustments in terms of trial design factors including Population, Intervention/Comparison, Outcome, Time frame and Analysis (PICOTA). Results are presented in an easy-to-use table and flow diagrams. Confusion about multiplicity issues can be reduced or avoided by considering the potential impact of multiplicity on type I and II errors and, if necessary pre-specifying statistical approaches to either avoid or adjust for multiplicity in the trial protocol or analysis plan. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  12. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that use of samples of size not larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown if mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 and the t-test method with p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. Increasing the sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
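
    The kind of computer-simulated experiment described above can be sketched as follows; this is our illustration rather than the authors' code, and the normal populations, standardized effect size of 1.0, and α = 0.05 are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def error_rates(n, effect=1.0, trials=4000, alpha=0.05):
        """Monte Carlo type I and type II error rates of a two-sample t-test with n per group."""
        type1 = type2 = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            null_group = rng.normal(0.0, 1.0, n)       # exposure with no true effect
            effect_group = rng.normal(effect, 1.0, n)  # exposure with a true effect
            if stats.ttest_ind(control, null_group).pvalue < alpha:
                type1 += 1                             # false positive
            if stats.ttest_ind(control, effect_group).pvalue >= alpha:
                type2 += 1                             # missed effect
        return type1 / trials, type2 / trials

    for n in (3, 5, 6, 9):
        t1, t2 = error_rates(n)
        print(f"n = {n}: type I ≈ {t1:.3f}, type II ≈ {t2:.3f}")
    ```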

  13. Evolution of Query Optimization Methods

    NASA Astrophysics Data System (ADS)

    Hameurlain, Abdelkader; Morvan, Franck

    Query optimization is the most critical phase in query processing. In this paper, we concisely describe the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems, through parallel, distributed, and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).

  14. Omega-3 fatty acid supplementation decreases DNA damage in brain of rats subjected to a chemically induced chronic model of Tyrosinemia type II.

    PubMed

    Carvalho-Silva, Milena; Gomes, Lara M; Scaini, Giselli; Rebelo, Joyce; Damiani, Adriani P; Pereira, Maiara; Andrade, Vanessa M; Gava, Fernanda F; Valvassori, Samira S; Schuck, Patricia F; Ferreira, Gustavo C; Streck, Emilio L

    2017-08-01

    Tyrosinemia type II is an inborn error of metabolism caused by a mutation in a gene encoding the enzyme tyrosine aminotransferase, leading to an accumulation of tyrosine in the body, and is associated with neurologic and developmental difficulties in numerous patients. Because the accumulation of tyrosine promotes oxidative stress and DNA damage, the main aim of this study was to investigate the possible antioxidant and neuroprotective effects of omega-3 treatment in a chemically-induced model of Tyrosinemia type II in hippocampus, striatum and cerebral cortex of rats. Our results showed chronic administration of L-tyrosine increased the frequency and the index of DNA damage, as well as the 8-hydroxy-2'-deoxyguanosine (8-OHdG) levels in the hippocampus, striatum and cerebral cortex. Moreover, omega-3 fatty acid treatment totally prevented increased DNA damage in the striatum and hippocampus, and partially prevented it in the cerebral cortex, whereas the increase in 8-OHdG levels was totally prevented by omega-3 fatty acid treatment in hippocampus, striatum and cerebral cortex. In conclusion, the present study demonstrated that the main accumulating metabolite in Tyrosinemia type II induces DNA damage in hippocampus, striatum and cerebral cortex, possibly mediated by free radical production, and the supplementation with omega-3 fatty acids was able to prevent this damage, suggesting that it could be involved in the prevention of oxidative damage to DNA in this disease. Thus, omega-3 fatty acid supplementation to Tyrosinemia type II patients may represent a new therapeutic approach and a possible adjuvant to the current treatment of this disease.

  15. Evaluation of Mars Entry Reconstructed Trajectories Based on Hypothetical 'Quick-Look' Entry Navigation Data

    NASA Technical Reports Server (NTRS)

    Pastor, P. Rick; Bishop, Robert H.; Striepe, Scott A.

    2000-01-01

    A first-order simulation analysis of the navigation accuracy expected from various Navigation Quick-Look data sets is performed. Here, quick-look navigation data are observations obtained from hypothetical telemetered data transmitted on the fly during a Mars probe's atmospheric entry. In this simulation study, navigation data consist of 3-axis accelerometer and attitude information data. Three entry vehicle guidance types are studied: (I) a maneuvering entry vehicle (as with Mars 01 guidance, where angle of attack and bank angle are controlled); (II) a zero angle-of-attack controlled entry vehicle (as with Mars 98); and (III) a ballistic, or spin-stabilized, entry vehicle (as with Mars Pathfinder). For each type, sensitivity to progressively undersampled navigation data and to the inclusion of sensor errors is characterized. Attempts to mitigate the reconstructed trajectory errors, including smoothing, interpolation, and changing integrator characteristics, are also studied.

  16. Computer Simulations to Study Diffraction Effects of Stacking Faults in Beta-SiC: II. Experimental Verification. 2; Experimental Verification

    NASA Technical Reports Server (NTRS)

    Pujar, Vijay V.; Cawley, James D.; Levine, S. (Technical Monitor)

    2000-01-01

    Earlier results from computer simulation studies suggest a correlation between the spatial distribution of stacking errors in the Beta-SiC structure and features observed in X-ray diffraction patterns of the material. Reported here are experimental results obtained from two types of nominally Beta-SiC specimens, which yield distinct XRD data. These samples were analyzed using high resolution transmission electron microscopy (HRTEM) and the stacking error distribution was directly determined. The HRTEM results compare well to those deduced by matching the XRD data with simulated spectra, confirming the hypothesis that the XRD data is indicative not only of the presence and density of stacking errors, but also that it can yield information regarding their distribution. In addition, the stacking error population in both specimens is related to their synthesis conditions and it appears that it is similar to the relation developed by others to explain the formation of the corresponding polytypes.

  17. Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy

    PubMed Central

    2011-01-01

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685

  18. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    PubMed

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence, standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, the law of the iterated logarithm and the Shuster method, control primarily for inflation of type I error, and two other methods, trial sequential analysis and sequential meta-analysis, control for type I and II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.

    PubMed

    Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N

    2011-04-15

    To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
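
    A minimal sketch of the statistical-power idea in the abstract, under the simplifying assumption of ideal Poisson counting statistics (the paper's model also treats recovery errors and the burden of proof); the 10 organisms per m³ standard, the tested concentration, and the sampled volumes are illustrative.

    ```python
    from scipy import stats

    STANDARD = 10.0   # hypothetical discharge standard: living organisms per cubic metre

    def detection_power(true_concentration, volume_m3, alpha=0.05):
        """P(declare noncompliance) when the true concentration exceeds the standard."""
        mean_if_compliant = STANDARD * volume_m3
        # Smallest count that would occur with probability <= alpha if the water just met the standard.
        critical_count = stats.poisson.isf(alpha, mean_if_compliant) + 1
        return stats.poisson.sf(critical_count - 1, true_concentration * volume_m3)

    for volume in (0.5, 1.0, 3.0, 7.0):
        power = detection_power(true_concentration=15.0, volume_m3=volume)
        print(f"{volume} m^3 sampled -> detection power ≈ {float(power):.3f}")
    ```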

  20. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

    Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances, and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.

  1. Type II fuzzy systems for amyloid plaque segmentation in transgenic mouse brains for Alzheimer's disease quantification

    NASA Astrophysics Data System (ADS)

    Khademi, April; Hosseinzadeh, Danoush

    2014-03-01

    Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology, as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective and error-prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems are much more advantageous than traditional Type I fuzzy systems, since ambiguity in the membership function may be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
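
    A minimal, hypothetical sketch of the interval Type II fuzzy idea described above (not the authors' algorithm): a pixel's plaque membership is an interval whose width, the footprint of uncertainty, is widened where local contrast makes the image ambiguous. The membership centre, width, window size, and contrast rule are all assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def type2_membership(image, centre=0.7, base_sigma=0.15):
        """Interval (lower, upper) membership for 'plaque-like' intensities in a [0, 1] image."""
        # Local contrast relative to the global mean; larger contrast widens the gap
        # between the lower and upper membership functions (the footprint of uncertainty).
        local_mean = uniform_filter(image, size=3)
        contrast = np.abs(local_mean - image.mean())
        alpha = 1.0 + contrast / (contrast.max() + 1e-9)        # adaptive parameter in [1, 2]
        primary = np.exp(-((image - centre) ** 2) / (2.0 * base_sigma ** 2))
        upper = primary ** (1.0 / alpha)                        # widened (upper) membership
        lower = primary ** alpha                                # tightened (lower) membership
        return lower, upper

    image = np.random.default_rng(4).random((64, 64))           # stand-in for a stained section
    lower, upper = type2_membership(image)
    print(float(lower.mean()), float(upper.mean()))
    ```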

  2. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
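
    A simplified binomial illustration (not the paper's calculation, which frames accuracy differently): the probability that the observed med-pass error rate exceeds the 5% citation threshold, as a function of the number of observations, when the true error rate sits just above that threshold. Even thousands of observations leave substantial room for error, which is the point the abstract makes.

    ```python
    from scipy import stats

    CITATION_THRESHOLD = 0.05   # citation triggered when the observed error rate exceeds 5%

    def prob_citation(true_rate, n_observations):
        """P(observed med-pass error rate > 5%) if each pass errs independently at true_rate."""
        max_allowed_errors = int(CITATION_THRESHOLD * n_observations)
        return stats.binom.sf(max_allowed_errors, n_observations, true_rate)

    for n in (50, 200, 500, 2000):
        print(n, round(float(prob_citation(0.055, n)), 3), round(float(prob_citation(0.065, n)), 3))
    ```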

  3. Modelling space-based integral-field spectrographs and their application to Type Ia supernova cosmology

    NASA Astrophysics Data System (ADS)

    Shukla, Hemant; Bonissent, Alain

    2017-04-01

    We present the parameterized simulation of an integral-field unit (IFU) slicer spectrograph and its applications in spectroscopic studies, namely, for probing dark energy with type Ia supernovae. The simulation suite is called the fast-slicer IFU simulator (FISim). The data flow of FISim realistically models the optics of the IFU along with the propagation effects, including cosmological, zodiacal, instrumentation and detector effects. FISim simulates the spectrum extraction by computing the error matrix on the extracted spectrum. The applications for Type Ia supernova spectroscopy are used to establish the efficacy of the simulator in exploring the wider parametric space, in order to optimize the science and mission requirements. The input spectral models utilize observables such as the optical depth and velocity of the Si II absorption feature in the supernova spectrum as the measured parameters for various studies. Using FISim, we introduce a mechanism for preserving the complete state of a system, called the ∂p/∂f matrix, which allows for compression, reconstruction and spectrum extraction; we introduce a novel and efficient method for spectrum extraction, called super-optimal spectrum extraction; and we conduct various studies such as the optimal point spread function, optimal resolution, parameter estimation, etc. We demonstrate that for space-based telescopes, the optimal resolution lies in the region near R ≈ 117 for read noise of 1 e− and 7 e−, using a 400 km s⁻¹ error threshold on the Si II velocity.

  4. The NASA F-15 Intelligent Flight Control Systems: Generation II

    NASA Technical Reports Server (NTRS)

    Buschbacher, Mark; Bosworth, John

    2006-01-01

    The Second Generation (Gen II) control system for the F-15 Intelligent Flight Control System (IFCS) program implements direct adaptive neural networks to demonstrate robust tolerance to faults and failures. The direct adaptive tracking controller integrates learning neural networks (NNs) with a dynamic inversion control law. The term direct adaptive is used because the error between the reference model and the aircraft response is being compensated or directly adapted to minimize error without regard to knowing the cause of the error. No parameter estimation is needed for this direct adaptive control system. In the Gen II design, the feedback errors are regulated with a proportional-plus-integral (PI) compensator. This basic compensator is augmented with an online NN that changes the system gains via an error-based adaptation law to improve aircraft performance at all times, including normal flight, system failures, mispredicted behavior, or changes in behavior resulting from damage.
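
    A highly simplified, hypothetical sketch of the direct-adaptive idea summarized above (not the Gen II flight code): a PI compensator acting on the reference-model tracking error is augmented by an adaptive term driven only by that error, with no estimation of what caused the degradation. The toy first-order plant, reference model, gains, and adaptation law are all assumptions.

    ```python
    dt = 0.01                      # integration step (s)
    kp, ki, gamma = 2.0, 0.5, 5.0  # PI gains and adaptation rate (illustrative)
    integ = 0.0                    # integral of the tracking error
    w = 0.0                        # adaptive augmentation weight
    x = 0.0                        # "aircraft" response (toy first-order plant)
    x_ref = 0.0                    # reference-model state

    for _ in range(2000):                          # 20 s of simulated flight
        r = 1.0                                    # pilot command (step input)
        x_ref += dt * (-2.0 * x_ref + 2.0 * r)     # reference model the aircraft should follow
        e = x_ref - x                              # tracking error (model minus response)
        integ += e * dt
        u = kp * e + ki * integ + w * e            # PI compensation plus adaptive augmentation
        w += gamma * e * e * dt                    # error-driven adaptation law (assumed form)
        x += dt * (-1.0 * x + 0.5 * u)             # degraded plant: reduced control effectiveness

    print(f"final tracking error: {x_ref - x:.4f}, adapted weight: {w:.2f}")
    ```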

  5. Five-equation and robust three-equation methods for solution verification of large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dutta, Rabijit; Xing, Tao

    2018-02-01

    This study evaluates the recently developed general framework for solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and therefore can be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven- and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS)-based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts a reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.

  6. Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network

    DTIC Science & Technology

    1989-08-01

    ...achieved by using a low rate (r = 0.5), high constraint length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable rate code ...investigated the use of convolutional codes in Type II Hybrid ARQ protocols. The error

  7. Analysis of Sample Size, Counting Time, and Plot Size from an Avian Point Count Survey on Hoosier National Forest, Indiana

    Treesearch

    Frank R. Thompson; Monica J. Schwalbach

    1995-01-01

    We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined the sample sizes required to detect differences in means, and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and Type II error rates of <0.1 and 0.2. Sample...
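
    The kind of sample-size requirement quoted above can be illustrated with the standard two-sample normal approximation; the sketch below is not the authors' calculation, and the effect size, standard deviation, and error rates are placeholder values chosen only to match the stated Type I (<0.1) and Type II (0.2) rates.

    ```python
    # Illustrative power-based sample-size calculation (two-sample normal approximation):
    # n per group = 2 * ((z_alpha + z_beta) * sigma / delta)^2.
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.10, beta=0.20, two_sided=False):
        """Points needed per group to detect a difference `delta` in mean counts."""
        z_alpha = norm.ppf(1 - (alpha / 2 if two_sided else alpha))
        z_beta = norm.ppf(1 - beta)          # power = 1 - beta
        return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

    # Hypothetical numbers: detect a 0.2 birds/point difference with SD 1.2 birds/point.
    print(round(n_per_group(delta=0.2, sigma=1.2, alpha=0.10, beta=0.20)))
    ```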

  8. MENO-II: An AI-Based Programming Tutor.

    DTIC Science & Technology

    1983-08-01

    The TUTORing component then attempts to infer the misconception that might underlie the bug and present the student with remedial instruction. We tested the BUG...of the student's program. ...student errors; MENO-II can cope with 18 different types of program bugs. These bugs are tied explicitly to a knowledge base of potential misconceptions

  9. Microwave Landing System. Phase II. Tracker Error Study.

    DTIC Science & Technology

    1974-12-01

    the runways and environs. The geographical locations of the four phototheodolite towers are indicated on Figure 1-1. A Contraves Model C phototheodolite... [fragment of a flattened system-specification table: antenna temperature 400 K above 50° elevation (dark sky); side-lobe location 1.72° (1st); monopulse scan; rectangular-waveguide RF transmission line; line loss 1.3 dB receiving, 2.3 dB transmitting; azimuth coverage 360°; elevation coverage -10° to 190° (tracking -10° to 85°); range accuracy...]

  10. On the use of unshielded cables in ionization chamber dosimetry for total-skin electron therapy.

    PubMed

    Chen, Z; Agostinelli, A; Nath, R

    1998-03-01

    The dosimetry of total-skin electron therapy (TSET) usually requires ionization chamber measurements in a large electron beam (up to 120 cm x 200 cm). Exposing the chamber's electric cable, its connector and part of the extension cable to the large electron beam will introduce unwanted electronic signals that may lead to inaccurate dosimetry results. While the best strategy to minimize the cable-induced electronic signal is to shield the cables and their connectors from the primary electrons, as has been recommended by the AAPM Task Group Report 23 on TSET, cables without additional shielding are often used in TSET dosimetry measurements for logistic reasons, for example when an automatic scanning dosimetry system is used. This paper systematically investigates the consequences and the acceptability of using an unshielded cable in ionization chamber dosimetry in a large TSET electron beam. In this paper, we separate cable-induced signals into two types. The type-I signal includes all charges induced which do not change sign upon switching the chamber polarity, and type II includes all those that do. The type-I signal is easily cancelled by the polarity averaging method. The type-II cable-induced signal is independent of the depth of the chamber in a phantom, and its magnitude relative to the true signal determines the acceptability of a cable for use under unshielded conditions. Three different cables were evaluated in two different TSET beams in this investigation. For dosimetry near the depth of maximum buildup, the cable-induced dosimetry error was found to be less than 0.2% when the two-polarity averaging technique was applied. At greater depths, the relative dosimetry error was found to increase at a rate approximately equal to the inverse of the electron depth dose. Since the application of the two-polarity averaging technique requires a constant-irradiation condition, it was demonstrated that an additional error of up to 4% could be introduced if the unshielded cable's spatial configuration were altered during the two-polarity measurements. This suggests that automatic scanning systems with unshielded cables should not be used in TSET ionization chamber dosimetry. However, the data did show that an unshielded cable may be used in TSET ionization chamber dosimetry if the size of cable-induced error in a given TSET beam is pre-evaluated and the measurement is carefully conducted. When such an evaluation has not been performed, additional shielding should be applied to the cable being used, making measurements at multiple points difficult.

  11. ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.

    2010-08-10

    A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
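
    The two-step recipe described above (fix a Type I threshold, then find the intensity detected with specified power) can be sketched for the simple case of Poisson counts over a known mean background; the background level, alpha, and beta values below are illustrative placeholders, not the paper's worked examples.

    ```python
    # Sketch of an upper limit defined by a detection threshold (Type I control) plus
    # statistical power (Type II control), assuming Poisson counts with known background b.
    from scipy.stats import poisson

    def detection_threshold(b, alpha=0.0013):
        """Smallest count n* with P(N >= n* | background b) <= alpha."""
        n = 0
        while poisson.sf(n - 1, b) > alpha:   # sf(n-1) = P(N >= n)
            n += 1
        return n

    def upper_limit(b, alpha=0.0013, beta=0.5, step=0.01):
        """Smallest source intensity s detected with probability >= 1-beta at the threshold."""
        n_star = detection_threshold(b, alpha)
        s = 0.0
        while poisson.sf(n_star - 1, b + s) < 1 - beta:   # Type II error <= beta
            s += step
        return s

    print(upper_limit(b=3.0, alpha=0.0013, beta=0.5))
    ```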

  12. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    PubMed Central

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on CMM as a rigid-body and it requires a detailed mapping of the CMM’s behavior. In this paper a new model type of error compensation is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The non-explained variability by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests are presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  13. Vector space methods of photometric analysis. II - Refinement of the MK grid for B stars. III - The two components of ultraviolet reddening

    NASA Technical Reports Server (NTRS)

    Massa, D.

    1980-01-01

    This paper discusses systematic errors which arise from exclusive use of the MK system to determine reddening. It is found that implementation of uvby, beta photometry to refine the qualitative MK grid substantially reduces stellar mismatch error. A working definition of 'identical' uvby, beta types is investigated and the relationship of uvby to B-V color excesses is determined. A comparison is also made of the hydrogen-based uvby, beta types with the MK system based on He and metal lines. A small core-correlated effective temperature-luminosity error in the MK system for the early B stars is observed, along with a breakdown of the MK luminosity criteria for the late B stars. The second part investigates the wavelength dependence of interstellar extinction in the ultraviolet wavelength range observed with the TD-1 satellite. In this study the sets of identical stars employed to find reddening are determined more precisely than in previous studies and consist only of normal, nonsupergiant stars. A multivariate analysis-of-variance technique in an unbiased coordinate system is used for determining the wavelength dependence of reddening.

  14. Correcting Too Much or Too Little? The Performance of Three Chi-Square Corrections.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2015-01-01

    This simulation study investigates the performance of three test statistics, T1, T2, and T3, used to evaluate structural equation model fit under non-normal data conditions. T1 is the well-known mean-adjusted statistic of Satorra and Bentler. T2 is the mean-and-variance adjusted statistic of Satterthwaite type, in which the degrees of freedom are manipulated. T3 is a recently proposed version of T2 that does not manipulate degrees of freedom. Discrepancies between these statistics and their nominal chi-square distribution in terms of errors of Type I and Type II are investigated. All statistics are shown to be sensitive to increasing kurtosis in the data, with Type I error rates often far off the nominal level. Under excess kurtosis, true models are generally over-rejected by T1 and under-rejected by T2 and T3, which have similar performance in all conditions. Under misspecification there is a loss of power with increasing kurtosis, especially for T2 and T3. The coefficient of variation of the nonzero eigenvalues of a certain matrix is shown to be a reliable indicator of the adequacy of these statistics.

  15. Optimization of Stripping Voltammetric Sensor by a Back Propagation Artificial Neural Network for the Accurate Determination of Pb(II) in the Presence of Cd(II).

    PubMed

    Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang

    2016-09-21

    An easy, but effective, method has been proposed to detect and quantify Pb(II) in the presence of Cd(II) based on a Bi/glassy carbon electrode (Bi/GCE) with the combination of a back propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effects of Cd(II) at different concentrations on the stripping responses of Pb(II) were studied. The results indicate that the presence of Cd(II) will reduce the prediction precision of a direct calibration model. Therefore, a two-input and one-output BP-ANN was built for the optimization of the stripping voltammetric sensor, which considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was tested with regard to the mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, a real-sample analysis was performed to determine trace Pb(II) in soil specimens with satisfactory results.
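
    A two-input, one-output calibration network of the kind described can be sketched as below; this uses a scikit-learn multilayer perceptron rather than the authors' BP-ANN, and the peak-current/concentration values, network size, and training settings are hypothetical illustrations only.

    ```python
    # Illustrative two-input (Pb and Cd peak currents), one-output (Pb concentration)
    # neural-network calibration, in the spirit of the BP-ANN model described above.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical training set: stripping peak currents (arbitrary units) vs. Pb(II) level.
    X = np.array([[1.2, 0.5], [2.3, 0.6], [3.1, 1.4], [4.0, 1.5], [5.2, 2.8], [6.1, 3.0]])
    y = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
    model.fit(X, y)
    print(model.predict([[3.5, 1.6]]))   # predicted Pb(II) concentration for a new sample
    ```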

  16. Mixed response and time-to-event endpoints for multistage single-arm phase II design.

    PubMed

    Lai, Xin; Zee, Benny Chung-Ying

    2015-06-04

    The objective of phase II cancer clinical trials is to determine if a treatment has sufficient activity to warrant further study. The efficiency of a conventional phase II trial design has been the object of considerable debate, particularly when the study regimen is characteristically cytostatic. At the time of development of a phase II cancer trial, we accumulated clinical experience regarding the time to progression (TTP) for similar classes of drugs and for standard therapy. By considering the time to event (TTE) in addition to the tumor response endpoint, a mixed-endpoint phase II design may increase the efficiency and ability of selecting promising cytotoxic and cytostatic agents for further development. We proposed a single-arm phase II trial design by extending the Zee multinomial method to fully use mixed endpoints with tumor response and the TTE. In this design, the dependence between the probability of response and the TTE outcome is modeled through a Gaussian copula. Given the type I and type II errors and the hypothesis as defined by the response rate (RR) and median TTE, such as median TTP, the decision rules for a two-stage phase II trial design can be generated. We demonstrated through simulation that the proposed design has a smaller expected sample size and higher early stopping probability under the null hypothesis than designs based on a single-response endpoint or a single TTE endpoint. The proposed design is more efficient for screening new cytotoxic or cytostatic agents and less likely to miss an effective agent than the alternative single-arm design.

  17. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
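
    A minimal sketch of a weighted negative binomial regression of a count outcome on a predictor, with session length entering as weights, is shown below; it uses Python/statsmodels rather than the SPSS or R syntax the authors provide, and the simulated variables, dispersion parameter, and weighting scheme are assumptions for illustration.

    ```python
    # Weighted negative binomial regression of an overdispersed count outcome
    # (e.g., number of coded therapist behaviors) on a predictor, weighted by session length.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    session_minutes = rng.uniform(20, 60, n)
    x = rng.normal(size=n)                                  # e.g., a therapist skill rating
    mu = np.exp(0.5 + 0.3 * x) * session_minutes / 40.0     # longer sessions -> more counts
    counts = rng.negative_binomial(n=2, p=2 / (2 + mu))     # overdispersed counts

    X = sm.add_constant(x)
    model = sm.GLM(counts, X,
                   family=sm.families.NegativeBinomial(alpha=0.5),
                   freq_weights=session_minutes / session_minutes.mean())
    print(model.fit().summary())
    ```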

  18. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally-technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126

  19. Herpetiform keratitis and palmoplantar hyperkeratosis: warning signs for Richner-Hanhart syndrome.

    PubMed

    Soares, Diogo C; Stroparo, Mariana N; Lian, Yu C; Takakura, Cristina Y; Wolf, Sabrina; Betz, Regina; Kim, Chong A

    2017-05-01

    Richner-Hanhart syndrome (RHS, tyrosinemia type II) is a rare, autosomal recessive inborn error of tyrosine metabolism caused by tyrosine aminotransferase deficiency. It is characterized by photophobia due to keratitis, painful palmoplantar hyperkeratosis, variable mental retardation, and elevated serum tyrosine levels. Patients are often misdiagnosed with herpes simplex keratitis. We report on a boy from Brazil who presented with bilateral keratitis secondary to RHS, which had earlier been misdiagnosed as herpes simplex keratitis.

  20. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  1. 49 CFR Appendix F to Part 240 - Medical Standards Guidelines

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... greater guidance on the procedures that should be employed in administering the vision and hearing... more errors on plates 1-15. MULTIFUNCTION VISION TESTER Keystone Orthoscope Any error. OPTEC 2000 Any error. Titmus Vision Tester Any error. Titmus II Vision Tester Any error. (3) In administering any of...

  2. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
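
    The cluster-bootstrap resampling scheme itself can be illustrated independently of the survival model; in the sketch below an ordinary least-squares slope stands in for the Cox regression coefficient, and all data are simulated, so this shows only the resampling logic, not the authors' estimator.

    ```python
    # Cluster bootstrap: resample whole clusters with replacement, recompute the estimate
    # on each resample, and take the spread of the estimates as a cluster-robust SE.
    import numpy as np

    rng = np.random.default_rng(1)
    n_clusters, per_cluster = 30, 10
    cluster_id = np.repeat(np.arange(n_clusters), per_cluster)
    u = rng.normal(size=n_clusters)[cluster_id]            # latent cluster-level effect
    x = rng.normal(size=cluster_id.size)
    y = 1.0 + 0.5 * x + u + rng.normal(size=cluster_id.size)

    def slope(xv, yv):
        return np.polyfit(xv, yv, 1)[0]                    # stand-in for a Cox coefficient

    boot = []
    for _ in range(1000):
        sampled = rng.choice(n_clusters, size=n_clusters, replace=True)
        idx = np.concatenate([np.where(cluster_id == c)[0] for c in sampled])
        boot.append(slope(x[idx], y[idx]))

    print("cluster-bootstrap SE of slope:", np.std(boot, ddof=1))
    ```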

  3. Surface roughness considerations for atmospheric correction of ocean color sensors. I - The Rayleigh-scattering component. II - Error in the retrieved water-leaving radiance

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Wang, Menghua

    1992-01-01

    The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L sub r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L sub r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  4. A risk-based approach to flood management decisions in a nonstationary world

    NASA Astrophysics Data System (ADS)

    Rosner, Ana; Vogel, Richard M.; Kirshen, Paul H.

    2014-03-01

    Traditional approaches to flood management in a nonstationary world begin with a null hypothesis test of "no trend" and its likelihood, with little or no attention given to the likelihood that we might ignore a trend if it really existed. Concluding a trend exists when it does not, or rejecting a trend when it exists are known as type I and type II errors, respectively. Decision-makers are poorly served by statistical and/or decision methods that do not carefully consider both over- and under-preparation errors, respectively. Similarly, little attention is given to how to integrate uncertainty in our ability to detect trends into a flood management decision context. We show how trend hypothesis test results can be combined with an adaptation's infrastructure costs and damages avoided to provide a rational decision approach in a nonstationary world. The criterion of expected regret is shown to be a useful metric that integrates the statistical, economic, and hydrological aspects of the flood management problem in a nonstationary world.
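
    The expected-regret comparison can be reduced to a toy calculation of the kind sketched below; the trend probability, adaptation cost, and avoided damages are hypothetical placeholders and the decision rule is a simplification of the framing described above, not the authors' model.

    ```python
    # Toy expected-regret comparison: regret of adapting when no trend exists (wasted cost)
    # versus regret of waiting when the trend is real (damages adaptation would have avoided).
    def expected_regret(p_trend, adapt_cost, damages_if_trend):
        regret_adapt = (1 - p_trend) * adapt_cost
        regret_wait = p_trend * max(damages_if_trend - adapt_cost, 0)
        return regret_adapt, regret_wait

    r_adapt, r_wait = expected_regret(p_trend=0.4, adapt_cost=10e6, damages_if_trend=60e6)
    print("decision:", "adapt" if r_adapt < r_wait else "wait", r_adapt, r_wait)
    ```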

  5. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    PubMed

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
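
    The core of a phase-resampling surrogate can be sketched for a single real-valued series as below; this is a generic phase-randomization routine under assumed settings (univariate series, randomized phases with fixed amplitude spectrum), not the authors' multivariate implementation.

    ```python
    # Phase-randomization surrogate: keep the amplitude spectrum, randomize the phases,
    # so linear autocorrelation is preserved while any directed structure is destroyed.
    import numpy as np

    def phase_surrogate(x, rng):
        n = len(x)
        spec = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, size=spec.size)
        phases[0] = 0.0                      # keep the mean (DC) term real
        if n % 2 == 0:
            phases[-1] = 0.0                 # keep the Nyquist term real
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)

    rng = np.random.default_rng(0)
    x = np.sin(np.linspace(0, 20 * np.pi, 500)) + rng.normal(scale=0.5, size=500)
    null_draws = [phase_surrogate(x, rng) for _ in range(200)]   # surrogate null distribution
    ```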

  6. A shift from significance test to hypothesis test through power analysis in medical research.

    PubMed

    Singh, G

    2006-01-01

    Medical research literature until recently exhibited a substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and reach conclusions in their own ways. Advances in computing techniques and the availability of statistical software have resulted in increasing application of power calculations in medical research, and thereby in reporting the results of significance tests in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.

  7. IMPROVEMENT OF SMVGEAR II ON VECTOR AND SCALAR MACHINES THROUGH ABSOLUTE ERROR TOLERANCE CONTROL (R823186)

    EPA Science Inventory

    The computer speed of SMVGEAR II was improved markedly on scalar and vector machines with relatively little loss in accuracy. The improvement was due to a method of frequently recalculating the absolute error tolerance instead of keeping it constant for a given set of chemistry. ...

  8. The Red Edge Problem in asteroid band parameter analysis

    NASA Astrophysics Data System (ADS)

    Lindsay, Sean S.; Dunn, Tasha L.; Emery, Joshua P.; Bowles, Neil E.

    2016-04-01

    Near-infrared reflectance spectra of S-type asteroids contain two absorptions at 1 and 2 μm (band I and II) that are diagnostic of mineralogy. A parameterization of these two bands is frequently employed to determine the mineralogy of S(IV) asteroids through the use of ordinary chondrite calibration equations that link the mineralogy to band parameters. The most widely used calibration study uses a Band II terminal wavelength point (red edge) at 2.50 μm. However, due to the limitations of the NIR detectors on prominent telescopes used in asteroid research, spectral data for asteroids are typically only reliable out to 2.45 μm. We refer to this discrepancy as "The Red Edge Problem." In this report, we evaluate the associated errors for measured band area ratios (BAR = Area BII/BI) and calculated relative abundance measurements. We find that the Red Edge Problem is often not the dominant source of error for the observationally limited red edge set at 2.45 μm, but it frequently is for a red edge set at 2.40 μm. The error, however, is one sided and therefore systematic. As such, we provide equations to adjust measured BARs to values with a different red edge definition. We also provide new ol/(ol+px) calibration equations for red edges set at 2.40 and 2.45 μm.
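
    A band-area-ratio computation of the general form discussed can be sketched as follows; the band limits, the straight-line continuum, and the synthetic spectrum are all assumptions for illustration, and the adjustable red-edge argument is only meant to show how the choice of Band II endpoint shifts the measured BAR.

    ```python
    # Illustrative BAR computation: remove a linear continuum over each band and
    # integrate the continuum-removed absorption; BAR = Band II area / Band I area.
    import numpy as np

    def band_area(wave, refl, lo, hi):
        m = (wave >= lo) & (wave <= hi)
        w, r = wave[m], refl[m]
        continuum = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])   # straight-line continuum
        return np.trapz(1.0 - r / continuum, w)

    def band_area_ratio(wave, refl, red_edge=2.45):
        bI = band_area(wave, refl, 0.75, 1.35)        # Band I limits (assumed)
        bII = band_area(wave, refl, 1.40, red_edge)   # Band II limits (assumed)
        return bII / bI

    # Synthetic two-band spectrum; changing red_edge shows the one-sided shift in BAR.
    wave = np.linspace(0.7, 2.5, 500)
    refl = (1.0 - 0.3 * np.exp(-0.5 * ((wave - 0.95) / 0.12) ** 2)
                - 0.2 * np.exp(-0.5 * ((wave - 1.95) / 0.25) ** 2))
    print(band_area_ratio(wave, refl, 2.45), band_area_ratio(wave, refl, 2.40))
    ```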

  9. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R²) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.

  10. Model Stellar Atmospheres and Real Stellar Atmospheres and Status of the ATLAS12 Opacity Sampling Program and of New Programs for Rosseland and for Distribution Function Opacity

    NASA Technical Reports Server (NTRS)

    Kurucz, Robert L.

    1996-01-01

    I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible such as the factor of ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity, and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration. I have also developed a new opacity-sampling version of my model atmosphere program called ATLAS12. It recognizes more than 1000 atomic and molecular species, each in up to 10 isotopic forms. It can treat all ions of the elements up through Zn and the first 5 ions of heavier elements up through Es. The elemental and isotopic abundances are treated as variables with depth. The fluxes predicted by ATLAS12 are not accurate in intermediate or narrow bandpass intervals because the sample size is too small. A special stripped version of the spectrum synthesis program SYNTHE is used to generate the surface flux for the converged model using the line data on CD-ROMs 1 and 15. ATLAS12 can be used to produce improved models for Am and Ap stars. It should be very useful for investigating diffusion effects in atmospheres. It can be used to model exciting stars for H II regions with abundances consistent with those of the H II region. These programs and line files will be distributed on CD-ROMs.

  11. The effect of normalization of Partial Directed Coherence on the statistical assessment of connectivity patterns: a simulation study.

    PubMed

    Toppi, J; Petti, M; Vecchiato, G; Cincotti, F; Salinari, S; Mattia, D; Babiloni, F; Astolfi, L

    2013-01-01

    Partial Directed Coherence (PDC) is a spectral multivariate estimator of effective connectivity, relying on the concept of Granger causality. Even though its original definition derived directly from information theory, two modifications were introduced in order to provide better physiological interpretations of the estimated networks: i) normalization of the estimator according to rows, and ii) a squared transformation. In the present paper we investigated the effect of PDC normalization on the performance achieved when the statistical validation process is applied to the investigated connectivity patterns under different conditions of signal-to-noise ratio (SNR) and amount of data available for the analysis. Results of the statistical analysis revealed an effect of PDC normalization only on the percentages of type I and type II errors incurred when the shuffling procedure was used for the assessment of connectivity patterns. No effect of the PDC formulation was found on the performance achieved when the validation process was instead executed by means of the asymptotic statistic approach. Moreover, the percentages of both false positives and false negatives committed by the asymptotic statistic approach are always lower than those obtained with the shuffling procedure, for each type of normalization.

  12. Statistical considerations in evaluating pharmacogenomics-based clinical effect for confirmatory trials.

    PubMed

    Wang, Sue-Jane; O'Neill, Robert T; Hung, Hm James

    2010-10-01

    The current practice for seeking genomically favorable patients in randomized controlled clinical trials uses genomic convenience samples. We discuss the extent of imbalance, confounding, bias, design efficiency loss, type I error, and type II error that can occur in the evaluation of convenience samples, particularly when they are small; articulate statistical considerations for a reasonable sample size that minimizes the chance of imbalance; and highlight the importance of replicating a subgroup finding in independent studies. Four case examples reflecting recent regulatory experiences are used to underscore the problems with convenience samples. The probability of imbalance for a pre-specified subgroup is provided to elucidate the sample size needed to minimize the chance of imbalance. We use an example drug development program to highlight the level of scientific rigor needed, with evidence replicated for a pre-specified subgroup claim. The convenience samples evaluated ranged from 18% to 38% of the intent-to-treat samples, with sample sizes ranging from 100 to 5000 patients per arm. Baseline imbalance can occur with probability higher than 25%. Mild to moderate multiple confounders yielding the same directional bias in favor of the treated group can make the treatment groups incomparable at baseline and result in a false-positive conclusion that there is a treatment difference. Conversely, if the same directional bias favors the placebo group or there is a loss in design efficiency, the type II error can increase substantially. Pre-specification of a genomic subgroup hypothesis is useful only for some degree of type I error control. Complete ascertainment of genomic samples in a randomized controlled trial should be the first step to explore whether a favorable genomic patient subgroup suggests a treatment effect when there is no clear prior knowledge and understanding about how the mechanism of a drug target affects the clinical outcome of interest. When stratified randomization based on genomic biomarker status cannot be implemented in designing a pharmacogenomics confirmatory clinical trial, and there is one genomic biomarker prognostic for clinical response, then, as a general rule of thumb, a sample size of at least 100 patients may need to be considered for the lower-prevalence genomic subgroup to minimize the chance of an imbalance of 20% or more in the prevalence of the genomic marker. The sample size may need to be at least 150, 350, and 1350, respectively, if an imbalance of 15%, 10%, or 5% is of concern.
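
    The "probability of imbalance" idea referred to above can be approximated by a small Monte Carlo calculation; the marker prevalence, per-arm sample sizes, and imbalance thresholds below are placeholder values chosen only to illustrate how the probability shrinks as the sample size grows.

    ```python
    # Monte Carlo sketch: with n patients per arm and marker prevalence p, how often does
    # the observed marker prevalence differ between the two arms by at least `diff`?
    import numpy as np

    def prob_imbalance(n_per_arm, p, diff, n_sim=100_000, seed=0):
        rng = np.random.default_rng(seed)
        a = rng.binomial(n_per_arm, p, n_sim) / n_per_arm
        b = rng.binomial(n_per_arm, p, n_sim) / n_per_arm
        return np.mean(np.abs(a - b) >= diff)

    for n in (50, 100, 150, 350):
        print(n, prob_imbalance(n_per_arm=n, p=0.3, diff=0.10))
    ```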

  13. Upset Characterization of the PowerPC405 Hard-core Processor Embedded in Virtex-II Pro Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Swift, Gary M.; Allen, Gregory S.; Farmanesh, Farhad; George, Jeffrey; Petrick, David J.; Chayab, Fayez

    2006-01-01

    Shown in this presentation are recent results for the upset susceptibility of the various types of memory elements in the embedded PowerPC405 in the Xilinx V2P40 FPGA. For critical flight designs where configuration upsets are mitigated effectively through appropriate design triplication and configuration scrubbing, these upsets of processor elements can dominate the system error rate. Data from irradiations with both protons and heavy ions are given and compared using available models.

  14. Coding for Efficient Image Transmission

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    NASA publication, second in a series on data-coding techniques for noiseless channels. The techniques can be used even in noisy channels, provided the data are further processed with a Reed-Solomon or other error-correcting code. Techniques are discussed in the context of transmission of monochrome imagery from the Voyager II spacecraft but are applicable to other streams of data. The objective of this type of coding is to "compress" data; that is, to transmit using as few bits as possible by omitting as much as possible of the portion of the information repeated in subsequent samples (or picture elements).

  15. Near Misses in Financial Trading: Skills for Capturing and Averting Error.

    PubMed

    Leaver, Meghan; Griffiths, Alex; Reader, Tom

    2018-05-01

    The aims of this study were (a) to determine whether near-miss incidents in financial trading contain information on the operator skills and systems that detect and prevent near misses and the patterns and trends revealed by these data and (b) to explore if particular operator skills and systems are found as important for avoiding particular types of error on the trading floor. In this study, we examine a cohort of near-miss incidents collected from a financial trading organization using the Financial Incident Analysis System and report on the nontechnical skills and systems that are used to detect and prevent error in this domain. One thousand near-miss incidents are analyzed using distribution, mean, chi-square, and associative analysis to describe the data; reliability is provided. Slips/lapses (52%) and human-computer interface problems (21%) often occur alone and are the main contributors to error causation, whereas the prevention of error is largely a result of teamwork (65%) and situation awareness (46%) skills. No matter the cause of error, situation awareness and teamwork skills are used most often to detect and prevent the error. Situation awareness and teamwork skills appear universally important as a "last line" of defense for capturing error, and data from incident-monitoring systems can be analyzed in a fashion more consistent with a "Safety-II" approach. This research provides data for ameliorating risk within financial trading organizations, with implications for future risk management programs and regulation.

  16. Pharmacogenetic excitation of dorsomedial prefrontal cortex restores fear prediction error.

    PubMed

    Yau, Joanna Oi-Yue; McNally, Gavan P

    2015-01-07

    Pavlovian conditioning involves encoding the predictive relationship between a conditioned stimulus (CS) and an unconditioned stimulus, so that synaptic plasticity and learning is instructed by prediction error. Here we used pharmacogenetic techniques to show a causal relation between activity of rat dorsomedial prefrontal cortex (dmPFC) neurons and fear prediction error. We expressed the excitatory hM3Dq designer receptor exclusively activated by a designer drug (DREADD) in dmPFC and isolated actions of prediction error by using an associative blocking design. Rats were trained to fear the visual CS (CSA) in stage I via pairings with footshock. Then in stage II, rats received compound presentations of visual CSA and auditory CS (CSB) with footshock. This prior fear conditioning of CSA reduced the prediction error during stage II to block fear learning to CSB. The group of rats that received AAV-hSYN-eYFP vector that was treated with clozapine-N-oxide (CNO; 3 mg/kg, i.p.) before stage II showed blocking when tested in the absence of CNO the next day. In contrast, the groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were treated with CNO before stage II training did not show blocking; learning toward CSB was restored. This restoration of prediction error and fear learning was specific to the injection of CNO because groups that received AAV-hSYN-hM3Dq and AAV-CaMKIIα-hM3Dq that were injected with vehicle before stage II training did show blocking. These effects were not attributable to the DREADD manipulation enhancing learning or arousal, increasing fear memory strength or asymptotic levels of fear learning, or altering fear memory retrieval. Together, these results identify a causal role for dmPFC in a signature of adaptive behavior: using the past to predict future danger and learning from errors in these predictions. Copyright © 2015 the authors 0270-6474/15/350074-10$15.00/0.

  17. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  18. [Effect of Mn(II) on the error-prone DNA polymerase iota activity in extracts from human normal and tumor cells].

    PubMed

    Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V

    2013-01-01

    The DNA polymerase iota (Pol iota), which has some peculiar features and is characterized by an extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn2+ instead of Mg2+. In this work, the effect of Mn2+ on DNA synthesis in cell extracts from a) normal human and murine tissues, b) human tumor (uveal melanoma), and c) cultured human tumor cell lines SKOV-3 and HL-60 was tested. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in the Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota obtained in our work earlier. The obtained results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.

  19. The statistical pitfalls of the partially randomized preference design in non-blinded trials of psychological interventions.

    PubMed

    Gemmell, Isla; Dunn, Graham

    2011-03-01

    In a partially randomized preference trial (PRPT) patients with no treatment preference are allocated to groups at random, but those who express a preference receive the treatment of their choice. It has been suggested that the design can improve the external and internal validity of trials. We used computer simulation to illustrate the impact that an unmeasured confounder could have on the results and conclusions drawn from a PRPT. We generated 4000 observations ("patients") that reflected the distribution of the Beck Depression Inventory (BDI) in trials of depression. Half were randomly assigned to a randomized controlled trial (RCT) design and half were assigned to a PRPT design. In the RCT, "patients" were evenly split between treatment and control groups, whereas in the preference arm, to reflect patient choice, 87.5% of patients were allocated to the experimental treatment and 12.5% to the control. Unadjusted analyses of the PRPT data consistently overestimated the treatment effect and its standard error. This led to Type I errors when the true treatment effect was small and Type II errors when the confounder effect was large. The PRPT design is not recommended as a method of establishing an unbiased estimate of treatment effect due to the potential influence of unmeasured confounders. Copyright © 2011 John Wiley & Sons, Ltd.
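
    The simulation logic described above can be caricatured in a few lines; the confounder strength, outcome model, and choice rule below are assumptions made only to reproduce the qualitative point (an unmeasured confounder that drives both preference and outcome biases the naive PRPT contrast), not the authors' simulation settings.

    ```python
    # Toy PRPT preference-arm simulation: an unmeasured confounder influences both
    # treatment choice (about 87.5% choose treatment) and the outcome, biasing the
    # naive treated-vs-control comparison away from the true effect.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 2000
    confounder = rng.normal(size=n)                 # unmeasured prognostic factor
    true_effect = -2.0                              # true change in depression score

    choose_treatment = rng.normal(size=n) + confounder > -1.63   # ~87.5% choose treatment
    outcome = 20 + 3.0 * confounder + true_effect * choose_treatment + rng.normal(size=n)

    naive_estimate = outcome[choose_treatment].mean() - outcome[~choose_treatment].mean()
    print("true effect:", true_effect, " naive PRPT estimate:", round(naive_estimate, 2))
    ```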

  20. Demonstration of a viable quantitative theory for interplanetary type II radio bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, J. M., E-mail: jschmidt@physics.usyd.edu.au; Cairns, Iver H.

    Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors ≈ 10⁶ and ≈ 10³, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.

  1. Demonstration of a viable quantitative theory for interplanetary type II radio bursts

    NASA Astrophysics Data System (ADS)

    Schmidt, J. M.; Cairns, Iver H.

    2016-03-01

    Between 29 November and 1 December 2013 the two widely separated spacecraft STEREO A and B observed a long lasting, intermittent, type II radio burst for the extended frequency range ≈ 4 MHz to 30 kHz, including an intensification when the shock wave of the associated coronal mass ejection (CME) reached STEREO A. We demonstrate for the first time our ability to quantitatively and accurately simulate the fundamental (F) and harmonic (H) emission of type II bursts from the higher corona (near 11 solar radii) to 1 AU. Our modeling requires the combination of data-driven three-dimensional magnetohydrodynamic simulations for the CME and plasma background, carried out with the BATS-R-US code, with an analytic quantitative kinetic model for both F and H radio emission, including the electron reflection at the shock, growth of Langmuir waves and radio waves, and the radiation's propagation to an arbitrary observer. The intensities and frequencies of the observed radio emissions vary hugely by factors ≈ 10⁶ and ≈ 10³, respectively; the theoretical predictions are impressively accurate, being typically in error by less than a factor of 10 and 20%, for both STEREO A and B. We also obtain accurate predictions for the timing and characteristics of the shock and local radio onsets at STEREO A, the lack of such onsets at STEREO B, and the z-component of the magnetic field at STEREO A ahead of the shock, and in the sheath. Very strong support is provided by these multiple agreements for the theory, the efficacy of the BATS-R-US code, and the vision of using type IIs and associated data-theory iterations to predict whether a CME will impact Earth's magnetosphere and drive space weather events.

  2. Impact of Representing Model Error in a Hybrid Ensemble-Variational Data Assimilation System for Track Forecast of Tropical Cyclones over the Bay of Bengal

    NASA Astrophysics Data System (ADS)

    Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.

    2018-03-01

    Uncertainties in the numerical weather prediction models are generally not well-represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal, if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in the hybrid ensemble transform Kalman filter—three-dimensional variational (3DVAR) DA system (hybrid) in the track forecast of two tropical cyclones viz. Hudhud and Thane, formed over the Bay of Bengal, using Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system; (i) multiphysics approach, which uses different combination of cumulus, microphysics and planetary boundary layer schemes, (ii) stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies, (iii) a combination of both multiphysics and SKEB scheme. Substantial improvements are noticed in the track positions of both the cyclones, when flow-dependent ensemble covariance is used in 3DVAR framework. Explicit model error representation is found to be beneficial in treating the underdispersive ensembles. Among the model error schemes used in this study, a combination of multiphysics and SKEB schemes has outperformed the other two schemes with improved track forecast for both the tropical cyclones.

  3. Surface-roughness considerations for atmospheric correction of ocean color sensors. II: Error in the retrieved water-leaving radiance.

    PubMed

    Gordon, H R; Wang, M

    1992-07-20

    In the algorithm for the atmospheric correction of coastal zone color scanner (CZCS) imagery, it is assumed that the sea surface is flat. Simulations are carried out to assess the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct Sun glitter (either a large solar zenith angle or the sensor tilted away from the specular image of the Sun), the following conclusions appear justified: (1) the error induced by ignoring the surface roughness is ≲1 CZCS digital count for wind speeds up to approximately 17 m/s, and therefore can be ignored for this sensor; (2) the roughness-induced error is much more strongly dependent on the wind speed than on the wave shadowing, suggesting that surface effects can be adequately dealt with without precise knowledge of the shadowing; and (3) the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness, suggesting that in refining algorithms for future sensors more effort should be placed on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.

  4. Rare high-impact disease variants: properties and identifications.

    PubMed

    Park, Leeyoung; Kim, Ju Han

    2016-03-21

    Although many genome-wide association studies have been performed, the identification of disease polymorphisms remains important. It is now suspected that many rare disease variants induce the association signal of common variants in linkage disequilibrium (LD). Based on recent development of genetic models, the current study provides explanations of the existence of rare variants with high impacts and common variants with low impacts. Disease variants are neither necessary nor sufficient due to gene-gene or gene-environment interactions. A new method was developed based on theoretical aspects to identify both rare and common disease variants by their genotypes. Common disease variants were identified with relatively small odds ratios and relatively small sample sizes, except for specific situations in which the disease variants were in strong LD with a variant with a higher frequency. Rare disease variants with small impacts were difficult to identify without increasing sample sizes; however, the method was reasonably accurate for rare disease variants with high impacts. For rare variants, dominant variants generally showed better Type II error rates than recessive variants; however, the trend was reversed for common variants. Type II error rates increased in gene regions containing more than two disease variants because the more common variant, rather than both disease variants, was usually identified. The proposed method would be useful for identifying common disease variants with small impacts and rare disease variants with large impacts when disease variants have the same effects on disease presentation.

  5. Journal news

    USGS Publications Warehouse

    Conroy, M.J.; Samuel, M.D.; White, Joanne C.

    1995-01-01

    Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject H0 should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of H0 is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.

  6. Refining new-physics searches in B→Dτν with lattice QCD.

    PubMed

    Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran

    2012-08-17

    The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ⁻ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f₀(q²). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P_L(D)=0.325(4)(3).

  7. The underreporting of medication errors: A retrospective and comparative root cause analysis in an acute mental health unit over a 3-year period.

    PubMed

    Morrison, Maeve; Cope, Vicki; Murray, Melanie

    2018-05-15

    Medication errors remain a commonly reported clinical incident in health care as highlighted by the World Health Organization's focus to reduce medication-related harm. This retrospective quantitative analysis examined medication errors reported by staff using an electronic Clinical Incident Management System (CIMS) during a 3-year period from April 2014 to April 2017 at a metropolitan mental health ward in Western Australia. The aim of the project was to identify types of medication errors and the context in which they occur and to consider recourse so that medication errors can be reduced. Data were retrieved from the Clinical Incident Management System database and concerned medication incidents from categorized tiers within the system. Areas requiring improvement were identified, and the quality of the documented data captured in the database was reviewed for themes pertaining to medication errors. Content analysis provided insight into the following issues: (i) frequency of problem, (ii) when the problem was detected, and (iii) characteristics of the error (classification of drug/s, where the error occurred, what time the error occurred, what day of the week it occurred, and patient outcome). Data were compared to the state-wide results published in the Your Safety in Our Hands (2016) report. Results indicated several areas upon which quality improvement activities could be focused. These include the following: structural changes; changes to policy and practice; changes to individual responsibilities; improving workplace culture to counteract underreporting of medication errors; and improvement in safety and quality administration of medications within a mental health setting. © 2018 Australian College of Mental Health Nurses Inc.

  8. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
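
    The core of the proposed test is an ordinary likelihood ratio comparison between the standard model and the semi-nonparametric model that nests it. The sketch below shows only that generic machinery; the log-likelihood values and the number of extra parameters are hypothetical, not taken from the paper.

```python
# Generic likelihood-ratio test of a restricted model nested in a flexible one.
from scipy.stats import chi2

loglik_restricted = -1520.4   # hypothetical log-likelihood of the standard MNL
loglik_flexible = -1511.8     # hypothetical log-likelihood of the nesting semi-nonparametric model
extra_params = 4              # hypothetical number of additional shape parameters

lr_stat = 2 * (loglik_flexible - loglik_restricted)
p_value = chi2.sf(lr_stat, df=extra_params)
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.4f}")  # small p rejects the Gumbel restriction
```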

  9. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  10. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
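
    The qualitative contrast between the two error types can be reproduced in a much simpler setting than the authors' Poisson time-series model. The sketch below uses a plain linear exposure-response model with made-up parameters: classical error attenuates the slope, while Berkson error leaves it roughly unbiased.

```python
# Minimal sketch (not the authors' simulation): classical vs. Berkson
# measurement error in a linear exposure-response model.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_err = 50_000, 0.3, 0.5

# Classical error: observed exposure = true exposure + noise -> slope attenuates.
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 0.2, n)
x_obs = x_true + rng.normal(0.0, sigma_err, n)
slope_classical = np.polyfit(x_obs, y, 1)[0]

# Berkson error: true exposure = assigned exposure + noise -> slope roughly unbiased.
x_assigned = rng.normal(0.0, 1.0, n)
x_true_b = x_assigned + rng.normal(0.0, sigma_err, n)
y_b = beta * x_true_b + rng.normal(0.0, 0.2, n)
slope_berkson = np.polyfit(x_assigned, y_b, 1)[0]

print(f"true slope {beta}, classical {slope_classical:.3f}, Berkson {slope_berkson:.3f}")
```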

  11. Archie's law - a reappraisal

    NASA Astrophysics Data System (ADS)

    Glover, Paul W. J.

    2016-07-01

    When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
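
    The difference between the two forms of the law is easy to see on synthetic data. The sketch below, which assumes made-up core-plug values and a deliberate systematic porosity error, fits both the original Archie form (a forced to 1) and the Winsauer form (a free) in log-log space.

```python
# Illustrative sketch with synthetic data: Archie's law with and without the
# Winsauer et al. (1952) parameter a.
import numpy as np

rng = np.random.default_rng(1)
m_true = 2.0
porosity = rng.uniform(0.05, 0.30, 40)
F = porosity ** -m_true                      # formation factor F = phi^-m
porosity_measured = porosity + 0.01          # hypothetical systematic porosity error

# Original Archie form: log F = -m log(phi), i.e. a regression through the origin.
logF, logphi = np.log(F), np.log(porosity_measured)
m_archie = -np.sum(logF * logphi) / np.sum(logphi ** 2)

# Winsauer form: log F = log(a) - m log(phi).
slope, intercept = np.polyfit(logphi, logF, 1)
m_winsauer, a_winsauer = -slope, np.exp(intercept)

print(f"m with a forced to 1: {m_archie:.2f}; m with a free: {m_winsauer:.2f}; a: {a_winsauer:.2f}")
```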

  12. Several Modified Goodness-Of-Fit Tests for the Cauchy Distribution with Unknown Scale and Location Parameters

    DTIC Science & Technology

    1994-03-01

    levels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making a Type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of samples passing each test at those specific... α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would

  13. TH-CD-202-06: A Method for Characterizing and Validating Dynamic Lung Density Change During Quiet Respiration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dou, T; Ruan, D; Heinrich, M

    2016-06-15

    Purpose: To obtain a functional relationship that calibrates the lung tissue density change under free breathing conditions through correlating Jacobian values to the Hounsfield units. Methods: Free-breathing lung computed tomography images were acquired using a fast helical CT protocol, where 25 scans were acquired per patient. Using a state-of-the-art deformable registration algorithm, a set of the deformation vector fields (DVF) was generated to provide spatial mapping from the reference image geometry to the other free-breathing scans. These DVFs were used to generate Jacobian maps, which estimate voxelwise volume change. Subsequently, the set of 25 corresponding Jacobian and voxel intensity in Hounsfield units (HU) were collected and linear regression was performed based on the mass conservation relationship to correlate the volume change to density change. Based on the resulting fitting coefficients, the tissues were classified into parenchymal (Type I), vascular (Type II), and soft tissue (Type III) types. These coefficients modeled the voxelwise density variation during quiet breathing. The accuracy of the proposed method was assessed using mean absolute difference in HU between the CT scan intensities and the model predicted values. In addition, validation experiments employing a leave-five-out method were performed to evaluate the model accuracy. Results: The computed mean model errors were 23.30±9.54 HU, 29.31±10.67 HU, and 35.56±20.56 HU for regions I, II, and III, respectively. The cross validation experiments averaged over 100 trials had mean errors of 30.02 ± 1.67 HU over the entire lung. These mean values were comparable with the estimated CT image background noise. Conclusion: The reported validation experiment statistics confirmed the lung density modeling during free breathing. The proposed technique was general and could be applied to a wide range of problem scenarios where accurate dynamic lung density information is needed. This work was supported in part by NIH R01 CA0096679.

  14. Sustained attention deficits among HIV-positive individuals with comorbid bipolar disorder.

    PubMed

    Posada, Carolina; Moore, David J; Deutsch, Reena; Rooney, Alexandra; Gouaux, Ben; Letendre, Scott; Grant, Igor; Atkinson, J Hampton

    2012-01-01

    Difficulties with sustained attention have been found among both persons with HIV infection (HIV+) and bipolar disorder (BD). The authors examined sustained attention among 39 HIV+ individuals with BD (HIV+/BD+) and 33 HIV-infected individuals without BD (HIV+/BD-), using the Conners' Continuous Performance Test-II (CPT-II). A Global Assessment of Functioning (GAF) score was also assigned to each participant as an overall indicator of daily functioning abilities. HIV+/BD+ participants had significantly worse performance on CPT-II omission errors, hit reaction time SE (Hit RT SE), variability of SE, and perseverations than HIV+/BD- participants. When examining CPT-II performance over the six study blocks, both HIV+/BD+ and HIV+/BD- participants evidenced worse performance on scores of commission errors and reaction times as the test progressed. The authors also examined the effect of current mood state (i.e., manic, depressive, euthymic) on CPT-II performance, but no significant differences were observed across the various mood states. HIV+/BD+ participants had significantly worse GAF scores than HIV+/BD- participants, which indicates poorer overall functioning in the dually-affected group; among HIV+/BD+ persons, significant negative correlations were found between GAF scores and CPT-II omission and commission errors, detectability, and perseverations, indicating a possible relationship between decrements in sustained attention and worse daily-functioning outcomes.

  15. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
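
    For residuals that follow a double-sided exponential (Laplace) distribution, maximum likelihood reduces to minimizing the sum of absolute deviations rather than squared deviations. The following sketch illustrates that idea on synthetic data; it is not the global fitting procedure used in the paper.

```python
# Hedged sketch: maximum-likelihood straight-line fit under Laplace errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 200)
y = 1.5 * x + 3.0 + rng.laplace(0.0, 1.0, x.size)

def neg_log_likelihood(params):
    slope, intercept, log_scale = params
    scale = np.exp(log_scale)               # keeps the scale parameter positive
    resid = y - (slope * x + intercept)
    # Laplace log-likelihood: -n*log(2b) - sum(|r|)/b
    return y.size * np.log(2.0 * scale) + np.abs(resid).sum() / scale

fit = minimize(neg_log_likelihood, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")
slope_hat, intercept_hat, log_scale_hat = fit.x
print(f"slope {slope_hat:.2f}, intercept {intercept_hat:.2f}, scale {np.exp(log_scale_hat):.2f}")
```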

  16. An iterative truncation method for unbounded electromagnetic problems using varying order finite elements

    NASA Astrophysics Data System (ADS)

    Paul, Prakash

    2009-12-01

    The finite element method (FEM) is used to solve three-dimensional electromagnetic scattering and radiation problems. Finite element (FE) solutions of this kind contain two main types of error: discretization error and boundary error. Discretization error depends on the number of free parameters used to model the problem, and on how effectively these parameters are distributed throughout the problem space. To reduce the discretization error, the polynomial order of the finite elements is increased, either uniformly over the problem domain or selectively in those areas with the poorest solution quality. Boundary error arises from the condition applied to the boundary that is used to truncate the computational domain. To reduce the boundary error, an iterative absorbing boundary condition (IABC) is implemented. The IABC starts with an inexpensive boundary condition and gradually improves the quality of the boundary condition as the iteration continues. An automatic error control (AEC) is implemented to balance the two types of error. With the AEC, the boundary condition is improved when the discretization error has fallen to a low enough level to make this worth doing. The AEC has these characteristics: (i) it uses a very inexpensive truncation method initially; (ii) it allows the truncation boundary to be very close to the scatterer/radiator; (iii) it puts more computational effort on the parts of the problem domain where it is most needed; and (iv) it can provide as accurate a solution as needed depending on the computational price one is willing to pay. To further reduce the computational cost, disjoint scatterers and radiators that are relatively far from each other are bounded separately and solved using a multi-region method (MRM), which leads to savings in computational cost. A simple analytical way to decide whether the MRM or the single region method will be computationally cheaper is also described. To validate the accuracy and savings in computation time, different shaped metallic and dielectric obstacles (spheres, ogives, cube, flat plate, multi-layer slab etc.) are used for the scattering problems. For the radiation problems, waveguide excited antennas (horn antenna, waveguide with flange, microstrip patch antenna) are used. Using the AEC the peak reduction in computation time during the iteration is typically a factor of 2, compared to the IABC using the same element orders throughout. In some cases, it can be as high as a factor of 4.

  17. Patterns of Strengths and Weaknesses on the WISC-V, DAS-II, and KABC-II and Their Relationship to Students' Errors in Oral Language, Reading, Writing, Spelling, and Math

    ERIC Educational Resources Information Center

    Breaux, Kristina C.; Avitia, Maria; Koriakin, Taylor; Bray, Melissa A.; DeBiase, Emily; Courville, Troy; Pan, Xingyu; Witholt, Thomas; Grossman, Sandy

    2017-01-01

    This study investigated the relationship between specific cognitive patterns of strengths and weaknesses and the errors children make on oral language, reading, writing, spelling, and math subtests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants with scores from the KTEA-3 and either the Wechsler Intelligence…

  18. The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs

    PubMed Central

    Bridwell, David A.; Steele, Vaughn R.; Maurer, J. Michael; Kiehl, Kent A.; Calhoun, Vince D.

    2014-01-01

    Background The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck’s Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Methods Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. Results We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). Limitations These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. Conclusions These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. PMID:25451400

  19. The relationship between somatic and cognitive-affective depression symptoms and error-related ERPs.

    PubMed

    Bridwell, David A; Steele, Vaughn R; Maurer, J Michael; Kiehl, Kent A; Calhoun, Vince D

    2015-02-01

    The symptoms that contribute to the clinical diagnosis of depression likely emerge from, or are related to, underlying cognitive deficits. To understand this relationship further, we examined the relationship between self-reported somatic and cognitive-affective Beck's Depression Inventory-II (BDI-II) symptoms and aspects of cognitive control reflected in error event-related potential (ERP) responses. Task and assessment data were analyzed within 51 individuals. The group contained a broad distribution of depressive symptoms, as assessed by BDI-II scores. ERPs were collected following error responses within a go/no-go task. Individual error ERP amplitudes were estimated by conducting group independent component analysis (ICA) on the electroencephalographic (EEG) time series and analyzing the individual reconstructed source epochs. Source error amplitudes were correlated with the subset of BDI-II scores representing somatic and cognitive-affective symptoms. We demonstrate a negative relationship between somatic depression symptoms (i.e. fatigue or loss of energy) (after regressing out cognitive-affective scores, age and IQ) and the central-parietal ERP response that peaks at 359 ms. The peak amplitudes within this ERP response were not significantly related to cognitive-affective symptom severity (after regressing out the somatic symptom scores, age, and IQ). These findings were obtained within a population of female adults from a maximum-security correctional facility. Thus, additional research is required to verify that they generalize to the broad population. These results suggest that individuals with greater somatic depression symptoms demonstrate a reduced awareness of behavioral errors, and help clarify the relationship between clinical measures of self-reported depression symptoms and cognitive control. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking.

    PubMed

    Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia

    2017-01-01

    Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.

  1. Estimating the designated use attainment decision error rates of US Environmental Protection Agency's proposed numeric total phosphorus criteria for Florida, USA, colored lakes.

    PubMed

    McLaughlin, Douglas B

    2012-01-01

    The utility of numeric nutrient criteria established for certain surface waters is likely to be affected by the uncertainty that exists in the presence of a causal link between nutrient stressor variables and designated use-related biological responses in those waters. This uncertainty can be difficult to characterize, interpret, and communicate to a broad audience of environmental stakeholders. The US Environmental Protection Agency (USEPA) has developed a systematic planning process to support a variety of environmental decisions, but this process is not generally applied to the development of national or state-level numeric nutrient criteria. This article describes a method for implementing such an approach and uses it to evaluate the numeric total P criteria recently proposed by USEPA for colored lakes in Florida, USA. An empirical, log-linear relationship between geometric mean concentrations of total P (a potential stressor variable) and chlorophyll a (a nutrient-related response variable) in these lakes, which is assumed to be causal in nature, forms the basis for the analysis. The use of the geometric mean total P concentration of a lake to correctly indicate designated use status, defined in terms of a 20 µg/L geometric mean chlorophyll a threshold, is evaluated. Rates of decision errors analogous to the Type I and Type II error rates familiar in hypothesis testing, and a 3rd error rate, E(ni), referred to as the nutrient criterion-based impairment error rate, are estimated. The results show that USEPA's proposed "baseline" and "modified" nutrient criteria approach, in which data on both total P and chlorophyll a may be considered in establishing numeric nutrient criteria for a given lake within a specified range, provides a means for balancing and minimizing designated use attainment decision errors. Copyright © 2011 SETAC.
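
    The decision-error idea can be illustrated with a toy Monte Carlo built on a hypothetical log-linear TP-to-chlorophyll relationship; all coefficients, thresholds, and error labels below are assumptions for illustration, not the values estimated in the article.

```python
# Exploratory sketch: how often a TP criterion misclassifies lakes relative to a
# 20 ug/L chlorophyll-a threshold, under an assumed log-linear relationship.
import numpy as np

rng = np.random.default_rng(3)
n_lakes = 100_000
log_tp = rng.normal(np.log(30.0), 0.5, n_lakes)                 # geometric-mean TP, ug/L
log_chla = -0.5 + 1.0 * log_tp + rng.normal(0.0, 0.4, n_lakes)  # hypothetical relationship

tp_criterion, chla_threshold = 30.0, 20.0
exceeds_tp = np.exp(log_tp) > tp_criterion
impaired = np.exp(log_chla) > chla_threshold

false_exceedance = np.mean(exceeds_tp & ~impaired)   # flagged by TP but attaining the use
missed_impairment = np.mean(~exceeds_tp & impaired)  # passed on TP but actually impaired
print(f"false exceedance {false_exceedance:.2%}, missed impairment {missed_impairment:.2%}")
```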

  2. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  3. ROC analysis of the accuracy of Noncycloplegic retinoscopy, Retinomax Autorefractor, and SureSight Vision Screener for preschool vision screening.

    PubMed

    Ying, Gui-shuang; Maguire, Maureen; Quinn, Graham; Kulp, Marjean Taylor; Cyert, Lynn

    2011-12-28

    To evaluate, by receiver operating characteristic (ROC) analysis, the accuracy of three refractive error measurement instruments in detecting eye conditions among 3- to 5-year-old Head Start preschoolers and to evaluate differences in accuracy between instruments and screeners and by age of the child. Children participating in the Vision In Preschoolers (VIP) Study (n = 4040) had screening tests administered by pediatric eye care providers (phase I) or by both nurse and lay screeners (phase II). Noncycloplegic retinoscopy (NCR), the Retinomax Autorefractor (Nikon, Tokyo, Japan), and the SureSight Vision Screener (SureSight, Alpharetta, GA) were used in phase I, and Retinomax and SureSight were used in phase II. Pediatric eye care providers performed a standardized eye examination to identify amblyopia, strabismus, significant refractive error, and reduced visual acuity. The accuracy of the screening tests was summarized by the area under the ROC curve (AUC) and compared between instruments and screeners and by age group. The three screening tests had a high AUC for all categories of screening personnel. The AUC for detecting any VIP-targeted condition was 0.83 for NCR, 0.83 (phase I) to 0.88 (phase II) for Retinomax, and 0.86 (phase I) to 0.87 (phase II) for SureSight. The AUC was 0.93 to 0.95 for detecting group 1 (most severe) conditions and did not differ between instruments or screeners or by age of the child. NCR, Retinomax, and SureSight had similar and high accuracy in detecting vision disorders in preschoolers across all types of screeners and age of child, consistent with previously reported results at specificity levels of 90% and 94%.
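
    Summarizing a screener's accuracy by the area under its ROC curve takes only a few lines once per-child scores and examination outcomes are available. The data below are simulated stand-ins, not VIP study data.

```python
# Small sketch: AUC of a screening score against an examination-based diagnosis.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
condition = rng.integers(0, 2, 500)                    # 1 = targeted eye condition present
score = rng.normal(loc=1.2 * condition, scale=1.0)     # hypothetical screener output

auc = roc_auc_score(condition, score)
fpr, tpr, _ = roc_curve(condition, score)
sens_at_90_spec = tpr[np.searchsorted(fpr, 0.10)]      # approximate sensitivity at 90% specificity
print(f"AUC = {auc:.2f}, sensitivity at ~90% specificity = {sens_at_90_spec:.2f}")
```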

  4. Phase II design with sequential testing of hypotheses within each stage.

    PubMed

    Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania

    2014-01-01

    The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus H1: p > p0, with level α and with power 1 − β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 − p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation mainly among clinicians that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis [Formula: see text] versus [Formula: see text] is tested first. If this null hypothesis is rejected, the hypothesis [Formula: see text] versus [Formula: see text] is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for error levels. The optimal values for the design were found using a simulated annealing method.
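
    The exact binomial calculation that underlies such designs can be sketched for a single-stage test of one hypothesis; the response rates, sample size, and level below are illustrative assumptions, and the actual design adds sequential testing and ASN optimization.

```python
# Illustrative sketch: exact binomial cut-point and Type II error for a
# single-stage test of H0: p <= p0 against p > p0.
from scipy.stats import binom

p0, p1, n, alpha = 0.20, 0.40, 40, 0.05

# Smallest number of responses r with P(X >= r | p0) <= alpha.
r = next(k for k in range(n + 1) if binom.sf(k - 1, n, p0) <= alpha)
type_ii = binom.cdf(r - 1, n, p1)   # P(fail to reject | true response rate p1)

print(f"reject H0 if responses >= {r}; power = {1 - type_ii:.2f}, Type II error = {type_ii:.2f}")
```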

  5. Selecting a Separable Parametric Spatiotemporal Covariance Structure for Longitudinal Imaging Data

    PubMed Central

    George, Brandon; Aban, Inmaculada

    2014-01-01

    Longitudinal imaging studies allow great insight into how the structure and function of a subject’s internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures, and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on Type I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the Type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be done in practice, as well as how covariance structure choice can change inferences about fixed effects. PMID:25293361
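
    A separable spatiotemporal covariance is simply the Kronecker product of a spatial and a temporal correlation matrix. The sketch below assembles one from an exponential spatial part and an AR(1) temporal part, with made-up coordinates and parameters.

```python
# Sketch: building a separable (Kronecker) spatiotemporal covariance matrix.
import numpy as np

def exponential_cov(dists, sill=1.0, range_param=2.0):
    # Exponential spatial correlation: sill * exp(-d / range)
    return sill * np.exp(-dists / range_param)

def ar1_cov(n_times, rho=0.6):
    lags = np.abs(np.subtract.outer(np.arange(n_times), np.arange(n_times)))
    return rho ** lags

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5]])                  # 3 spatial locations
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

cov = np.kron(ar1_cov(4), exponential_cov(dists))   # (4 times x 3 locations) -> 12 x 12
print(cov.shape)
```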

  6. Monte Carlo Simulations Comparing Fisher Exact Test and Unequal Variances t Test for Analysis of Differences Between Groups in Brief Hospital Lengths of Stay.

    PubMed

    Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U

    2017-12-01

    We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) percentage of hospital LOS that are overnight. These 2 end points are suitable for when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. Unequal variances t test (Welch method) and Fisher exact test both were conservative (ie, type I error rate less than nominal level). The Wilcoxon rank sum test was included as a comparator; the type I error rates did not differ from the nominal level of 0.05 or 0.01. Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; estimated odds ratio for obtaining P < .05 with Fisher exact test versus unequal variances t test = 1.94, with 95% confidence interval, 1.31-3.01. Fisher exact test and Wilcoxon-Mann-Whitney had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS to be used as a secondary end point of economic interest, there is currently considerable interest in the planned analysis being for the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either Welch method or Wilcoxon rank sum test.
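
    The spirit of the comparison can be reproduced with a small null-hypothesis simulation; the LOS distribution, sample sizes, and simulation count below are invented for illustration and are not the resampled thoracic-surgery data used in the paper.

```python
# Rough sketch: empirical Type I error of Welch's t-test on mean LOS versus
# Fisher's exact test on the proportion of stays longer than one midnight.
import numpy as np
from scipy.stats import ttest_ind, fisher_exact

rng = np.random.default_rng(5)
n_sims, n_per_hospital = 2000, 60
rej_t = rej_f = 0

for _ in range(n_sims):
    # Null case: both hospitals draw LOS (in midnights) from the same distribution.
    a = rng.poisson(1.2, n_per_hospital)
    b = rng.poisson(1.2, n_per_hospital)
    rej_t += ttest_ind(a, b, equal_var=False).pvalue < 0.05
    table = [[(a > 1).sum(), (a <= 1).sum()],
             [(b > 1).sum(), (b <= 1).sum()]]
    rej_f += fisher_exact(table)[1] < 0.05

print(f"Type I error, Welch t: {rej_t / n_sims:.3f}; Fisher exact: {rej_f / n_sims:.3f}")
```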

  7. Identification and characterization of mutant clones with enhanced propagation rates from phage-displayed peptide libraries.

    PubMed

    Nguyen, Kieu T H; Adamkiewicz, Marta A; Hebert, Lauren E; Zygiel, Emily M; Boyle, Holly R; Martone, Christina M; Meléndez-Ríos, Carola B; Noren, Karen A; Noren, Christopher J; Hall, Marilena Fitzsimons

    2014-10-01

    A target-unrelated peptide (TUP) can arise in phage display selection experiments as a result of a propagation advantage exhibited by the phage clone displaying the peptide. We previously characterized HAIYPRH, from the M13-based Ph.D.-7 phage display library, as a propagation-related TUP resulting from a G→A mutation in the Shine-Dalgarno sequence of gene II. This mutant was shown to propagate in Escherichia coli at a dramatically faster rate than phage bearing the wild-type Shine-Dalgarno sequence. We now report 27 additional fast-propagating clones displaying 24 different peptides and carrying 14 unique mutations. Most of these mutations are found either in or upstream of the gene II Shine-Dalgarno sequence, but still within the mRNA transcript of gene II. All 27 clones propagate at significantly higher rates than normal library phage, most within experimental error of wild-type M13 propagation, suggesting that mutations arise to compensate for the reduced virulence caused by the insertion of a lacZα cassette proximal to the replication origin of the phage used to construct the library. We also describe an efficient and convenient assay to diagnose propagation-related TUPs among peptide sequences selected by phage display. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. BOP2: Bayesian optimal design for phase II clinical trials with simple and complex endpoints.

    PubMed

    Zhou, Heng; Lee, J Jack; Yuan, Ying

    2017-09-20

    We propose a flexible Bayesian optimal phase II (BOP2) design that is capable of handling simple (e.g., binary) and complicated (e.g., ordinal, nested, and co-primary) endpoints under a unified framework. We use a Dirichlet-multinomial model to accommodate different types of endpoints. At each interim, the go/no-go decision is made by evaluating a set of posterior probabilities of the events of interest, which is optimized to maximize power or minimize the number of patients under the null hypothesis. Unlike other existing Bayesian designs, the BOP2 design explicitly controls the type I error rate, thereby bridging the gap between Bayesian designs and frequentist designs. In addition, the stopping boundary of the BOP2 design can be enumerated prior to the onset of the trial. These features make the BOP2 design accessible to a wide range of users and regulatory agencies and particularly easy to implement in practice. Simulation studies show that the BOP2 design has favorable operating characteristics with higher power and lower risk of incorrectly terminating the trial than some existing Bayesian phase II designs. The software to implement the BOP2 design is freely available at www.trialdesign.org. Copyright © 2017 John Wiley & Sons, Ltd.
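
    The flavor of the interim go/no-go rule can be illustrated for a single binary endpoint with a conjugate beta posterior; the BOP2 design itself uses a Dirichlet-multinomial model and optimizes the probability cutoffs, so the numbers below are only illustrative assumptions.

```python
# Simplified sketch of a posterior-probability stopping rule for one binary endpoint.
from scipy.stats import beta

def go_decision(responses, n_treated, p_null=0.20, cutoff=0.10, prior_a=1.0, prior_b=1.0):
    # Beta posterior after observing `responses` successes out of `n_treated` patients.
    posterior = beta(prior_a + responses, prior_b + n_treated - responses)
    prob_promising = posterior.sf(p_null)        # P(response rate > p_null | data)
    return prob_promising >= cutoff, prob_promising

go, prob = go_decision(responses=3, n_treated=15)
print(f"P(rate > 0.20 | data) = {prob:.2f} -> {'continue' if go else 'stop'}")
```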

  9. The advanced receiver 2: Telemetry test results in CTA 21

    NASA Technical Reports Server (NTRS)

    Hinedi, S.; Bevan, R.; Marina, M.

    1991-01-01

    Telemetry tests with the Advanced Receiver II (ARX II) in Compatibility Test Area 21 are described. The ARX II was operated in parallel with a Block-III Receiver/baseband processor assembly combination (BLK-III/BPA) and a Block III Receiver/subcarrier demodulation assembly/symbol synchronization assembly combination (BLK-III/SDA/SSA). The telemetry simulator assembly provided the test signal for all three configurations, and the symbol signal to noise ratio as well as the symbol error rates were measured and compared. Furthermore, bit error rates were also measured by the system performance test computer for all three systems. Results indicate that the ARX-II telemetry performance is comparable and sometimes superior to the BLK-III/BPA and BLK-III/SDA/SSA combinations.

  10. Contrast sensitivity and its determinants in people with diabetes: SN-DREAMS-II, Report No 6

    PubMed Central

    Gella, L; Raman, R; Pal, S S; Ganesan, S; Sharma, T

    2017-01-01

    Purpose To assess contrast sensitivity (CS) and to elucidate the factors associated with CS among subjects with type 2 diabetes in a cross-sectional population-based study. Patients and methods Subjects were recruited from a follow-up cohort, Sankara Nethralaya Diabetic Retinopathy Epidemiology and Molecular genetics Study (SN-DREAMS II). Of 958 subjects who were followed up in SN-DREAMS II, a subset of 653 subjects was included in the analysis. All subjects underwent a comprehensive eye examination, which included CS assessment using the Pelli–Robson chart. The cross-sectional association between CS and independent variables was assessed using stepwise linear regression analysis. A P-value of <0.05 was considered statistically significant. Results The mean age of the study sample was 58.7±9.41 (44–87) years. Mean CS of the study sample was 1.32±0.20 (range: 0–1.65) log units. CS was negatively and significantly correlated with age, duration of diabetes, hemoglobin level, vibration perception threshold (VPT) value, albuminuria, best corrected visual acuity (BCVA), refractive error, total error score (TEM) of FM 100 hue test, and mean retinal sensitivity. In multiple regression analysis, after adjusting for all the related factors, CS was significantly associated with BCVA (β=−0.575; P<0.001), VPT (β=−0.003; P=0.010), severity of cataract (β=−0.018; P=0.032), diabetic retinopathy (β=−0.016; P=0.019), and age (β=−0.002; P=0.029). These factors explained about 29.3% of the variation in CS. Conclusion Among the factors evaluated, differences in BCVA were associated with the largest predicted differences in CS. This association of CS with visual acuity highlights the important role of visual assessment in type 2 diabetes. PMID:27858934

  11. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m⁻¹ and a 5 V m⁻¹ error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m⁻¹, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m⁻¹ of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.

  12. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845

  13. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

    The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, where ±f × σ_R is a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic on the equivalence test for the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
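
    The plug-in margin problem the paper addresses can be seen in a naive two-one-sided-tests (TOST) analysis against an estimated margin. Everything below (the multiplier 1.5, the simulated lot values, the pooled degrees of freedom) is an illustrative assumption, not the exact test developed in the paper.

```python
# Hedged sketch: naive TOST equivalence test against the estimated margin ±1.5*S_R.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(6)
reference = rng.normal(100.0, 4.0, 10)   # reference product lots
test = rng.normal(101.0, 4.0, 10)        # proposed biosimilar lots

margin = 1.5 * reference.std(ddof=1)     # estimated margin based on reference variability
diff = test.mean() - reference.mean()
se = np.sqrt(test.var(ddof=1) / test.size + reference.var(ddof=1) / reference.size)
df = test.size + reference.size - 2

# Two one-sided tests: equivalence is concluded only if both p-values are small.
p_lower = t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
print(f"margin ±{margin:.2f}; TOST p-values: {p_lower:.3f}, {p_upper:.3f}")
```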

  14. Testicular gonadotropin-releasing hormone II receptor (GnRHR-II) knockdown constitutively impairs diurnal testosterone secretion in the boar

    USDA-ARS?s Scientific Manuscript database

    The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are highly expressed in the testis, suggesting an important role in testis biology. Gene coding errors prevent the production of GnRH-II and GnRHR-II in many species, but both genes are functional in swine. We have demo...

  15. Analysis for nickel (3 and 4) in positive plates from nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Lewis, Harlan L.

    1994-01-01

    The NASA-Goddard procedure for destructive physical analysis (DPA) of nickel-cadmium cells contains a method for analysis of residual charged nickel as NiOOH in the positive plates at complete cell discharge, also known as nickel precharge. In the method, the Ni(III) is treated with an excess of an Fe(II) reducing agent and then back titrated with permanganate. The Ni(III) content is the difference between Fe(II) equivalents and permanganate equivalents. Problems have arisen in analysis at NAVSURFWARCENDIV, Crane because for many types of cells, particularly AA-size and some 'space-qualified' cells, zero or negative Ni(III) contents are recorded for which the manufacturer claims 3-5 percent precharge. Our approach to this problem was to reexamine the procedure for the source of error, and correct it or develop an alternative method.

  16. Entangled quantum key distribution over two free-space optical links.

    PubMed

    Erven, C; Couteau, C; Laflamme, R; Weihs, G

    2008-10-13

    We report on the first real-time implementation of a quantum key distribution (QKD) system using entangled photon pairs that are sent over two free-space optical telescope links. The entangled photon pairs are produced with a type-II spontaneous parametric down-conversion source placed in a central, potentially untrusted, location. The two free-space links cover a distance of 435 m and 1,325 m respectively, producing a total separation of 1,575 m. The system relies on passive polarization analysis units, GPS timing receivers for synchronization, and custom written software to perform the complete QKD protocol including error correction and privacy amplification. Over 6.5 hours during the night, we observed an average raw key generation rate of 565 bits/s, an average quantum bit error rate (QBER) of 4.92%, and an average secure key generation rate of 85 bits/s.

  17. Correlation and agreement between eplet mismatches calculated using serological, low-intermediate and high resolution molecular human leukocyte antigen typing methods.

    PubMed

    Fidler, Samantha; D'Orsogna, Lloyd; Irish, Ashley B; Lewis, Joshua R; Wong, Germaine; Lim, Wai H

    2018-03-02

    Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, the evaluation of measurement error was assessed using intra-class correlation to confirm the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, C) and II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients between serological and four-digit molecular typing methods were 0.969 (95% confidence intervals [95% CI] 0.960-0.975) and 0.926 (95% CI 0.899-0.944), respectively; and 0.995 (95% CI 0.994-0.996) and 0.993 (95% CI 0.991-0.995), respectively between two-digit and four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing methods, and 0% and 2% for two-digit versus four-digit molecular typing methods, respectively. In this small predominantly Caucasian population, compared with serology, there is a high level of agreement in the number of eplet mismatches calculated using two-compared to four-digit molecular HLA-typing methods, suggesting that two-digit typing may be sufficient in determining eplet mismatch load in kidney transplantation.

  18. Accuracy of the lattice Boltzmann method for describing the behavior of a gas in the continuum limit.

    PubMed

    Kataoka, Takeshi; Tsutahara, Michihisa

    2010-11-01

    The accuracy of the lattice Boltzmann method (LBM) for describing the behavior of a gas in the continuum limit is systematically investigated. The asymptotic analysis for small Knudsen numbers is carried out to derive the corresponding fluid-dynamics-type equations, and the errors of the LBM are estimated by comparing them with the correct fluid-dynamics-type equations. We discuss the following three important cases: (I) the Mach number of the flow is much smaller than the Knudsen number, (II) the Mach number is of the same order as the Knudsen number, and (III) the Mach number is finite. From the von Karman relation, the above three cases correspond to the flows of (I) small Reynolds number, (II) finite Reynolds number, and (III) large Reynolds number, respectively. The analysis is made with the information only of the fundamental properties of the lattice Boltzmann models without stepping into their detailed form. The results are therefore applicable to various lattice Boltzmann models that satisfy the fundamental properties used in the analysis.

  19. An extension to artifact-free projection overlaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jianyu, E-mail: jianyulin@hotmail.com

    2015-05-15

    Purpose: In multipinhole single photon emission computed tomography, the overlapping of projections has been used to increase sensitivity. Avoiding artifacts in the reconstructed image associated with projection overlaps (multiplexing) is a critical issue. In our previous report, two types of artifact-free projection overlaps, i.e., projection overlaps that do not lead to artifacts in the reconstructed image, were formally defined and proved, and were validated via simulations. In this work, a new proposition is introduced to extend the previously defined type-II artifact-free projection overlaps so that a broader range of artifact-free overlaps is accommodated. One practical purpose of the new extension is to design a baffle window multipinhole system with artifact-free projection overlaps. Methods: First, the extended type-II artifact-free overlap was theoretically defined and proved. The new proposition accommodates the situation where the extended type-II artifact-free projection overlaps can be produced with incorrectly reconstructed portions in the reconstructed image. Next, to validate the theory, the extended-type-II artifact-free overlaps were employed in designing the multiplexing multipinhole spiral orbit imaging systems with a baffle window. Numerical validations were performed via simulations, where the corresponding 1-pinhole nonmultiplexing reconstruction results were used as the benchmark for artifact-free reconstructions. The mean square error (MSE) was the metric used for comparisons of noise-free reconstructed images. Noisy reconstructions were also performed as part of the validations. Results: Simulation results show that for noise-free reconstructions, the MSEs of the reconstructed images of the artifact-free multiplexing systems are very similar to those of the corresponding 1-pinhole systems. No artifacts were observed in the reconstructed images. Therefore, the testing results for artifact-free multiplexing systems designed using the extended type-II artifact-free overlaps numerically validated the developed theory. Conclusions: First, the extension itself is of theoretical importance because it broadens the selection range for optimizing multiplexing multipinhole designs. Second, the extension has an immediate application: using a baffle window to design a special spiral orbit multipinhole imaging system with projection overlaps in the orbit axial direction. Such an artifact-free baffle window design makes it possible for us to image any axial portion of interest of a long object with projection overlaps to increase sensitivity.

  20. Aeronautic Instruments. Section V : Power Plant Instruments

    NASA Technical Reports Server (NTRS)

    Washburn, G E; Sylvander, R C; Mueller, E F; Wilhelm, R M; Eaton, H N; Warner, John A C

    1923-01-01

    Part 1 gives a general discussion of the uses, principles, construction, and operation of airplane tachometers. Detailed description of all available instruments, both foreign and domestic, are given. Part 2 describes methods of tests and effect of various conditions encountered in airplane flight such as change of temperature, vibration, tilting, and reduced air pressure. Part 3 describes the principal types of distance reading thermometers for aircraft engines, including an explanation of the physical principles involved in the functioning of the instruments and proper filling of the bulbs. Performance requirements and testing methods are given and a discussion of the source of error and results of tests. Part 4 gives methods of tests and calibration, also requirements of gauges of this type for the pressure measurement of the air pressure in gasoline tanks and the engine oil pressure on airplanes. Part 5 describes two types of gasoline gauges, the float type and the pressure type. Methods of testing and calibrating gasoline depth gauges are given. The Schroeder, R. A. E., and the Mark II flowmeters are described.

  1. Review of reactor pressure vessel evaluation report for Yankee Rowe Nuclear Power Station (YAEC No. 1735)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheverton, R.D.; Dickson, T.L.; Merkle, J.G.

    1992-03-01

    The Yankee Atomic Electric Company has performed an Integrated Pressurized Thermal Shock (IPTS)-type evaluation of the Yankee Rowe reactor pressure vessel in accordance with the PTS Rule (10 CFR 50.61) and US Regulatory Guide 1.154. The Oak Ridge National Laboratory (ORNL) reviewed the YAEC document and performed an independent probabilistic fracture-mechanics analysis. The review included a comparison of the Pacific Northwest Laboratory (PNL) and the ORNL probabilistic fracture-mechanics codes (VISA-II and OCA-P, respectively). The review identified minor errors and one significant difference in philosophy. Also, the two codes have a few dissimilar peripheral features. Aside from these differences, VISA-II and OCA-P are very similar and, with errors corrected and when adjusted for the difference in the treatment of fracture toughness distribution through the wall, yield essentially the same value of the conditional probability of failure. The ORNL independent evaluation indicated RT{sub NDT} values considerably greater than those corresponding to the PTS-Rule screening criteria and a frequency of failure substantially greater than that corresponding to the "primary acceptance criterion" in US Regulatory Guide 1.154. Time constraints, however, prevented as rigorous a treatment as the situation deserves. Thus, these results are very preliminary.

  3. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    PubMed

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxy methyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-Ray diffraction spectroscopy (XRD) and scanning electron microscopy (SEM). The composite was utilized for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested that the sorption data were best fitted by Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was given by imidacloprid (2000) followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination of linear regression alone cannot be used for comparing the best fitting of Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
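
    As an illustration of the non-linear fitting described above, the sketch below fits the Langmuir and Freundlich isotherms with scipy and evaluates two of the error functions (ERRSQ and ARE); the equilibrium data and starting values are hypothetical, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, Q0, b):
    """Langmuir isotherm: qe = Q0*b*Ce / (1 + b*Ce)."""
    return Q0 * b * Ce / (1.0 + b * Ce)

def freundlich(Ce, Kf, n):
    """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
    return Kf * Ce ** (1.0 / n)

# Hypothetical equilibrium data (Ce in ug/mL, qe in ug/g).
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = np.array([400.0, 700.0, 1050.0, 1350.0, 1550.0, 1650.0])

for name, model, p0 in [("Langmuir", langmuir, (2000.0, 0.5)),
                        ("Freundlich", freundlich, (500.0, 2.0))]:
    popt, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(Ce, *popt)
    errsq = np.sum(resid ** 2)                           # sum of squared errors
    are = 100.0 / len(qe) * np.sum(np.abs(resid / qe))   # average relative error (%)
    print(f"{name}: parameters = {popt}, ERRSQ = {errsq:.1f}, ARE = {are:.2f}%")
```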

  4. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.

    PubMed

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.
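
    The '3 or 3 rule' supported by the review is simple enough to state directly; the sketch below is one hedged reading of it (flag a result when completion time reaches 3 minutes or the error count reaches 3), with a hypothetical function name.

```python
def trails_b_flag(completion_seconds: float, errors: int) -> bool:
    """Return True if the '3 or 3 rule' flags the Trails B result for further
    fitness-to-drive assessment (time >= 3 minutes or >= 3 errors).
    The exact handling of boundary values is an assumption here."""
    return completion_seconds >= 180 or errors >= 3

# Hypothetical examples
print(trails_b_flag(150, 1))   # False: under both cut-offs
print(trails_b_flag(200, 0))   # True: over the 3-minute cut-off
```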

  5. 7 CFR 275.23 - Determination of State agency program performance.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r 1″ + r 2″. (ii) If FNS determines...

  6. 78 FR 39730 - Medicare Program; Notification of Closure of Teaching Hospitals and Opportunity To Apply for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-02

    ..., Medicare--Hospital Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program.... SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal... typographical error that is identified and corrected in the Correction of Errors section below. II. Summary of...

  7. Group sequential designs for stepped-wedge cluster randomised trials

    PubMed Central

    Grayling, Michael J; Wason, James MS; Mander, Adrian P

    2017-01-01

    Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
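
    For readers unfamiliar with the error-spending approach, the sketch below evaluates a Lan-DeMets O'Brien-Fleming-type alpha-spending function at equally spaced looks; this is a common choice used purely for illustration, not necessarily the spending function adopted in the paper.

```python
import numpy as np
from scipy.stats import norm

def obrien_fleming_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type error-spending function:
    alpha(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t))) for information fraction t."""
    t = np.asarray(t, dtype=float)
    z = norm.ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - norm.cdf(z / np.sqrt(t)))

# Cumulative type-I error to be spent at three equally spaced analyses
# (two interim looks plus the final analysis).
fractions = np.array([1/3, 2/3, 1.0])
cumulative = obrien_fleming_spending(fractions)
incremental = np.diff(np.concatenate(([0.0], cumulative)))
for f, c, inc in zip(fractions, cumulative, incremental):
    print(f"information fraction {f:.2f}: cumulative alpha {c:.4f}, spent this look {inc:.4f}")
```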

  8. Group sequential designs for stepped-wedge cluster randomised trials.

    PubMed

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial.

  9. Impact of Uncertainties and Errors in Converting NWS Radiosonde Hygristor Resistances to Relative Humidity

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.; Russell, Philip (Technical Monitor)

    1994-01-01

    A set of 2,600 6-second, National Weather Service soundings from NASA's FIRE-II Cirrus field experiment are used to illustrate previously known errors and new potential errors in the VIZ and SDD brand relative humidity (RH) sensors and the MicroART processing software. The entire spectrum of RH is potentially affected by at least one of these errors. (These errors occur before being converted to dew point temperature.) Corrections to the errors are discussed. Examples are given of the effect that these errors and biases may have on numerical weather prediction and radiative transfer. The figure shows the OLR calculated for the corrected and uncorrected soundings using an 18-band radiative transfer code. The OLR differences are sufficiently large to warrant consideration when validating line-by-line radiation calculations that use radiosonde data to specify the atmospheric state, or when validating satellite retrievals. In addition, a comparison of observations of RH during FIRE-II derived from GOES satellite, Raman lidar, MAPS analyses, NCAR CLASS sondes, and the NWS sondes reveals disagreement in the RH distribution and underlines our lack of an understanding of the climatology of water vapor.

  11. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    PubMed

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.

  12. Input selection and performance optimization of ANN-based streamflow forecasts in the drought-prone Murray Darling Basin region using IIS and MODWT algorithm

    NASA Astrophysics Data System (ADS)

    Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek

    2017-11-01

    Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow, and it is then benchmarked against M5 Tree model. To develop hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs comprised of statistically significant lagged combination of streamflow water level, are supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as the potential model inputs. To establish robust forecasting models, iterative input selection (IIS) algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT) applied on the IIS-selected variables. This resolved the frequencies contained in predictor data while constructing a wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) model. Forecasting ability of IIS-W-ANN is evaluated via correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square-error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While ANN models are seen to outperform M5 Tree executed for all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as stipulated by the better performance of the IIS coupled (ANN and M5 Tree) models relative to the models without IIS. When IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree are seen to attain significantly accurate performance relative to their standalone counterparts. Importantly, IIS-W-ANN model accuracy outweighs IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors, RRMSE = (15.65-21.00) % and MAPE = (14.79-20.78) %. Distinct geographic signature is evident where the most and least accurately forecasted streamflow data is attained for the Gwydir and Darling River, respectively. Conclusively, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and subsequently, its integration with MODWT resulting in enhanced performance of the models applied in streamflow forecasting.
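
    The skill metrics quoted above (r, WI, ENS, RMSE, MAE) follow standard definitions; a minimal sketch computing them on hypothetical observed and forecast water levels is given below (not the paper's code or data).

```python
import numpy as np

def forecast_metrics(obs, sim):
    """Common streamflow skill metrics: correlation (r), Willmott's index (WI),
    Nash-Sutcliffe efficiency (ENS), RMSE and MAE."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    r = np.corrcoef(obs, sim)[0, 1]
    wi = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ens = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    return {"r": r, "WI": wi, "ENS": ens, "RMSE": rmse, "MAE": mae}

# Hypothetical monthly water levels (m): observed vs forecast
obs = [0.8, 1.1, 0.9, 1.4, 1.0, 0.7]
sim = [0.9, 1.0, 1.0, 1.3, 1.1, 0.8]
print(forecast_metrics(obs, sim))
```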

  13. Erratum: The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions

    NASA Astrophysics Data System (ADS)

    Pickett, Brian K.; Cassen, Patrick; Durisen, Richard H.; Link, Robert

    2000-02-01

    In the paper "The Effects of Thermal Energetics on Three-dimensional Hydrodynamic Instabilities in Massive Protostellar Disks. II. High-Resolution and Adiabatic Evolutions" by Brian K. Pickett, Patrick Cassen, Richard H. Durisen, and Robert Link (ApJ, 529, 1034 [2000]), the wrong version of Figure 10 was published as a result of an error at the Press. The correct version of Figure 10 appears below. The Press sincerely regrets this error.

  14. Medical error identification, disclosure, and reporting: do emergency medicine provider groups differ?

    PubMed

    Hobgood, Cherri; Weiner, Bryan; Tamayo-Sarver, Joshua H

    2006-04-01

    To determine if the three types of emergency medicine providers--physicians, nurses, and out-of-hospital providers (emergency medical technicians [EMTs])--differ in their identification, disclosure, and reporting of medical error. A convenience sample of providers in an academic emergency department evaluated ten case vignettes that represented two error types (medication and cognitive) and three severity levels. For each vignette, providers were asked the following: 1) Is this an error? 2) Would you tell the patient? 3) Would you report this to a hospital committee? To assess differences in identification, disclosure, and reporting by provider type, error type, and error severity, the authors constructed three-way tables with the nonparametric Somers' D clustered on participant. To assess the contribution of disclosure instruction and environmental variables, fixed-effects regression stratified by provider type was used. Of the 116 providers who were eligible, 103 (40 physicians, 26 nurses, and 35 EMTs) had complete data. Physicians were more likely to classify an event as an error (78%) than nurses (71%; p = 0.04) or EMTs (68%; p < 0.01). Nurses were less likely to disclose an error to the patient (59%) than physicians (71%; p = 0.04). Physicians were the least likely to report the error (54%) compared with nurses (68%; p = 0.02) or EMTs (78%; p < 0.01). For all provider and error types, identification, disclosure, and reporting increased with increasing severity. Improving patient safety hinges on the ability of health care providers to accurately identify, disclose, and report medical errors. Interventions must account for differences in error identification, disclosure, and reporting by provider type.

  15. Evaluation of kinetic uncertainty in numerical models of petroleum generation

    USGS Publications Warehouse

    Peters, K.E.; Walters, C.C.; Mankiewicz, P.J.

    2006-01-01

    Oil-prone marine petroleum source rocks contain type I or type II kerogen having Rock-Eval pyrolysis hydrogen indices greater than 600 or 300-600 mg hydrocarbon/g total organic carbon (HI, mg HC/g TOC), respectively. Samples from 29 marine source rocks worldwide that contain mainly type II kerogen (HI = 230-786 mg HC/g TOC) were subjected to open-system programmed pyrolysis to determine the activation energy distributions for petroleum generation. Assuming a burial heating rate of 1°C/m.y. for each measured activation energy distribution, the calculated average temperature for 50% fractional conversion of the kerogen in the samples to petroleum is approximately 136 ± 7°C, but the range spans about 30°C (~121-151°C). Fifty-two outcrop samples of thermally immature Jurassic Oxford Clay Formation were collected from five locations in the United Kingdom to determine the variations of kinetic response for one source rock unit. The samples contain mainly type I or type II kerogens (HI = 230-774 mg HC/g TOC). At a heating rate of 1°C/m.y., the calculated temperatures for 50% fractional conversion of the Oxford Clay kerogens to petroleum differ by as much as 23°C (127-150°C). The data indicate that kerogen type, as defined by hydrogen index, is not systematically linked to kinetic response, and that default kinetics for the thermal decomposition of type I or type II kerogen can introduce unacceptable errors into numerical simulations. Furthermore, custom kinetics based on one or a few samples may be inadequate to account for variations in organofacies within a source rock. We propose three methods to evaluate the uncertainty contributed by kerogen kinetics to numerical simulations: (1) use the average kinetic distribution for multiple samples of source rock and the standard deviation for each activation energy in that distribution; (2) use source rock kinetics determined at several locations to describe different parts of the study area; and (3) use a weighted-average method that combines kinetics for samples from different locations in the source rock unit by giving the activation energy distribution for each sample a weight proportional to its Rock-Eval pyrolysis S2 yield (hydrocarbons generated by pyrolytic degradation of organic matter). Copyright © 2006. The American Association of Petroleum Geologists. All rights reserved.
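
    Method (3) above amounts to an S2-weighted average of the samples' activation-energy distributions; the sketch below illustrates this with hypothetical distributions and yields, not values from the paper.

```python
import numpy as np

def s2_weighted_kinetics(distributions, s2_yields):
    """Combine discrete activation-energy distributions (fractions on a common
    energy grid, each summing to 1) with weights proportional to Rock-Eval S2 yield."""
    distributions = np.asarray(distributions, float)   # shape: (n_samples, n_energies)
    weights = np.asarray(s2_yields, float)
    weights = weights / weights.sum()
    combined = weights @ distributions
    return combined / combined.sum()                   # renormalise

# Hypothetical example: two samples on a 50-56 kcal/mol grid
energies = np.array([50, 52, 54, 56])                  # kcal/mol
sample_a = np.array([0.10, 0.60, 0.25, 0.05])
sample_b = np.array([0.05, 0.30, 0.50, 0.15])
combined = s2_weighted_kinetics([sample_a, sample_b], s2_yields=[12.0, 4.0])
print(dict(zip(energies.tolist(), np.round(combined, 3))))
```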

  16. Consideration of species community composition in statistical ...

    EPA Pesticide Factsheets

    Diseases are increasing in marine ecosystems, and these increases have been attributed to a number of environmental factors including climate change, pollution, and overfishing. However, many studies pool disease prevalence into taxonomic groups, disregarding host species composition when comparing sites or assessing environmental impacts on patterns of disease presence. We used simulated data under a known environmental effect to assess the ability of standard statistical methods (binomial and linear regression, ANOVA) to detect a significant environmental effect on pooled disease prevalence with varying species abundance distributions and relative susceptibilities to disease. When one species was more susceptible to a disease and both species only partially overlapped in their distributions, models tended to produce a greater number of false positives (Type I error). Differences in disease risk between regions or along an environmental gradient tended to be underestimated, or even in the wrong direction, when highly susceptible taxa had reduced abundances in impacted sites, a situation likely to be common in nature. Including relative abundance as an additional variable in regressions improved model accuracy, but tended to be conservative, producing more false negatives (Type II error) when species abundance was strongly correlated with the environmental effect. Investigators should be cautious of underlying assumptions of species similarity in susceptib

  17. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... errors or omissions that occurred before the publication of these regulations. Any reasonable method used... February 24, 1994, will be considered proper, provided that the method is consistent with the rules of...

  18. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oboh, I., E-mail: innocentoboh@uniuyo.edu.ng; Aluyor, E.; Audu, T.

    The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R²), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
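
    As an illustration of the non-linear kinetic fitting and error functions described above, the sketch below fits the pseudo-second-order model with scipy and reports ERRSQ and ARE; the contact-time data and starting values are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order kinetics: qt = (k2*qe^2*t) / (1 + k2*qe*t)."""
    return (k2 * qe ** 2 * t) / (1.0 + k2 * qe * t)

# Hypothetical contact-time data (t in min, qt in mg/g).
t = np.array([5, 10, 20, 40, 60, 120], dtype=float)
qt = np.array([3.1, 4.8, 6.5, 7.8, 8.3, 8.9])

popt, _ = curve_fit(pseudo_second_order, t, qt, p0=(9.0, 0.01))
resid = qt - pseudo_second_order(t, *popt)
errsq = np.sum(resid ** 2)                             # ERRSQ
are = 100.0 / len(qt) * np.sum(np.abs(resid / qt))     # average relative error (%)
print(f"qe = {popt[0]:.2f} mg/g, k2 = {popt[1]:.4f} g/(mg*min), "
      f"ERRSQ = {errsq:.3f}, ARE = {are:.2f}%")
```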

  19. Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica

    NASA Astrophysics Data System (ADS)

    Oboh, I.; Aluyor, E.; Audu, T.

    2015-03-01

    The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.

  20. Quantum biological channel modeling and capacity calculation.

    PubMed

    Djordjevic, Ivan B

    2012-12-10

    Quantum mechanics has an important role in photosynthesis, magnetoreception, and evolution. There were many attempts in an effort to explain the structure of genetic code and transfer of information from DNA to protein by using the concepts of quantum mechanics. The existing biological quantum channel models are not sufficiently general to incorporate all relevant contributions responsible for imperfect protein synthesis. Moreover, the problem of determination of quantum biological channel capacity is still an open problem. To solve these problems, we construct the operator-sum representation of biological channel based on codon basekets (basis vectors), and determine the quantum channel model suitable for study of the quantum biological channel capacity and beyond. The transcription process, DNA point mutations, insertions, deletions, and translation are interpreted as the quantum noise processes. The various types of quantum errors are classified into several broad categories: (i) storage errors that occur in DNA itself as it represents an imperfect storage of genetic information, (ii) replication errors introduced during DNA replication process, (iii) transcription errors introduced during DNA to mRNA transcription, and (iv) translation errors introduced during the translation process. By using this model, we determine the biological quantum channel capacity and compare it against corresponding classical biological channel capacity. We demonstrate that the quantum biological channel capacity is higher than the classical one, for a coherent quantum channel model, suggesting that quantum effects have an important role in biological systems. The proposed model is of crucial importance towards future study of quantum DNA error correction, developing quantum mechanical model of aging, developing the quantum mechanical models for tumors/cancer, and study of intracellular dynamics in general.
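
    The operator-sum (Kraus) representation mentioned above can be illustrated in a few lines for a generic single-qubit channel; this is only a toy example of the formalism, not the paper's codon-basket construction.

```python
import numpy as np

# Generic operator-sum (Kraus) representation of a noisy channel:
#   rho' = sum_k K_k rho K_k^dagger,  with  sum_k K_k^dagger K_k = I.
# Illustrated with a single-qubit bit-flip channel; the biological channel in
# the paper acts on codon basekets instead, which is not reproduced here.
p = 0.1                                    # flip probability (hypothetical)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus = [np.sqrt(1.0 - p) * I, np.sqrt(p) * X]

# Completeness check: sum_k K_k^dagger K_k should equal the identity.
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, I)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # input state |0><0|
rho_out = sum(K @ rho @ K.conj().T for K in kraus)
print(rho_out)                             # diag(0.9, 0.1): 10% flipped to |1>
```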

  1. Test-retest reliability and minimal detectable change of the Beck Depression Inventory and the Taiwan Geriatric Depression Scale in patients with Parkinson's disease

    PubMed Central

    Huang, Sheau-Ling; Hsieh, Ching-Lin; Wu, Ruey-Meei

    2017-01-01

    Background The Beck Depression Inventory II (BDI-II) and the Taiwan Geriatric Depression Scale (TGDS) are self-report scales used for assessing depression in patients with Parkinson’s disease (PD) and geriatric people. The minimal detectable change (MDC) represents the least amount of change that indicates real difference (i.e., beyond random measurement error) for a single subject. Our aim was to investigate the test-retest reliability and MDC of the BDI-II and the TGDS in people with PD. Methods Seventy patients were recruited from special clinics for movement disorders at a medical center. The patients’ mean age was 67.7 years, and 63.0% of the patients were male. All patients were assessed with the BDI-II and the TGDS twice, 2 weeks apart. We used the intraclass correlation coefficient (ICC) to determine the reliability between test and retest. We calculated the MDC based on standard error of measurement. The MDC% was calculated (i.e., by dividing the MDC by the possible maximal score of the measure). Results The test-retest reliabilities of the BDI-II/TGDS were high (ICC = 0.86/0.89). The MDCs (MDC%s) of the BDI-II and TGDS were 8.7 (13.8%) and 5.4 points (18.0%), respectively. Both measures had acceptable to nearly excellent random measurement errors. Conclusions The test-retest reliabilities of the BDI-II and the TGDS are high. The MDCs of both measures are acceptable to nearly excellent in people with PD. These findings imply that the BDI-II and the TGDS are suitable for use in a research context and in clinical settings to detect real change in a single subject. PMID:28945776
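
    The MDC values above follow from the standard error of measurement in the usual way (SEM = SD·sqrt(1 − ICC), MDC95 = 1.96·sqrt(2)·SEM, MDC% = MDC95/maximum score); the sketch below reproduces figures of the same magnitude using hypothetical baseline standard deviations, which are not values reported in the study.

```python
import math

def mdc_from_icc(sd_baseline: float, icc: float, max_score: float):
    """Minimal detectable change at the 95% confidence level from test-retest data:
    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM; MDC% = MDC95 / max score."""
    sem = sd_baseline * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return mdc95, 100.0 * mdc95 / max_score

# Illustrative only: the standard deviations below are assumptions, the ICCs and
# maximum scores (63 for the BDI-II, 30 for the TGDS) match the abstract.
for name, sd, icc, max_score in [("BDI-II", 8.4, 0.86, 63), ("TGDS", 5.8, 0.89, 30)]:
    mdc, mdc_pct = mdc_from_icc(sd, icc, max_score)
    print(f"{name}: MDC95 = {mdc:.1f} points ({mdc_pct:.1f}% of maximum score)")
```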

  2. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  3. Lack of evidence for allelic association between personality traits and the dopamine D4 receptor gene polymorphisms.

    PubMed

    Jönsson, E G; Nöthen, M M; Gustavsson, J P; Neidt, H; Brené, S; Tylec, A; Propping, P; Sedvall, G C

    1997-05-01

    Personality traits in human subjects have shown considerable heritable components. Recently, two research groups reported associations between dopamine D4 receptor genotypes and the personality trait known as novelty seeking. This study was an attempt to replicate these findings. Three different exonic dopamine D4 receptor polymorphisms were genotyped in 126 healthy Swedish subjects. Personality traits of the subjects were assessed with the Karolinska Scales of Personality. Although there was a tendency in the direction hypothesized, no significant association between genotype constellations and personality traits was found. The previously reported association between dopamine D4 receptor alleles and novelty seeking was not replicated. Possible reasons for this include differences in personality inventories, ethnicity, and type I or type II errors.

  4. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  5. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  6. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  7. Near-IR period-luminosity relations for pulsating stars in ω Centauri (NGC 5139)

    NASA Astrophysics Data System (ADS)

    Navarrete, C.; Catelan, M.; Contreras Ramos, R.; Alonso-García, J.; Gran, F.; Dékány, I.; Minniti, D.

    2017-08-01

    Aims: The globular cluster ω Centauri (NGC 5139) hosts hundreds of pulsating variable stars of different types, thus representing a treasure trove for studies of their corresponding period-luminosity (PL) relations. Our goal in this study is to obtain the PL relations for RR Lyrae and SX Phoenicis stars in the field of the cluster, based on high-quality, well-sampled light curves in the near-infrared (IR). Methods: Observations were carried out using the VISTA InfraRed CAMera (VIRCAM) mounted on the Visible and Infrared Survey Telescope for Astronomy (VISTA). A total of 42 epochs in J and 100 epochs in KS were obtained, spanning 352 days. Point-spread function photometry was performed using DoPhot and DAOPHOT crowded-field photometry packages in the outer and inner regions of the cluster, respectively. Results: Based on the comprehensive catalog of near-IR light curves thus secured, PL relations were obtained for the different types of pulsators in the cluster, both in the J and KS bands. This includes the first PL relations in the near-IR for fundamental-mode SX Phoenicis stars. The near-IR magnitudes and periods of Type II Cepheids and RR Lyrae stars were used to derive an updated true distance modulus to the cluster, with a resulting value of (m - M)0 = 13.708 ± 0.035 ± 0.10 mag, where the error bars correspond to the adopted statistical and systematic errors, respectively. Adding the errors in quadrature, this is equivalent to a heliocentric distance of 5.52 ± 0.27 kpc. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, with the VISTA telescope (project ID 087.D-0472, PI R. Angeloni).
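
    The heliocentric distance quoted above follows from the true distance modulus via d = 10^(μ/5 + 1) pc; a minimal sketch, including the quadrature sum of the statistical and systematic errors, is given below.

```python
import math

def distance_from_modulus(mu, mu_err):
    """Distance (kpc) from a true distance modulus, d = 10**(mu/5 + 1) pc,
    with a first-order propagated uncertainty."""
    d_pc = 10.0 ** (mu / 5.0 + 1.0)
    d_err_pc = d_pc * math.log(10.0) / 5.0 * mu_err
    return d_pc / 1e3, d_err_pc / 1e3

mu = 13.708
mu_err = math.hypot(0.035, 0.10)            # statistical and systematic errors in quadrature
d, d_err = distance_from_modulus(mu, mu_err)
print(f"d = {d:.2f} +/- {d_err:.2f} kpc")   # ~5.52 +/- 0.27 kpc, as quoted above
```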

  8. The initial masses of the red supergiant progenitors to Type II supernovae

    NASA Astrophysics Data System (ADS)

    Davies, Ben; Beasor, Emma R.

    2018-02-01

    There are a growing number of nearby supernovae (SNe) for which the progenitor star is detected in archival pre-explosion imaging. From these images it is possible to measure the progenitor's brightness a few years before explosion, and ultimately estimate its initial mass. Previous work has shown that II-P and II-L SNe have red supergiant (RSG) progenitors, and that the range of initial masses for these progenitors seems to be limited to ≲ 17 M⊙. This is in contrast with the cut-off of 25-30 M⊙ predicted by evolutionary models, a result that is termed the `red supergiant problem'. Here we investigate one particular source of systematic error present in converting pre-explosion photometry into an initial mass, namely that of the bolometric correction (BC) used to convert a single-band flux into a bolometric luminosity. We show, using star clusters, that RSGs evolve to later spectral types as they approach SN, which in turn causes the BC to become larger. Failure to account for this results in a systematic underestimate of a star's luminosity, and hence its initial mass. Using our empirically motivated BCs we reappraise the II-P and II-L SNe that have their progenitors detected in pre-explosion imaging. Fitting an initial mass function to these updated masses results in an increased upper mass cut-off of Mhi = 19.0^{+2.5}_{-1.3} M⊙, with a 95 per cent upper confidence limit of <27 M⊙. Accounting for finite sample size effects and systematic uncertainties in the mass-luminosity relationship raises the cut-off to Mhi = 25 M⊙ (<33 M⊙, 95 per cent confidence). We therefore conclude that there is currently no strong evidence for `missing' high-mass progenitors to core-collapse SNe.
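
    The role of the BC is to turn a single-band magnitude into a bolometric luminosity; the sketch below shows the standard conversion under the convention M_bol = M_band + BC (sign conventions vary, and this is an assumption rather than the paper's exact prescription) with hypothetical photometry, illustrating that a 0.3 mag change in the adopted BC moves log(L/Lsun) by 0.12 dex.

```python
M_BOL_SUN = 4.74  # adopted solar bolometric magnitude (zero-point assumption)

def log_luminosity(app_mag, distance_modulus, bc, extinction=0.0):
    """log10(L/Lsun) from a single-band apparent magnitude:
    M_band = m - mu - A;  M_bol = M_band + BC;  log L = -0.4 * (M_bol - M_bol_sun)."""
    m_bol = (app_mag - distance_modulus - extinction) + bc
    return -0.4 * (m_bol - M_BOL_SUN)

# Hypothetical progenitor photometry (magnitude, distance modulus and BC values
# are illustrative only, not taken from the paper).
for bc in (2.7, 3.0):
    logL = log_luminosity(app_mag=20.0, distance_modulus=29.0, bc=bc)
    print(f"BC = {bc:.1f} mag  ->  log(L/Lsun) = {logL:.2f}")
```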

  9. A simple and effective figure caption detection system for old-style documents

    NASA Astrophysics Data System (ADS)

    Liu, Zongyi; Zhou, Hanning

    2011-01-01

    Identifying figure captions has wide applications in producing high quality e-books such as kindle books or ipad books. In this paper, we present a rule-based system to detect horizontal figure captions in old-style documents. Our algorithm consists of three steps: (i) segment images into regions of different types such as text and figures, (ii) search the best caption region candidate based on heuristic rules such as region alignments and distances, and (iii) expand caption regions identified in step (ii) with its neighboring text-regions in order to correct oversegmentation errors. We test our algorithm using 81 images collected from old-style books, with each image containing at least one figure area. We show that the approach is able to correctly detect figure captions from images with different layouts, and we also measure its performances in terms of both precision rate and recall rate.

  10. Analysis of error type and frequency in apraxia of speech among Portuguese speakers.

    PubMed

    Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo

    2010-01-01

    Most studies characterizing errors in the speech of patients with apraxia involve English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency, respectively. Omission type errors were one of the most commonly occurring whereas addition errors were infrequent. These findings differed to those reported in English speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and the difference in the structure of Portuguese language to English in terms of syllable onset complexity and effect on motor control. The frequency of omission and addition errors observed differed to the frequency reported for speakers of English.

  11. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) system with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors such as the numeric inputting methods in human computer interaction (HCI) produce different error rates and types, but has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors of prescription, as well as categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting methods (numeric row in the main keyboard vs. numeric keypad) and urgency levels (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types were significantly higher than that of the previous research. Among numbers 3, 8, and 9, which were the less common digits used in prescription, the error rate was higher, which was a great risk to patient safety. Urgency played a more important role in CPOE numeric typing error-making than typing skills and typing habits. It was recommended that inputting with the numeric keypad had lower error rates in urgent situation. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Selecting a separable parametric spatiotemporal covariance structure for longitudinal imaging data.

    PubMed

    George, Brandon; Aban, Inmaculada

    2015-01-15

    Longitudinal imaging studies allow great insight into how the structure and function of a subject's internal anatomy changes over time. Unfortunately, the analysis of longitudinal imaging data is complicated by inherent spatial and temporal correlation: the temporal from the repeated measures and the spatial from the outcomes of interest being observed at multiple points in a patient's body. We propose the use of a linear model with a separable parametric spatiotemporal error structure for the analysis of repeated imaging data. The model makes use of spatial (exponential, spherical, and Matérn) and temporal (compound symmetric, autoregressive-1, Toeplitz, and unstructured) parametric correlation functions. A simulation study, inspired by a longitudinal cardiac imaging study on mitral regurgitation patients, compared different information criteria for selecting a particular separable parametric spatiotemporal correlation structure as well as the effects on types I and II error rates for inference on fixed effects when the specified model is incorrect. Information criteria were found to be highly accurate at choosing between separable parametric spatiotemporal correlation structures. Misspecification of the covariance structure was found to have the ability to inflate the type I error or have an overly conservative test size, which corresponded to decreased power. An example with clinical data is given illustrating how the covariance structure procedure can be performed in practice, as well as how covariance structure choice can change inferences about fixed effects. Copyright © 2014 John Wiley & Sons, Ltd.
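
    A separable spatiotemporal covariance is the Kronecker product of a temporal and a spatial correlation matrix scaled by a common variance; the sketch below builds one from an exponential spatial and an AR(1) temporal component (two of the choices listed above), with hypothetical locations, visit count and parameter values.

```python
import numpy as np

def exponential_corr(dists, range_param):
    """Exponential spatial correlation: rho(d) = exp(-d / range)."""
    return np.exp(-np.asarray(dists, float) / range_param)

def ar1_corr(n_times, rho):
    """AR(1) temporal correlation: rho(|i-j|) = rho**|i-j|."""
    idx = np.arange(n_times)
    return rho ** np.abs(np.subtract.outer(idx, idx))

# Hypothetical setup: 3 spatial locations with pairwise distances, 4 visits.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
R_space = exponential_corr(dists, range_param=1.5)
R_time = ar1_corr(4, rho=0.6)

sigma2 = 2.0                                   # common error variance (assumed)
cov = sigma2 * np.kron(R_time, R_space)        # separable spatiotemporal covariance
print(cov.shape)                               # (12, 12): time-major ordering assumed
```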

  13. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness of fit tests identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, a shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global method when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
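
    As a rough illustration of fitting a GEV to annual-maximum depths, the sketch below uses scipy's maximum-likelihood fit rather than the L-moment estimation used in the regional analysis; the simulated depths and parameter values are hypothetical, not the Cyprus data.

```python
from scipy.stats import genextreme

# Hypothetical annual-maximum 60-min rainfall depths (mm), simulated from a GEV.
# In scipy's convention the shape parameter c corresponds to -k in the usual
# hydrological notation, so c < 0 gives the heavy-tailed GEV Type II.
depths = genextreme.rvs(-0.15, loc=20.0, scale=6.0, size=30, random_state=1)

# Maximum-likelihood GEV fit (illustrative substitute for the L-moment fit).
c_hat, loc_hat, scale_hat = genextreme.fit(depths)
print(f"shape c = {c_hat:.3f}, location = {loc_hat:.1f} mm, scale = {scale_hat:.1f} mm")

# 50-year return level = quantile with non-exceedance probability 1 - 1/50.
print(f"50-year depth ~ {genextreme.ppf(1 - 1/50, c_hat, loc_hat, scale_hat):.1f} mm")
```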

  14. Reducing visual deficits caused by refractive errors in school and preschool children: results of a pilot school program in the Andean region of Apurimac, Peru.

    PubMed

    Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Angel; Kohler, Johannes

    2014-01-01

    Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design: A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤ 6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. A total sample of 364 children aged 3-11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research.

  15. Is neonatal neurological damage in the delivery room avoidable? Experience of 33 levels I and II maternity units of a French perinatal network.

    PubMed

    Dupuis, O; Dupont, C; Gaucherand, P; Rudigoz, R-C; Fernandez, M P; Peigne, E; Labaune, J M

    2007-09-01

    To determine the frequency of avoidable neonatal neurological damage. We carried out a retrospective study from January 1st to December 31st 2003, including all children transferred from a level I or II maternity unit for suspected neurological damage (SND). Only cases confirmed by a persistent abnormality on clinical examination, EEG, transfontanelle ultrasound scan, CT scan or cerebral MRI were retained. Each case was studied in detail by an expert committee and classified as "avoidable", "unavoidable" or "of indeterminate avoidability." The management of "avoidable" cases was analysed to identify potentially avoidable factors (PAFs): not taking into account a major risk factor (PAF1), diagnostic errors (PAF2), suboptimal decision to delivery interval (PAF3) and mechanical complications (PAF4). In total, 77 children were transferred for SND; two cases were excluded (inaccessible medical files). Forty of the 75 cases of SND included were confirmed: 29 were "avoidable", 8 were "unavoidable" and 3 were "of indeterminate avoidability". Analysis of the 29 avoidable cases identified 39 PAFs: 18 PAF1, 5 PAF2, 10 PAF3 and 6 PAF4. Five had no classifiable PAF (0 death), 11 children had one type of PAF (one death), 11 children had two types of PAF (3 deaths), 2 had three types of PAF (2 deaths). Three quarters of the confirmed cases of neurological damage occurring in levels I and II maternity units of the Aurore network in 2003 were avoidable. Five out of six cases resulting in early death involved several potentially avoidable factors.

  16. Design of the Detector II: A CMOS Gate Array for the Study of Concurrent Error Detection Techniques.

    DTIC Science & Technology

    1987-07-01

    detection schemes and temporary failures. The circuit consists of six different adders with concurrent error detection schemes. The error detection schemes are simple duplication, duplication with functional dual implementation, duplication with different implementations, and two-rail encoding.

  17. Types of diagnostic errors in neurological emergencies in the emergency department.

    PubMed

    Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V

    2015-02-01

    Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.

  18. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
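
    As a rough illustration of the effect described above, the following Python sketch (not the GAW19 pipeline; the sample size, allele frequencies, trait distributions and simulation count are arbitrary choices) estimates the empirical type I error of simple linear regression when a trait that is independent of a rare SNV is tested under the null.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def type1_rate(maf, trait="gamma", n=500, n_sim=2000, alpha=0.01):
            """Empirical type I error of regressing a trait on genotype dosage
            when the trait is independent of the SNV (null hypothesis true)."""
            pvals = []
            for _ in range(n_sim):
                # additive 0/1/2 genotype under Hardy-Weinberg proportions
                g = rng.binomial(2, maf, size=n)
                if g.std() == 0:
                    continue  # monomorphic draw; no test possible
                y = rng.gamma(1.0, 2.0, n) if trait == "gamma" else rng.normal(size=n)
                pvals.append(stats.linregress(g, y).pvalue)
            return float(np.mean(np.asarray(pvals) < alpha))

        for maf in (0.30, 0.05, 0.01):
            print(maf, type1_rate(maf, "normal"), type1_rate(maf, "gamma"))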

  19. A comparison of 1D analytical model and 3D finite element analysis with experiments for a rosen-type piezoelectric transformer.

    PubMed

    Boukazouha, F; Poulin-Vittrant, G; Tran-Huu-Hue, L P; Bavencoffe, M; Boubenider, F; Rguiti, M; Lethiecq, M

    2015-07-01

    This article is dedicated to the study of Piezoelectric Transformers (PTs), which offer promising solutions to the increasing need for integrated power electronics modules within autonomous systems. The advantages offered by such transformers include: immunity to electromagnetic disturbances; ease of miniaturisation, for example using conventional microfabrication processes; and enhanced performance in terms of voltage gain and power efficiency. Central to the adequate description of such transformers is the need for complex analytical modeling tools, especially if one is attempting to include combined contributions due to (i) mechanical phenomena owing to the different propagation modes at the primary and secondary sides of the PT; and (ii) electrical phenomena such as the voltage gain and power efficiency, which depend on the electrical load. The present work demonstrates an original one-dimensional (1D) analytical model dedicated to a Rosen-type PT, and simulation results are successively compared against those of a three-dimensional (3D) Finite Element Analysis (COMSOL Multiphysics software) and experimental results. The Rosen-type PT studied here is based on a single layer of soft PZT (P191) with dimensions 18 mm × 3 mm × 1.5 mm, operated at the second harmonic of 176 kHz. Detailed simulation and experimental results show that the presented 1D model predicts experimental measurements to within less than 10% error on the voltage gain at the second and third resonance frequency modes. Adjustment of the analytical model parameters is found to decrease errors relative to the experimental voltage gain to within 1%, while a 2.5% error on the output admittance magnitude at the second resonance mode was obtained. Relying on the single assumption of one-dimensionality, the present analytical model appears to be a useful tool for Rosen-type PT design and behavior understanding. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Comparison of slope and height profiles for flat synchrotron x-ray mirrors measured with a long trace profiler and a Fizeau interferometer.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, J.; Assoufid, L.; Macrander, A.

    2007-01-01

    Long trace profilers (LTPs) have been used at many synchrotron radiation laboratories worldwide for over a decade to measure surface slope profiles of long grazing-incidence x-ray mirrors. Phase measuring interferometers (PMIs) of the Fizeau type, on the other hand, are being used by most mirror manufacturers to accomplish the same task. However, large mirrors whose dimensions exceed the aperture of the Fizeau interferometer require measurements to be carried out at grazing incidence, and aspheric optics require the use of a null lens. While an LTP provides a direct measurement of 1D slope profiles, PMIs measure area height profiles from which the slope can be obtained by a differentiation algorithm. Measurements of the two types of instruments have been found by us to be in good agreement, but to our knowledge there is no published work directly comparing the two instruments. This paper documents that comparison. We measured two different nominally flat mirrors with both the LTP in operation at the Advanced Photon Source (a type-II LTP) and a Fizeau-type PMI interferometer (Wyko model 6000). One mirror was 500 mm long and made of Zerodur, and the other mirror was 350 mm long and made of silicon. Slope error results with these instruments agree almost exactly (3.11 ± 0.15 µrad for the LTP, and 3.11 ± 0.02 µrad for the Fizeau PMI interferometer) for the medium-quality Zerodur mirror with 3 µrad rms nominal slope error. A significant difference was observed with the much higher quality silicon mirror. For the Si mirror, the slope error is 0.39 ± 0.08 µrad from LTP measurements but 0.35 ± 0.01 µrad from PMI interferometer measurements. The standard deviations show that the Fizeau PMI interferometer has much better measurement repeatability.
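
    The differentiation step mentioned above can be illustrated with a short Python sketch; the mirror length, sampling step and synthetic height profile below are invented for illustration and do not reproduce either instrument's data.

        import numpy as np

        # synthetic 1D height profile of a nominally flat mirror (illustrative values)
        L = 0.5                       # mirror length in metres
        x = np.arange(0.0, L, 1e-3)   # 1 mm sampling step
        height = (5e-9 * np.sin(2 * np.pi * x / 0.1)
                  + 1e-9 * np.random.default_rng(0).normal(size=x.size))   # metres

        # PMI-style result: differentiate the height profile to get slope (radians)
        slope_from_height = np.gradient(height, x)

        # rms slope error in microradians, analogous to the figures quoted above
        rms_slope_urad = 1e6 * np.sqrt(np.mean(
            (slope_from_height - slope_from_height.mean()) ** 2))
        print(f"rms slope error: {rms_slope_urad:.2f} microrad")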

  1. Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution.

    DTIC Science & Technology

    1980-05-27

    Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution, by Gary M. Hieftje and Gilbert R. Haugen. Prepared for publication in Analytical and Clinical Chemistry, vol. 3, D. M. Hercules, G. M. Hieftje, L. R. Snyder, and M. A. Evenson, eds., Plenum Press, N.Y., 1978.

  2. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive

    PubMed Central

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Background Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Methods Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Results Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. Conclusions There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the ‘3 or 3 rule’). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores. PMID:23983828

  3. Exploring the Phenotype of Phonological Reading Disability as a Function of the Phonological Deficit Severity: Evidence from the Error Analysis Paradigm in Arabic

    ERIC Educational Resources Information Center

    Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid

    2014-01-01

    The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…

  4. Constraining the variation of the fine-structure constant with observations of narrow quasar absorption lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Songaila, A.; Cowie, L. L., E-mail: acowie@ifa.hawaii.edu

    2014-10-01

    The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines, and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10^-5, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than assigned statistical errors even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (-0.59 ± 0.55) × 10^-5 in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10^-5, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (-0.47 ± 0.53) × 10^-5. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (-0.01 ± 0.26) × 10^-5. We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of unambiguously detecting variation in α using the MM method.

  5. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, taken from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, the type of error that students made was operational error. In addition, for the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors, and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  6. Strategic planning to reduce medical errors: Part I--diagnosis.

    PubMed

    Waldman, J Deane; Smith, Howard L

    2012-01-01

    Despite extensive dialogue and a continuing stream of proposed medical practice revisions, medical errors and adverse impacts persist. Connectivity of vital elements is often underestimated or not fully understood. This paper analyzes medical errors from a systems dynamics viewpoint (Part I). Our analysis suggests in Part II that the most fruitful strategies for dissolving medical errors include facilitating physician learning, educating patients about appropriate expectations surrounding treatment regimens, and creating "systematic" patient protections rather than depending on (nonexistent) perfect providers.

  7. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement, within 1%, between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).

  8. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors were the second most frequent type of medication error, after prescribing errors, but the latter were often intercepted; hence, administration errors were more likely to reach the patients. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observations of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol can be developed and implemented.
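
    The quoted error rate and interval can be approximated from the raw counts with a simple normal-approximation (Wald) interval; this Python sketch shows the arithmetic only, and the original authors' exact interval method is not stated in the abstract.

        import math

        def prop_ci(k, n, z=1.96):
            """Normal-approximation (Wald) confidence interval for a proportion."""
            p = k / n
            se = math.sqrt(p * (1 - p) / n)
            return p, p - z * se, p + z * se

        p, lo, hi = prop_ci(127, 1118)      # administrations with any error
        print(f"error rate {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")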

  9. Currency crisis indication by using ensembles of support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee

    2014-07-01

    Many methods have been tried in the analysis of currency crises, but not all of them provide accurate indications. This paper introduces an ensemble of classifiers using Support Vector Machines, which has not previously been applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performance is measured using percentage accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristic (ROC) curve, and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis in terms of a range of standard measures for comparing the performance of classifiers.
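
    A minimal sketch of the kind of comparison described above, using scikit-learn on synthetic data (the real 27-country, 12-indicator data set is not available here); the bagged-SVM ensemble and the 0.5 decision threshold are illustrative choices, and older scikit-learn versions name the BaggingClassifier argument base_estimator rather than estimator.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import BaggingClassifier
        from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # synthetic stand-in for a crisis data set with 12 indicators
        X, y = make_classification(n_samples=600, n_features=12,
                                   weights=[0.8, 0.2], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)

        single = SVC(kernel="rbf", probability=True, random_state=0)
        ensemble = BaggingClassifier(estimator=single, n_estimators=25, random_state=0)

        for name, clf in [("single SVM", single), ("SVM ensemble", ensemble)]:
            clf.fit(X_tr, y_tr)
            prob = clf.predict_proba(X_te)[:, 1]
            pred = (prob >= 0.5).astype(int)
            tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
            print(name,
                  "acc=%.3f" % accuracy_score(y_te, pred),
                  "rmse=%.3f" % np.sqrt(np.mean((prob - y_te) ** 2)),
                  "auc=%.3f" % roc_auc_score(y_te, prob),
                  "type II error=%.3f" % (fn / (fn + tp)))   # missed crises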

  10. The interval testing procedure: A general framework for inference in functional data analysis.

    PubMed

    Pini, Alessia; Vantini, Simone

    2016-09-01

    We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, or equality of the mean function of a functional population to a reference. The ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated with each basis component, by means of a new strategy proposed here. We define a new type of error control, the interval-wise control of the family-wise error rate, particularly suited for functional data. We show that the ITP provides such control. A simulation study comparing the ITP with other testing procedures is reported. The ITP is then applied to the analysis of hemodynamic features involved in cerebral aneurysm pathology. The ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.

  11. Chronic shin splints. Classification and management of medial tibial stress syndrome.

    PubMed

    Detmer, D E

    1986-01-01

    A clinical classification and treatment programme has been developed for chronic medial tibial stress syndrome. Medial tibial stress syndrome has been reported to be either tibial stress fracture or microfracture, tibial periostitis, or distal deep posterior chronic compartment syndrome. Three chronic types exist and may coexist: type I (tibial microfracture, bone stress reaction or cortical fracture); type II (periostalgia from chronic avulsion of the periosteum at the periosteal-fascial junction); and type III (chronic compartment syndrome). Type I disease is treated nonoperatively. Operations for resistant types II and III medial tibial stress syndrome were performed in 41 patients. Bilaterality was common (type II, 50%; type III, 88%). Seven had coexistent type II/III; one had type I/II. Preoperative symptoms averaged 24 months in type II, 6 months in type III, and 33 months in types II/III. Mean age was 22 years (15 to 51). Resting compartment pressures were normal in type II (mean 12 mm Hg) and elevated in type III and type II/III (mean 23 mm Hg). Type II and type II/III patients received fasciotomy plus periosteal cauterisation. Type III patients had fasciotomy only. All procedures were performed on an outpatient basis using local anaesthesia. Follow-up was complete and averaged 6 months (2 to 14 months). Improved performance was as follows: type II, 93%; type III, 100%; type II/III, 86%. Complete cures were as follows: type II, 78%; type III, 75%; and type II/III, 57%. This experience suggests that with precise diagnosis and treatment involving minimal risk and cost, the athlete has a reasonable chance of return to full activity.

  12. Patient-centered clinical trials.

    PubMed

    Chaudhuri, Shomesh E; Ho, Martin P; Irony, Telba; Sheldon, Murray; Lo, Andrew W

    2018-02-01

    We apply Bayesian decision analysis (BDA) to incorporate patient preferences in the regulatory approval process for new therapies. By assigning weights to type I and type II errors based on patient preferences, the significance level (α) and power (1-β) of a randomized clinical trial (RCT) for a new therapy can be optimized to maximize the value to current and future patients and, consequently, to public health. We find that for weight-loss devices, potentially effective low-risk treatments have optimal αs larger than the traditional one-sided significance level of 5%, whereas potentially less effective and riskier treatments have optimal αs below 5%. Moreover, the optimal RCT design, including trial size, varies with the risk aversion and time-to-access preferences and the medical need of the target population. Copyright © 2017 Elsevier Ltd. All rights reserved.
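
    The core idea, trading off weighted type I and type II errors to pick a significance level, can be sketched in Python for a one-sided two-arm z-test; the weights, effect size and sample size below are placeholders, not the paper's device-specific values.

        import numpy as np
        from scipy.stats import norm

        def optimal_alpha(delta, sigma, n_per_arm, w_typeI, w_typeII,
                          grid=np.linspace(1e-4, 0.2, 2000)):
            """Pick the one-sided significance level minimising a weighted sum of
            type I and type II error rates for a two-arm z-test (illustrative)."""
            se = sigma * np.sqrt(2.0 / n_per_arm)          # SE of the mean difference
            power = norm.cdf(delta / se - norm.ppf(1 - grid))
            loss = w_typeI * grid + w_typeII * (1 - power)
            best = np.argmin(loss)
            return grid[best], power[best]

        # risk-tolerant population (type II errors weighted heavily) vs a risk-averse one
        print(optimal_alpha(delta=0.3, sigma=1.0, n_per_arm=100, w_typeI=1, w_typeII=3))
        print(optimal_alpha(delta=0.3, sigma=1.0, n_per_arm=100, w_typeI=3, w_typeII=1))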

  13. The problem of measurement model misspecification in behavioral and organizational research and some recommended solutions.

    PubMed

    MacKenzie, Scott B; Podsakoff, Philip M; Jarvis, Cheryl Burke

    2005-07-01

    The purpose of this study was to review the distinction between formative- and reflective-indicator measurement models, articulate a set of criteria for deciding whether measures are formative or reflective, illustrate some commonly researched constructs that have formative indicators, empirically test the effects of measurement model misspecification using a Monte Carlo simulation, and recommend new scale development procedures for latent constructs with formative indicators. Results of the Monte Carlo simulation indicated that measurement model misspecification can inflate unstandardized structural parameter estimates by as much as 400% or deflate them by as much as 80% and lead to Type I or Type II errors of inference, depending on whether the exogenous or the endogenous latent construct is misspecified. Implications of this research are discussed. Copyright 2005 APA, all rights reserved.

  14. Use of genetically engineered swine to elucidate testis function in the boar

    USDA-ARS?s Scientific Manuscript database

    The second mammalian GnRH isoform (GnRH-II) and its specific receptor (GnRHR-II) are abundant within the testis, suggesting a critical role. Gene coding errors prevent their production in many species, but both genes are functional in swine. We have demonstrated that GnRHR-II localizes to porcine Le...

  15. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...

  16. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...

  17. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...

  18. 26 CFR 1.42-13 - Rules necessary and appropriate; housing credit agencies' correction of administrative errors and...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... this paragraph (b)(2) include the following— (i) A mathematical error; (ii) An entry on a document that... intended to form Partnership Y to finance the project. After receiving the reservation letter and prior to...

  19. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1982-01-01

    The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears it is more desirable to shim the pinion axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting accuracy and manufacture is most crucial for the gear, and less so for the pinion.

  20. Correlation and agreement between eplet mismatches calculated using serological, low-intermediate and high resolution molecular human leukocyte antigen typing methods

    PubMed Central

    Fidler, Samantha; D’Orsogna, Lloyd; Irish, Ashley B.; Lewis, Joshua R.; Wong, Germaine; Lim, Wai H.

    2018-01-01

    Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, the evaluation of measurement error was assessed using intra-class correlation to confirm the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, -C) and II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients between serological and four-digit molecular typing methods were 0.969 (95% confidence intervals [95% CI] 0.960–0.975) and 0.926 (95% CI 0.899–0.944), respectively; and 0.995 (95% CI 0.994–0.996) and 0.993 (95% CI 0.991–0.995), respectively, between two-digit and four-digit molecular typing methods. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing methods, and 0% and 2% for two-digit versus four-digit molecular typing methods, respectively. In this small predominantly Caucasian population, compared with serology, there is a high level of agreement in the number of eplet mismatches calculated using two-digit compared to four-digit molecular HLA-typing methods, suggesting that two-digit typing may be sufficient in determining eplet mismatch load in kidney transplantation. PMID:29568344

  1. Error Analysis of Indonesian Junior High School Student in Solving Space and Shape Content PISA Problem Using Newman Procedure

    NASA Astrophysics Data System (ADS)

    Sumule, U.; Amin, S. M.; Fuad, Y.

    2018-01-01

    This study aims to determine the types and causes of errors, as well as the efforts attempted to overcome the mistakes, made by junior high school students in completing PISA space and shape content. Two subjects were selected based on the results of a mathematical ability test: the students with the most errors who were nevertheless able to communicate orally and in writing. The two selected subjects then worked on the PISA ability test questions and were interviewed to find out the type and cause of each error, and were then given scaffolding based on the type of mistake made. The results of this study showed that the types of error the students made were comprehension and transformation errors. The causes were that students were not able to identify the keywords in the question, write down what is known or given, or specify formulas or devise a plan. To overcome these errors, students were given scaffolding. The scaffolding given to overcome comprehension errors consisted of reviewing and restructuring, while the scaffolding given to overcome transformation errors consisted of reviewing, restructuring, explaining, and developing representational tools. Teachers are advised to use scaffolding to resolve errors so that students are able to avoid them.

  2. Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.

    PubMed

    Wu, Xue; Wu, Changxu

    2017-01-01

    This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. Error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in Numpad typing, while the '2', '3', and '0' keys produced a higher incidence of errors in typing on the main keyboard digit line. Errors categorized as omission and substitution were more prevalent than transposition and intrusion.

  3. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    PubMed

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  4. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

    Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length - implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.

  5. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  6. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  7. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  8. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  9. Attitude errors arising from antenna/satellite altitude errors - Recognition and reduction

    NASA Technical Reports Server (NTRS)

    Godbey, T. W.; Lambert, R.; Milano, G.

    1972-01-01

    A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.

  10. Subtypes of the Type II Pit Pattern Reflect Distinct Molecular Subclasses in the Serrated Neoplastic Pathway.

    PubMed

    Aoki, Hironori; Yamamoto, Eiichiro; Yamano, Hiro-O; Sugai, Tamotsu; Kimura, Tomoaki; Tanaka, Yoshihito; Matsushita, Hiro-O; Yoshikawa, Kenjiro; Takagi, Ryo; Harada, Eiji; Nakaoka, Michiko; Yoshida, Yuko; Harada, Taku; Sudo, Gota; Eizuka, Makoto; Yorozu, Akira; Kitajima, Hiroshi; Niinuma, Takeshi; Kai, Masahiro; Nojima, Masanori; Suzuki, Hiromu; Nakase, Hiroshi

    2018-03-15

    Colorectal serrated lesions (SLs) are important premalignant lesions whose clinical and biological features are not fully understood. We aimed to establish accurate colonoscopic diagnosis and treatment of SLs through evaluation of associations among the morphological, pathological, and molecular characteristics of SLs. A total of 388 premalignant and 18 malignant colorectal lesions were studied. Using magnifying colonoscopy, microsurface structures were assessed based on Kudo's pit pattern classification system, and the Type II pit pattern was subcategorized into classical Type II, Type II-Open (Type II-O) and Type II-Long (Type II-L). BRAF/KRAS mutations and DNA methylation of CpG island methylator phenotype (CIMP) markers (MINT1, -2, -12, -31, p16, and MLH1) were analyzed through pyrosequencing. Type II-O was tightly associated with sessile serrated adenoma/polyps (SSA/Ps) with BRAF mutation and CIMP-high. Most lesions with simple Type II or Type II-L were hyperplastic polyps, while mixtures of Type II or Type II-L plus more advanced pit patterns (III/IV) were characteristic of traditional serrated adenomas (TSAs). Type II-positive TSAs frequently exhibited BRAF mutation and CIMP-low, while Type II-L-positive TSAs were tightly associated with KRAS mutation and CIMP-low. Analysis of lesions containing both premalignant and cancerous components suggested Type II-L-positive TSAs may develop into KRAS-mutated/CIMP-low/microsatellite stable cancers, while Type II-O-positive SSA/Ps develop into BRAF-mutated/CIMP-high/microsatellite unstable cancers. These results suggest that Type II subtypes reflect distinct molecular subclasses in the serrated neoplasia pathway and that they could be useful hallmarks for identifying SLs at high risk of developing into CRC.

  11. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems, which often leads to errors. These errors can be analyzed for where they occur and their type. This research is a descriptive qualitative study which aims to describe the locations, types, and causes of the errors made by a slow learner in an inclusive junior high school class in solving fraction problems. The subject of this research is one slow-learning seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. The data collection methods used in this study were written tasks and semi-structured interviews. The collected data were analyzed by Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.

  12. Pre-cue Fronto-Occipital Alpha Phase and Distributed Cortical Oscillations Predict Failures of Cognitive Control

    PubMed Central

    Hamm, Jordan P.; Dyckman, Kara A.; McDowell, Jennifer E.; Clementz, Brett A.

    2012-01-01

    Cognitive control is required for correct performance on antisaccade tasks, including the ability to inhibit an externally driven ocular motor response (a saccade to a peripheral stimulus) in favor of an internally driven ocular motor goal (a saccade directed away from a peripheral stimulus). Healthy humans occasionally produce errors during antisaccade tasks, but the mechanisms associated with such failures of cognitive control are uncertain. Most research on cognitive control failures focuses on post-stimulus processing, although a growing body of literature highlights a role of intrinsic brain activity in perceptual and cognitive performance. The current investigation used dense array electroencephalography and distributed source analyses to examine brain oscillations across a wide frequency bandwidth in the period prior to antisaccade cue onset. Results highlight four important aspects of ongoing and preparatory brain activations that differentiate error from correct antisaccade trials: (i) ongoing oscillatory beta (20–30 Hz) power in anterior cingulate prior to trial initiation (lower for error trials), (ii) instantaneous phase of ongoing alpha-theta (7 Hz) in frontal and occipital cortices immediately before trial initiation (opposite between trial types), (iii) gamma power (35–60 Hz) in posterior parietal cortex 100 ms prior to cue onset (greater for error trials), and (iv) phase locking of alpha (5–12 Hz) in parietal and occipital cortices immediately prior to cue onset (lower for error trials). These findings extend recently reported effects of pre-trial alpha phase on perception to cognitive control processes, and help identify the cortical generators of such phase effects. PMID:22593071

  13. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with respect to the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction for all angles is essential. The new super-global distortion correction consists of a model to continuously correct II distortion not only at each location in the image but for every rotational angle of the C-arm. Calibration bead images were acquired with a standard C-arm in 9 in. II mode. The super-global (SG) model is obtained from the single-plane global correction of the selected calibration images with a given sampling angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the accuracy dependence of the SG model on various factors, such as the single-plane global fitting order, SG order, and angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10° sampling interval was sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated, and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to a lack of calculational efficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.
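
    A compact Python sketch of the single-plane idea, fitting a polynomial map from distorted to true bead coordinates by least squares; the synthetic bead grid, distortion terms and fit order are illustrative and do not reproduce the paper's fifth-order global or sixth-order super-global model.

        import numpy as np

        def poly_terms(x, y, order):
            """Design matrix with all monomials x**i * y**j, i + j <= order."""
            cols = [x**i * y**j for i in range(order + 1)
                                for j in range(order + 1 - i)]
            return np.column_stack(cols)

        rng = np.random.default_rng(0)

        # synthetic calibration bead grid (true positions) and a distorted copy
        gx, gy = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
        xt, yt = gx.ravel(), gy.ravel()
        r2 = xt**2 + yt**2
        xd = xt * (1 + 0.05 * r2) + 0.01 * yt + 0.002 * rng.normal(size=xt.size)
        yd = yt * (1 + 0.05 * r2) - 0.01 * xt + 0.002 * rng.normal(size=yt.size)

        # least-squares polynomial map from distorted to true coordinates
        A = poly_terms(xd, yd, order=5)
        cx, *_ = np.linalg.lstsq(A, xt, rcond=None)
        cy, *_ = np.linalg.lstsq(A, yt, rcond=None)

        residual = np.hypot(A @ cx - xt, A @ cy - yt)
        print("rms residual (grid units):", np.sqrt(np.mean(residual**2)))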

  14. Didn't You Run the Spell Checker? Effects of Type of Spelling Error and Use of a Spell Checker on Perceptions of the Author

    ERIC Educational Resources Information Center

    Figueredo, Lauren; Varnhagen, Connie K.

    2005-01-01

    We investigated expectations regarding a writer's responsibility to proofread text for spelling errors when using a word processor. Undergraduate students read an essay and completed a questionnaire regarding their perceptions of the author and the quality of the essay. They then manipulated type of spelling error (no error, homophone error,…

  15. Land cover mapping of Greater Mesoamerica using MODIS data

    USGS Publications Warehouse

    Giri, Chandra; Jenkins, Clinton N.

    2005-01-01

    A new land cover database of Greater Mesoamerica has been prepared using moderate resolution imaging spectroradiometer (MODIS, 500 m resolution) satellite data. Daily surface reflectance MODIS data and a suite of ancillary data were used in preparing the database by employing a decision tree classification approach. The new land cover data are an improvement over traditional advanced very high resolution radiometer (AVHRR) based land cover data in terms of both spatial and thematic details. The dominant land cover type in Greater Mesoamerica is forest (39%), followed by shrubland (30%) and cropland (22%). Country analysis shows forest as the dominant land cover type in Belize (62%), Costa Rica (52%), Guatemala (53%), Honduras (56%), Nicaragua (53%), and Panama (48%), cropland as the dominant land cover type in El Salvador (60.5%), and shrubland as the dominant land cover type in Mexico (37%). A three-step approach was used to assess the quality of the classified land cover data: (i) qualitative assessment provided good insight in identifying and correcting gross errors; (ii) correlation analysis of MODIS- and Landsat-derived land cover data revealed strong positive association for forest (r2 = 0.88), shrubland (r2 = 0.75), and cropland (r2 = 0.97) but weak positive association for grassland (r2 = 0.26); and (iii) an error matrix generated using unseen training data provided an overall accuracy of 77.3% with a Kappa coefficient of 0.73608. Overall, MODIS 500 m data and the methodology used were found to be quite useful for broad-scale land cover mapping of Greater Mesoamerica.
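
    The overall accuracy and Kappa coefficient reported above come from an error (confusion) matrix; the Python sketch below shows the standard computation on a made-up four-class matrix, not the study's data.

        import numpy as np

        def accuracy_and_kappa(conf):
            """Overall accuracy and Cohen's kappa from a square error matrix
            (rows = reference classes, columns = mapped classes)."""
            conf = np.asarray(conf, dtype=float)
            n = conf.sum()
            po = np.trace(conf) / n                          # observed agreement
            pe = (conf.sum(0) * conf.sum(1)).sum() / n**2    # chance agreement
            return po, (po - pe) / (1 - pe)

        # illustrative 4-class matrix (forest, shrubland, cropland, grassland)
        m = [[120, 10, 5, 3],
             [12, 90, 8, 6],
             [4, 7, 110, 2],
             [6, 9, 5, 40]]
        print(accuracy_and_kappa(m))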

  16. Factorial versus multi-arm multi-stage designs for clinical trials with multiple treatments.

    PubMed

    Jaki, Thomas; Vasileiou, Despina

    2017-02-20

    When several treatments are available for evaluation in a clinical trial, different design options are available. We compare multi-arm multi-stage with factorial designs, and in particular, we consider a 2 × 2 factorial design, where groups of patients will take treatment A, treatment B, both, or neither. We investigate the performance and characteristics of both types of designs under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics to test the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of the allocation ratios on the critical value and sample size requirements for a target power. We also study how the possibility of an interaction between the two treatments A and B affects type I and type II errors when testing for significance of each of the treatment effects. We present both simulation results and a case study on an osteoarthritis clinical trial. We find that in an optimal factorial design, in terms of minimising the associated critical value, the corresponding allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially large losses in power in factorial designs for moderate deviations from the study design assumptions, and little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
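
    The sensitivity of factorial main-effect tests to an interaction can be illustrated with a small Python simulation; the effect sizes, cell sizes and the simple collapsed t-test below are illustrative simplifications of the designs compared in the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        def rejection_rate(eff_a, eff_b, interaction, n_per_cell=50,
                           n_sim=2000, alpha=0.05):
            """Rate at which the main-effect test for A rejects in a 2x2 factorial,
            collapsing over B (simple t-test on the A margin)."""
            rejections = 0
            for _ in range(n_sim):
                cells = {}
                for a in (0, 1):
                    for b in (0, 1):
                        mu = a * eff_a + b * eff_b + a * b * interaction
                        cells[a, b] = rng.normal(mu, 1.0, n_per_cell)
                y_a1 = np.concatenate([cells[1, 0], cells[1, 1]])
                y_a0 = np.concatenate([cells[0, 0], cells[0, 1]])
                rejections += stats.ttest_ind(y_a1, y_a0).pvalue < alpha
            return rejections / n_sim

        print("type I error, no effects:  ", rejection_rate(0.0, 0.0, 0.0))
        print("power, additive effects:   ", rejection_rate(0.3, 0.3, 0.0))
        print("power, with interaction:   ", rejection_rate(0.3, 0.3, -0.3))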

  17. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, the models and frameworks used to understand these new types of errors, the monitoring of such errors, and the methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  18. Economic Value of Improved Accuracy for Self-Monitoring of Blood Glucose Devices for Type 1 and Type 2 Diabetes in England.

    PubMed

    McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D

    2018-04-01

    The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and separately, changes in hypoglycemia. Markov cohort simulation estimated clinical and economic outcomes. A SMBG device with 8.4% error and strip price of £0.30 (exceeding accuracy requirements by International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (accuracy meeting ISO 15197:2013/EN ISO 15197:2015) and price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264 668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
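
    The cost-effectiveness comparison rests on the incremental cost-effectiveness ratio; a minimal Python sketch with placeholder lifetime costs and QALYs (not the modelled English values) is:

        def icer(cost_new, qaly_new, cost_old, qaly_old):
            """Incremental cost-effectiveness ratio in cost per QALY gained.
            Returns None when the new option dominates (cheaper and more effective)."""
            d_cost = cost_new - cost_old
            d_qaly = qaly_new - qaly_old
            if d_cost <= 0 and d_qaly >= 0:
                return None          # dominant: cost-saving and more effective
            return d_cost / d_qaly   # assumes d_qaly is non-zero for simplicity

        # placeholder lifetime figures for an 8.4%-error vs a 15%-error meter
        print(icer(cost_new=30500, qaly_new=11.93, cost_old=30200, qaly_old=11.83))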

  19. Mimicry by asx- and ST-turns of the four main types of beta-turn in proteins.

    PubMed

    Duddy, William J; Nissink, J Willem M; Allen, Frank H; Milner-White, E James

    2004-11-01

    Hydrogen-bonded beta-turns in proteins occur in four categories: type I (the most common), type II, type II', and type I'. Asx-turns resemble beta-turns, in that both have an NH. . .OC hydrogen bond forming a ring of 10 atoms. Serine and threonine side chains also commonly form hydrogen-bonded turns, here called ST-turns. Asx-turns and ST-turns can be categorized into four classes, based on side chain rotamers and the conformation of the central turn residue, which are geometrically equivalent to the four types of beta-turns. We propose asx- and ST-turns be named using the type I, II, I', and II' beta-turn nomenclature. Using this, the frequency of occurrence of both asx- and ST-turns is: type II' > type I > type II > type I', whereas for beta-turns it is type I > type II > type I' > type II'. Almost all type II asx-turns occur as a recently described three residue feature named an asx-nest.

  20. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
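
    A scaled-down Monte Carlo in the spirit of the study (far fewer scenarios, arbitrary effect sizes, linear regression only) shows how median-splitting a confounder can inflate the type I error of the exposure effect; the sketch below is illustrative, not a reproduction of the 9600-simulation design.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        def type1_error(categorize, n=500, n_sim=1000, alpha=0.05):
            """Type I error for the exposure effect when a continuous confounder is
            adjusted for either continuously or as a median split (null exposure)."""
            hits = 0
            for _ in range(n_sim):
                c = rng.normal(size=n)                    # confounder
                x = 0.7 * c + rng.normal(size=n)          # exposure depends on c only
                y = 0.7 * c + rng.normal(size=n)          # outcome depends on c only
                c_adj = (c > np.median(c)).astype(float) if categorize else c
                X = sm.add_constant(np.column_stack([x, c_adj]))
                p = sm.OLS(y, X).fit().pvalues[1]         # p-value for the exposure
                hits += p < alpha
            return hits / n_sim

        print("continuous adjustment  :", type1_error(False))
        print("median-split adjustment:", type1_error(True))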

  1. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
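
    Two of the quantities discussed, the experiment-wise type I error rate across independent tests and the power of a two-sample t-test for a given standardized effect size, can be computed with standard formulas; the sample sizes and test counts in this Python sketch are illustrative, not those of the reviewed papers.

        from scipy import stats

        def experimentwise_error(n_tests, alpha=0.05):
            """Probability of at least one false positive across independent tests."""
            return 1 - (1 - alpha) ** n_tests

        def t_test_power(d, n_per_group, alpha=0.05):
            """Approximate power of a two-sided two-sample t-test for effect size d."""
            df = 2 * n_per_group - 2
            nc = d * (n_per_group / 2) ** 0.5           # noncentrality parameter
            t_crit = stats.t.ppf(1 - alpha / 2, df)
            return 1 - stats.nct.cdf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

        print(experimentwise_error(31))                  # roughly 0.80 with 31 tests
        print(t_test_power(0.2, 50), t_test_power(0.5, 50), t_test_power(0.8, 50))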

  2. Reducing visual deficits caused by refractive errors in school and preschool children: results of a pilot school program in the Andean region of Apurimac, Peru

    PubMed Central

    Latorre-Arteaga, Sergio; Gil-González, Diana; Enciso, Olga; Phelan, Aoife; García-Muñoz, Ángel; Kohler, Johannes

    2014-01-01

    Background Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results A total sample of 364 children aged 3–11 was screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research. PMID:24560253
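
    The specificity and positive predictive value reported here come from comparing teacher referrals against the clinical examination; the sketch below shows that calculation on hypothetical counts (not the study's raw data).

      def screening_metrics(tp, fp, tn, fn):
          """Specificity and positive predictive value of a screening test vs. a reference exam."""
          specificity = tn / (tn + fp)
          ppv = tp / (tp + fp)
          return specificity, ppv

      # Hypothetical counts: 13 true referrals, 9 false referrals, 120 correctly passed, 2 missed.
      print(screening_metrics(tp=13, fp=9, tn=120, fn=2))   # ≈ (0.93, 0.59)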

  3. AQMEII3 evaluation of regional NA/EU simulations and analysis of scale, boundary conditions and emissions error-dependence

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  4. Evaluation and error apportionment of an ensemble of atmospheric chemistry transport modeling systems: multivariable temporal and spatial breakdown

    EPA Science Inventory

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) hel...

  5. Neuropsychological analysis of a typewriting disturbance following cerebral damage.

    PubMed

    Boyle, M; Canter, G J

    1987-01-01

    Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.

  6. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    NASA Astrophysics Data System (ADS)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
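
    As a side note on the girth-4 condition mentioned above, a length-4 cycle exists in a binary parity-check matrix exactly when two rows share ones in two or more columns. The sketch below checks that condition on toy matrices; it operates on expanded binary matrices, not directly on QC-LDPC base or exponent matrices, and the example matrices are invented.

      import numpy as np

      def has_girth4(H):
          """True iff the Tanner graph of binary parity-check matrix H contains a length-4 cycle,
          i.e. some pair of rows overlaps in two or more columns."""
          H = np.asarray(H, dtype=int)
          overlap = H @ H.T                 # pairwise row overlaps
          np.fill_diagonal(overlap, 0)
          return bool((overlap >= 2).any())

      H_bad = [[1, 1, 0, 0],
               [1, 1, 1, 0],
               [0, 0, 1, 1]]
      H_ok  = [[1, 1, 0, 0],
               [0, 1, 1, 0],
               [0, 0, 1, 1]]
      print(has_girth4(H_bad), has_girth4(H_ok))   # True, False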

  7. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    In this study, we assessed the explicit reporting of medical errors in the electronic record. We looked for cases in which the provider explicitly stated that he or she or another provider had committed an error. The advantage of the technique is that it is not limited to a specific type of error. Our goals were to 1) measure the rate at which medical errors were documented in medical records, and 2) characterize the types of errors that were reported.

  8. Opioid receptors regulate blocking and overexpectation of fear learning in conditioned suppression.

    PubMed

    Arico, Carolyn; McNally, Gavan P

    2014-04-01

    Endogenous opioids play an important role in prediction error during fear learning. However, the evidence for this role has been obtained almost exclusively using the species-specific defense response of freezing as the measure of learned fear. It is unknown whether opioid receptors regulate predictive fear learning when other measures of learned fear are used. Here, we used conditioned suppression as the measure of learned fear to assess the role of opioid receptors in fear learning. Experiment 1a studied associative blocking of fear learning. Rats in an experimental group received conditioned stimulus A (CSA) + training in Stage I and conditioned stimulus A and B (CSAB) + training in Stage II, whereas rats in a control group received only CSAB + training in Stage II. The prior fear conditioning of CSA blocked fear learning to conditioned stimulus B (CSB) in the experimental group. In Experiment 1b, naloxone (4 mg/kg) administered before Stage II prevented this blocking, thereby enabling normal fear learning to CSB. Experiment 2a studied overexpectation of fear. Rats received CSA + training and CSB + training in Stage I, and then rats in the experimental group received CSAB + training in Stage II whereas control rats did not. The Stage II compound training of CSAB reduced fear to CSA and CSB on test. In Experiment 2b, naloxone (4 mg/kg) administered before Stage II prevented this overexpectation. These results show that opioid receptors regulate Pavlovian fear learning, augmenting learning in response to positive prediction error and impairing learning in response to negative prediction error, when fear is assessed via conditioned suppression. These effects are identical to those observed when freezing is used as the measure of learned fear. These findings show that the role for opioid receptors in regulating fear learning extends across multiple measures of learned fear.

  9. Restrictions on surgical resident shift length does not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

    In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. Although shorter shifts are intended to reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013, to June 30, 2015, and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The greatest number of errors was due to cognitive failures (229, 59.6%), whereas the fewest were due to team failures (127, 33.2%). By subset, technical errors accounted for the highest number of errors (169, 44.1%). There were no differences in types of errors for cases that were nonelective, occurred at night, or involved residents. Among adverse events reported in this departmental surgical morbidity and mortality review, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Network Dynamics Underlying Speed-Accuracy Trade-Offs in Response to Errors

    PubMed Central

    Agam, Yigal; Carey, Caitlin; Barton, Jason J. S.; Dyckman, Kara A.; Lee, Adrian K. C.; Vangel, Mark; Manoach, Dara S.

    2013-01-01

    The ability to dynamically and rapidly adjust task performance based on its outcome is fundamental to adaptive, flexible behavior. Over trials of a task, responses speed up until an error is committed and after the error responses slow down. These dynamic adjustments serve to optimize performance and are well-described by the speed-accuracy trade-off (SATO) function. We hypothesized that SATOs based on outcomes reflect reciprocal changes in the allocation of attention between the internal milieu and the task-at-hand, as indexed by reciprocal changes in activity between the default and dorsal attention brain networks. We tested this hypothesis using functional MRI to examine the pattern of network activation over a series of trials surrounding and including an error. We further hypothesized that these reciprocal changes in network activity are coordinated by the posterior cingulate cortex (PCC) and would rely on the structural integrity of its white matter connections. Using diffusion tensor imaging, we examined whether fractional anisotropy of the posterior cingulum bundle correlated with the magnitude of reciprocal changes in network activation around errors. As expected, reaction time (RT) in trials surrounding errors was consistent with predictions from the SATO function. Activation in the default network was: (i) inversely correlated with RT, (ii) greater on trials before than after an error and (iii) maximal at the error. In contrast, activation in the right intraparietal sulcus of the dorsal attention network was (i) positively correlated with RT and showed the opposite pattern: (ii) less activation before than after an error and (iii) the least activation on the error. Greater integrity of the posterior cingulum bundle was associated with greater reciprocity in network activation around errors. These findings suggest that dynamic changes in attention to the internal versus external milieu in response to errors underlie SATOs in RT and are mediated by the PCC. PMID:24069223

  11. A Computer-Aided Type-II Fuzzy Image Processing for Diagnosis of Meniscus Tear.

    PubMed

    Zarandi, M H Fazel; Khadangi, A; Karimi, F; Turksen, I B

    2016-12-01

    Meniscal tear is one of the most prevalent knee disorders among young athletes and the aging population, and requires correct diagnosis and, if necessary, surgical intervention. The errors that accompany human intervention, as well as the obstacles of manual meniscal tear detection, highlight the need for automatic detection techniques. This paper presents a type-2 fuzzy expert system for meniscal tear diagnosis using PD magnetic resonance images (MRI). The proposed type-2 fuzzy image processing model is composed of three distinct modules: pre-processing, segmentation, and classification. A λ-enhancement algorithm is used to perform the pre-processing step. For the segmentation step, Interval Type-2 Fuzzy C-Means (IT2FCM) is first applied to the images, and its outputs are then employed by Interval Type-2 Possibilistic C-Means (IT2PCM) for post-processing. The second stage concludes with re-estimation of the "η" value to enhance IT2PCM. Finally, a perceptron neural network with two hidden layers is used for the classification stage. The results of the proposed type-2 expert system have been compared with a well-known segmentation algorithm, confirming the superiority of the proposed system in meniscal tear recognition.

  12. ASCERTAINMENT OF ON-ROAD SAFETY ERRORS BASED ON VIDEO REVIEW

    PubMed Central

    Dawson, Jeffrey D.; Uc, Ergun Y.; Anderson, Steven W.; Dastrup, Elizabeth; Johnson, Amy M.; Rizzo, Matthew

    2011-01-01

    Using an instrumented vehicle, we have studied several aspects of the on-road performance of healthy and diseased elderly drivers. One goal of such studies is to ascertain the type and frequency of driving safety errors. Because the judgment of such errors is somewhat subjective, we applied a taxonomy system of 15 general safety error categories and 76 specific safety error types. We also employed and trained professional driving instructors to review the video data of the on-road drives. In this report, we illustrate our rating system on a group of 111 drivers, ages 65 to 89. These drivers made errors in 13 of the 15 error categories, comprising 42 of the 76 error types. A mean (SD) of 35.8 (12.8) safety errors per drive was noted, with 2.1 (1.7) of them judged as serious. Our methodology may be useful in applications such as intervention studies, and in longitudinal studies of changes in driving abilities in patients with declining cognitive ability. PMID:24273753

  13. Youth Attitude Tracking Study II Wave 17 -- Fall 1986.

    DTIC Science & Technology

    1987-06-01

    [Table-of-contents excerpt; report body not captured in this record] Segmentation Analyses; Methodology of YATS II, including Sampling Design Overview; Appendix A: Sampling Design, Estimation Procedures and Estimated Sampling Errors; Appendix B: Data Collection Procedures.

  14. 32 CFR 513.1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...

  15. 32 CFR 513.1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...

  16. 32 CFR 513.1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...

  17. 32 CFR 513.1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... news media organizations to the unit, installation, or command public affairs officer for response. (6... received from news media organizations. (ii) Coordinate with the SJA before making any response. (e) Policy... remain proof of indebtedness until— (i) Made good. (ii) Proven to be the error of the financial...

  18. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  19. Identifying Novice Student Programming Misconceptions and Errors from Summative Assessments

    ERIC Educational Resources Information Center

    Veerasamy, Ashok Kumar; D'Souza, Daryl; Laakso, Mikko-Jussi

    2016-01-01

    This article presents a study aimed at examining the novice student answers in an introductory programming final e-exam to identify misconceptions and types of errors. Our study used the Delphi concept inventory to identify student misconceptions and skill, rule, and knowledge-based errors approach to identify the types of errors made by novices…

  20. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
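
    A rough simulation of this mechanism, under invented settings rather than the article's scenarios: the focal predictor X has no effect on the outcome once the true covariate Z is controlled, but only an error-laden proxy W of Z is available, so residual confounding leaks into the test of X.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      def false_positive_rate(reliability, n=300, n_sims=2000, alpha=0.05):
          """Share of simulations that declare X 'significant' although X has no direct effect
          on Y; Z is observed only through a proxy W with the given reliability."""
          hits = 0
          for _ in range(n_sims):
              z = rng.normal(size=n)
              x = 0.7 * z + rng.normal(size=n)          # X correlated with Z, no effect on Y
              y = 0.7 * z + rng.normal(size=n)          # Y depends on Z only
              w = z + rng.normal(scale=np.sqrt(1 / reliability - 1), size=n)
              X = np.column_stack([np.ones(n), x, w])
              beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
              df = n - 3
              se = np.sqrt(rss[0] / df * np.linalg.inv(X.T @ X)[1, 1])
              p = 2 * stats.t.sf(abs(beta[1] / se), df)
              hits += p < alpha
          return hits / n_sims

      for rel in (1.0, 0.8, 0.5):
          print(f"reliability {rel}: false-positive rate ≈ {false_positive_rate(rel):.3f}")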

  1. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  2. Derivation and precision of mean field electrodynamics with mesoscale fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2018-06-01

    Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.

  3. Effect of Thermal Maturation on n-alkanes and Kerogen in Preserved Organic Matter: Implications for Paleoenvironment Biomarkers

    NASA Astrophysics Data System (ADS)

    Craven, O. D.; Longbottom, T. L.; Hockaday, W. C.; Blackaby, E.

    2017-12-01

    Understanding the effects of maturity on biomarkers is vital in assessing biomarker reliability in mature sediments. It is well known for n-alkanes that increased maturity shortens chain lengths and decreases the odd-over-even preference; however, the amount of change in these variables has not been determined for different maturities and types of preserved organic matter. For this reason, it is difficult to judge the trustworthiness of even lightly matured samples for paleoenvironment reconstruction. Another complication is the difficulty of accurately determining maturity, as many maturity indicators are error-prone or not appropriate at low maturities. Using hydrous pyrolysis, we artificially matured black shale samples with type I (lacustrine) and type II (marine) kerogen to measure changes in n-alkane length and odd-over-even preference. Whole rock samples underwent hydrous pyrolysis for 72 hours at 250 °C, 300 °C, 325 °C, 350 °C, and 375 °C to cover a wide maturity range. From the immature and artificially matured samples, the bitumen was extracted and the saturate fraction was separated using column chromatography. The saturate fraction was analyzed for n-alkanes using gas chromatography-mass spectrometry. Kerogen structural changes were also measured using solid-state 13C NMR to relate changes in n-alkane biomarkers to changes in kerogen structure. Results show that for type I bitumen the n-alkanes did not change at low maturities considered premature in terms of oil generation (<325 °C). The NMR spectra of the type I kerogen support this lack of change: at low maturities no changes in the aliphatic portion (Fal) were observed; however, after 325 °C, Fal decreased with increasing maturity. The loss of Fal indicates that the kerogen contributes hydrocarbons to the bitumen, causing changes in the n-alkane measurements. The type II kerogen's Fal also decreased with increasing maturity, but unlike in the type I kerogen, Fal loss started at low maturities. The differences between the matured type I and II organic matter indicate that organic matter type affects when n-alkane measurements change due to maturity. Additionally, the kerogen carbonyl functional group (FaC) decreases greatly from immature to low maturities, leveling off between 300 °C and 325 °C, allowing FaC to be a tool for determining low maturities.

  4. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
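
    The kind of calculation involved can be sketched as a small simulation, using an invented data-dependent rule rather than any design from the paper: the stage-2 sample size depends on the stage-1 estimate, and the naive pooled z-test is then evaluated under the null so its empirical rejection rate can be compared with the nominal 0.05.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def naive_two_stage_alpha(n1=50, n_sims=20000, alpha=0.05):
          """Empirical Type I error of a pooled z-test (known sigma = 1) when the second-stage
          sample size depends on the first-stage estimate. H0 is true: the mean is 0."""
          rejections = 0
          for _ in range(n_sims):
              x1 = rng.normal(0.0, 1.0, n1)
              n2 = 25 if abs(x1.mean()) > 0.1 else 200   # data-dependent stage-2 design
              x2 = rng.normal(0.0, 1.0, n2)
              pooled = np.concatenate([x1, x2])
              z = pooled.mean() * np.sqrt(pooled.size)   # z statistic with sigma = 1
              rejections += 2 * stats.norm.sf(abs(z)) < alpha
          return rejections / n_sims

      print(naive_two_stage_alpha())   # compare with the nominal 0.05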

  5. Independent oscillatory patterns determine performance fluctuations in children with attention deficit/hyperactivity disorder.

    PubMed

    Yordanova, Juliana; Albrecht, Björn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil

    2011-06-01

    The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and erroneous human actions may have a subconscious, activation-based determination. One possible source of subconscious interference may be the default mode network which, in contrast to the attentional network, manifests intrinsic oscillations at very low (<0.1 Hz) frequencies. In the present study, we analyse the time dynamics of performance accuracy to search for multisecond periodic fluctuations of error occurrence. Attentional lapses in attention deficit/hyperactivity disorder are proposed to originate from interference from intrinsically oscillating networks. Identifying periodic error fluctuations with a frequency <0.1 Hz in patients with attention deficit/hyperactivity disorder would provide behavioural evidence for such interference. Performance was monitored during a visual flanker task in 92 children (7- to 16-year-olds), 47 with attention deficit/hyperactivity disorder, combined type, and 45 healthy controls. Using an original approach, the time distribution of error occurrence was analysed in the frequency and time-frequency domains in order to detect rhythmic periodicity. Major results demonstrate that in both patients and controls, error behaviour was characterized by multisecond rhythmic fluctuations with a period of ∼12 s, appearing with a delay after transition to task. Only in attention deficit/hyperactivity disorder was there an additional 'pathological' oscillation of error generation, which determined periodic drops of performance accuracy every 20-30 s. Thus, in patients, periodic error fluctuations were modulated by two independent oscillatory patterns. The findings demonstrate that: (i) the attentive behaviour of children is determined by multisecond regularities; and (ii) a unique additional periodicity guides performance fluctuations in patients. These observations may re-conceptualize the understanding of attentive behaviour beyond executive top-down control and may reveal new origins of psychopathological behaviours in attention deficit/hyperactivity disorder.
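
    As a toy illustration of searching for slow periodicity in an error time series (not the authors' time-frequency method), the sketch below injects an error roughly every 25 s into a low background error rate and recovers the dominant period from the power spectrum; all values are invented.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 600                                        # one trial per second, 10 minutes
      errors = (rng.random(n) < 0.05).astype(float)  # background error rate
      errors[::25] = 1.0                             # periodic lapse every 25 s (~0.04 Hz)

      spectrum = np.abs(np.fft.rfft(errors - errors.mean())) ** 2
      freqs = np.fft.rfftfreq(n, d=1.0)
      peak = freqs[np.argmax(spectrum[1:]) + 1]      # skip the zero-frequency bin
      print(f"dominant error periodicity ≈ {1 / peak:.1f} s")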

  6. Opioid receptors mediate direct predictive fear learning: evidence from one-trial blocking.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2007-04-01

    Pavlovian fear learning depends on predictive error, so that fear learning occurs when the actual outcome of a conditioning trial exceeds the expected outcome. Previous research has shown that opioid receptors, including mu-opioid receptors in the ventrolateral quadrant of the midbrain periaqueductal gray (vlPAG), mediate such predictive fear learning. Four experiments reported here used a within-subject one-trial blocking design to study whether opioid receptors mediate a direct or indirect action of predictive error on Pavlovian association formation. In Stage I, rats were trained to fear conditioned stimulus (CS) A by pairing it with shock. In Stage II, CSA and CSB were co-presented once and co-terminated with shock. Two novel stimuli, CSC and CSD, were also co-presented once and co-terminated with shock in Stage II. The results showed one-trial blocking of fear learning (Experiment 1) as well as one-trial unblocking of fear learning when Stage II training employed a higher intensity footshock than was used in Stage I (Experiment 2). Systemic administrations of the opioid receptor antagonist naloxone (Experiment 3) or intra-vlPAG administrations of the selective mu-opioid receptor antagonist CTAP (Experiment 4) prior to Stage II training prevented one-trial blocking. These results show that opioid receptors mediate the direct actions of predictive error on Pavlovian association formation.

  7. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Due to their high measurement accuracy and wide range of applications, lever-type stylus profilometers are commonly used in industrial research. However, the error caused by the lever structure has a great influence on the profile measurement; thus, this paper analyzes the error of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead simplex method, and the results are verified by spherical-surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
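
    A minimal sketch of the idea, with an invented distortion model and calibration data (not the paper's error model): measurements of a sphere of known radius are corrected by parameters found with scipy's Nelder-Mead minimizer so that the corrected profile best matches the nominal sphere.

      import numpy as np
      from scipy.optimize import minimize

      R = 10.0                                       # known calibration-sphere radius (mm)
      x = np.linspace(-6, 6, 61)
      true_z = np.sqrt(R**2 - x**2) - R              # nominal profile heights
      rng = np.random.default_rng(2)
      # simulated lever distortion: gain error plus a quadratic arc term, plus noise
      z_meas = 1.02 * true_z + 0.004 * true_z**2 + rng.normal(0, 0.002, x.size)

      def residual_ss(params):
          """Sum of squared residuals after applying the candidate correction."""
          gain, quad = params
          z_corr = gain * z_meas + quad * z_meas**2
          return np.sum((z_corr - true_z) ** 2)

      fit = minimize(residual_ss, x0=[1.0, 0.0], method="Nelder-Mead")
      print(fit.x, residual_ss(fit.x))               # corrected parameters and residual error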

  8. The efficacy of three objective systems for identifying beef cuts that can be guaranteed tender.

    PubMed

    Wheeler, T L; Vote, D; Leheska, J M; Shackelford, S D; Belk, K E; Wulf, D M; Gwartney, B L; Koohmaraie, M

    2002-12-01

    The objective of this study was to determine the accuracy of three objective systems (prototype BeefCam, colorimeter, and slice shear force) for identifying guaranteed tender beef. In Phase I, 308 carcasses (105 Top Choice, 101 Low Choice, and 102 Select) from two commercial plants were tested. In Phase II, 400 carcasses (200 rolled USDA Select and 200 rolled USDA Choice) from one commercial plant were tested. The three systems were evaluated based on progressive certification of the longissimus as "tender" in 10% increments (the best 10, 20, 30%, etc., certified as "tender" by each technology; 100% certification would mean no sorting for tenderness). In Phase I, the error (percentage of carcasses certified as tender that had Warner-Bratzler shear force of ≥5 kg at 14 d postmortem) for 100% certification using all carcasses was 14.1%. All certification levels up to 80% (slice shear force) and up to 70% (colorimeter) had less error (P < 0.05) than 100% certification. Errors in all levels of certification by prototype BeefCam (13.8 to 9.7%) were not different (P > 0.05) from 100% certification. In Phase I, the error for 100% certification for USDA Select carcasses was 30.7%. For Select carcasses, all slice shear force certification levels up to 60% (0 to 14.8%) had less error (P < 0.05) than 100% certification. For Select carcasses, errors in all levels of certification by colorimeter (20.0 to 29.6%) and by BeefCam (27.5 to 31.4%) were not different (P > 0.05) from 100% certification. In Phase II, the error for 100% certification for all carcasses was 9.3%. For all levels of slice shear force certification less than 90% (for all carcasses) or less than 80% (Select carcasses), errors in tenderness certification were lower (P < 0.05) than for 100% certification. In Phase II, for all carcasses or Select carcasses, colorimeter and prototype BeefCam certifications did not significantly reduce errors (P > 0.05) compared to 100% certification. Thus, the direct measure of tenderness provided by slice shear force results in more accurate identification of "tender" beef carcasses than either of the indirect technologies, prototype BeefCam or colorimeter, particularly for USDA Select carcasses. As tested in this study, slice shear force, but not the prototype BeefCam or colorimeter systems, accurately identified "tender" beef.

  9. Addressing Common Student Technical Errors in Field Data Collection: An Analysis of a Citizen-Science Monitoring Project.

    PubMed

    Philippoff, Joanna; Baumgartner, Erin

    2016-03-01

    The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.

  10. Likelihood-Based Random-Effect Meta-Analysis of Binary Events.

    PubMed

    Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D

    2015-01-01

    Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimation and testing of overall effect and heterogeneity are evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
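
    A minimal likelihood-based sketch is given below, under assumptions that simplify the models discussed in the article: study-level log odds ratios are treated as approximately normal (with a 0.5 continuity correction), and a normal random effect is placed on the pooled effect. The four 2x2 tables are invented for illustration; the exact mixed-effects models the article evaluates would model the binomial event counts directly.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      # Hypothetical events / sample sizes in treatment and control arms per study.
      events_t = np.array([3, 5, 2, 8]);  n_t = np.array([100, 120, 80, 150])
      events_c = np.array([6, 9, 4, 12]); n_c = np.array([100, 118, 82, 151])

      # Study-level log odds ratios and within-study variances (0.5 continuity correction).
      a, b = events_t + 0.5, n_t - events_t + 0.5
      c, d = events_c + 0.5, n_c - events_c + 0.5
      y = np.log(a * d / (b * c))
      v = 1 / a + 1 / b + 1 / c + 1 / d

      def neg_loglik(params):
          """Normal-normal random-effects likelihood: y_i ~ N(mu, v_i + tau^2)."""
          mu, log_tau2 = params
          tau2 = np.exp(log_tau2)
          return -np.sum(stats.norm.logpdf(y, mu, np.sqrt(v + tau2)))

      fit = minimize(neg_loglik, x0=[0.0, np.log(0.1)], method="Nelder-Mead")
      mu_hat, tau2_hat = fit.x[0], np.exp(fit.x[1])
      print("pooled log OR:", mu_hat, "between-study tau^2:", tau2_hat)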

  11. Light curves of 213 Type Ia supernovae from the Essence survey

    DOE PAGES

    Narayan, G.; Rest, A.; Tucker, B. E.; ...

    2016-05-06

    The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.1 < z < 0.81 between 2002 and 2008. We present their R- and I-band photometry, measured from images obtained using the MOSAIC II camera at the CTIO Blanco, along with rapid-response spectroscopy for each object. We use our spectroscopic follow-up observations to determine an accurate, quantitative classification and a precise redshift. Through an extensive calibration program we have improved the precision of the CTIO Blanco natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. Here, we assess the effect of various potential sources of systematic bias on our measured fluxes, and estimate that the dominant term in the systematic error budget, from the photometric calibration of our absolute fluxes, is ~1%.

  12. Light curves of 213 Type Ia supernovae from the Essence survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, G.; Rest, A.; Tucker, B. E.

    The ESSENCE survey discovered 213 Type Ia supernovae at redshifts 0.1 < z < 0.81 between 2002 and 2008. We present their R- and I-band photometry, measured from images obtained using the MOSAIC II camera at the CTIO Blanco, along with rapid-response spectroscopy for each object. We use our spectroscopic follow-up observations to determine an accurate, quantitative classification and a precise redshift. Through an extensive calibration program we have improved the precision of the CTIO Blanco natural photometric system. We use several empirical metrics to measure our internal photometric consistency and our absolute calibration of the survey. Here, we assess the effect of various potential sources of systematic bias on our measured fluxes, and estimate that the dominant term in the systematic error budget, from the photometric calibration of our absolute fluxes, is ~1%.

  13. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

    The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. Therefore, these tests were compared in terms of Type III error rates across a variety of population distributions,…

  14. Item Discrimination and Type I Error in the Detection of Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Yanju; Brooks, Gordon P.; Johanson, George A.

    2012-01-01

    In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…
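
    For orientation, the sketch below computes the basic Mantel-Haenszel chi-square for differential item functioning from stratified 2x2 tables (reference/focal group by correct/incorrect, stratified on total score). The counts are hypothetical and this is only the MH statistic itself, not the simulation design of the study.

      def mantel_haenszel_chi2(tables):
          """Mantel-Haenszel chi-square (with continuity correction) across score strata.
          Each table is ((a, b), (c, d)): reference correct/incorrect, focal correct/incorrect."""
          A = E = V = 0.0
          for (a, b), (c, d) in tables:
              n = a + b + c + d
              row1, col1 = a + b, a + c
              A += a
              E += row1 * col1 / n
              V += row1 * (c + d) * col1 * (b + d) / (n**2 * (n - 1))
          return (abs(A - E) - 0.5) ** 2 / V

      # Hypothetical strata (total-score levels) for one item:
      tables = [((30, 10), (25, 15)), ((40, 5), (35, 10)), ((20, 20), (15, 25))]
      print(mantel_haenszel_chi2(tables))   # compare against a chi-square(1) critical value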

  15. 78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...

  16. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other to motion along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552.

  17. Refractive errors in Aminu Kano Teaching Hospital, Kano Nigeria.

    PubMed

    Lawan, Abdu; Eme, Okpo

    2011-12-01

    The aim of the study is to retrospectively determine the pattern of refractive errors seen in the eye clinic of Aminu Kano Teaching Hospital, Kano, Nigeria from January to December 2008. The clinic refraction register was used to retrieve the case folders of all patients refracted during the review period. Information extracted includes patient's age, sex, and type of refractive error. All patients had a basic eye examination (to rule out other causes of subnormal vision) including intraocular pressure measurement and streak retinoscopy at a two-thirds metre working distance. The final subjective refraction correction given to the patients was used to categorise the type of refractive error. Refractive errors were observed in 1584 patients and accounted for 26.9% of clinic attendance. There were more females than males (M:F = 1.0:1.2). The common types of refractive errors were presbyopia in 644 patients (40%), various types of astigmatism in 527 patients (33%), myopia in 216 patients (14%), hypermetropia in 171 patients (11%), and aphakia in 26 patients (2%). Refractive errors are a common cause of presentation in the eye clinic. Identification and correction of refractive errors should be an integral part of eye care delivery.

  18. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  19. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    PubMed

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with

  20. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to 1 time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  1. Dermatoglyphics and Cheiloscopy as Key Tools in Resolving the Genetic Correlation of Inheritance Patterns in Cleft Lip and Palate Patients: An Assessment of 160 Patients.

    PubMed

    Singh, Priyankar; Nathani, Dipesh B

    2017-09-01

    The objective of this study was to correlate dermatoglyphics and cheiloscopy with genetic inheritance in cleft lip and cleft palate patients. This was a case-control study to look for asymmetry in finger and lip print patterns. All of the participants were divided into two equal groups (40 mothers and 40 fathers in each group). The data were analyzed by three evaluators who were blind to the study to avoid any chances of error. A sample of 160 sporadic participants were identified and evaluated. Group A was composed of 80 healthy parents not affected by cleft lip and cleft palate but had at least one child born with nonsyndromic cleft. Group B consisted of 80 healthy parents not affected by cleft lip and cleft palate and had healthy children without cleft lip and cleft palate. Main outcome measures were marked dermatoglyphic asymmetry and specific lip print pattern in the study group. We found marked asymmetry in various fingerprints and specific type II and type III lip print in the study group when compared with the control group. It was observed that groove count on the lip was significantly more frequent in study group parents. Our study determined that there is a significant correlation between increased dermatoglyphic asymmetry and specific type II and type III lip print pattern in parents of children born with cleft. This could act as an important screening marker for the prediction of cleft lip and cleft palate inheritance.

  2. Emergence of Pathogenic Coronaviruses in Cats by Homologous Recombination between Feline and Canine Coronaviruses

    PubMed Central

    Terada, Yutaka; Matsui, Nobutaka; Noguchi, Keita; Kuwata, Ryusei; Shimoda, Hiroshi; Soma, Takehisa; Mochizuki, Masami; Maeda, Ken

    2014-01-01

    Type II feline coronavirus (FCoV) emerged via double recombination between type I FCoV and type II canine coronavirus (CCoV). In this study, two type I FCoVs, three type II FCoVs and ten type II CCoVs were genetically compared. The results showed that three Japanese type II FCoVs, M91-267, KUK-H/L and Tokyo/cat/130627, also emerged by homologous recombination between type I FCoV and type II CCoV, and that their parent viruses were genetically different from one another. In addition, the 3′-terminal recombination sites of M91-267, KUK-H/L and Tokyo/cat/130627 were different from one another within the genes encoding membrane and spike proteins, and the 5′-terminal recombination sites were also located at different regions of ORF1. These results indicate that at least three Japanese type II FCoVs emerged independently. Sera from a cat experimentally infected with type I FCoV were unable to neutralize type II CCoV infection, indicating that cats persistently infected with type I FCoV may be superinfected with type II CCoV. Our previous study reported that few Japanese cats have antibodies against type II FCoV. All of these observations suggest that type II FCoV emerged inside the cat body and is unable to readily spread among cats, indicating that these recombination events for emergence of pathogenic coronaviruses occur frequently. PMID:25180686

  3. Teaching Statistics with Minitab II.

    ERIC Educational Resources Information Center

    Ryan, T. A., Jr.; And Others

    Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…

  4. On High and Low Starting Frequencies of Type II Radio Bursts

    NASA Astrophysics Data System (ADS)

    Sharma, J.; Mittal, N.

    2017-06-01

    We have studied the characteristics of type II radio bursts observed by the WIND/WAVES radio instrument during the period May 1996 to March 2015, covering solar cycles 23 and 24. A total of 642 events were recorded by the instrument during the study period. We have divided the events into two starting-frequency ranges (high, >1 MHz; low, ≤1 MHz), labelled type II1 (1-16 MHz) and type II2 (20 kHz-1020 kHz) radio bursts, which constitute the DH and km type II radio bursts observed by the WIND spacecraft, and determined their time and frequency characteristics. The mean drift rates of type II1 and type II2 radio bursts are 29.76 × 10⁻⁴ MHz/s and 0.17 × 10⁻⁴ MHz/s, respectively, which shows that type II1 bursts with high starting frequencies have larger drift rates than type II2 bursts with low starting frequencies. We also report that the starting frequency and the drift rate of type II1 bursts are well correlated, with a linear correlation coefficient of 0.58.
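
    As a simple illustration of how such drift rates are obtained, the sketch below converts start/end frequencies and durations into df/dt values and correlates drift rate with starting frequency; the numbers are invented and are not the WIND/WAVES measurements.

      import numpy as np
      from scipy import stats

      # Hypothetical DH type II events: start/end frequencies (MHz) and durations (s).
      f_start = np.array([14.0, 10.0, 6.0, 4.0, 12.0])
      f_end = np.array([4.0, 2.0, 1.5, 1.0, 3.0])
      duration = np.array([1800.0, 2400.0, 3000.0, 3600.0, 2100.0])

      drift = (f_start - f_end) / duration            # df/dt in MHz/s
      r, p = stats.pearsonr(f_start, drift)           # start-frequency vs. drift-rate correlation
      print(drift, r, p)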

  5. Effects of a Multicomponent Life-Style Intervention on Weight, Glycemic Control, Depressive Symptoms, and Renal Function in Low-Income, Minority Patients With Type 2 Diabetes: Results of the Community Approach to Lifestyle Modification for Diabetes Randomized Controlled Trial.

    PubMed

    Moncrieft, Ashley E; Llabre, Maria M; McCalla, Judith Rey; Gutt, Miriam; Mendez, Armando J; Gellman, Marc D; Goldberg, Ronald B; Schneiderman, Neil

    2016-09-01

    Few interventions have combined life-style and psychosocial approaches in the context of Type 2 diabetes management. The purpose of this study was to determine the effect of a multicomponent behavioral intervention on weight, glycemic control, renal function, and depressive symptoms in a sample of overweight/obese adults with Type 2 diabetes and marked depressive symptoms. A sample of 111 adults with Type 2 diabetes was randomly assigned to a 1-year intervention (n = 57) or usual care (n = 54) in a parallel groups design. Primary outcomes included weight, glycosylated hemoglobin, and Beck Depression Inventory II score. Estimated glomerular filtration rate served as a secondary outcome. All measures were assessed at baseline and 6 and 12 months after randomization by assessors blind to randomization. Latent growth modeling was used to examine intervention effects on each outcome. The intervention resulted in decreased weight (mean [M] = 0.322 kg, standard error [SE] = 0.124 kg, p = .010), glycosylated hemoglobin (M = 0.066%, SE = 0.028%, p = .017), and Beck Depression Inventory II scores (M = 1.009, SE = 0.226, p < .001), and improved estimated glomerular filtration rate (M = 0.742 mL/min/1.73 m², SE = 0.318 mL/min/1.73 m², p = .020) each month during the first 6 months relative to usual care. Multicomponent behavioral interventions targeting weight loss and depressive symptoms as well as diet and physical activity are efficacious in the management of Type 2 diabetes. This study is registered at Clinicaltrials.gov ID: NCT01739205.

  6. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  7. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  8. Impact of miscommunication in medical dispute cases in Japan.

    PubMed

    Aoki, Noriaki; Uda, Kenji; Ohta, Sachiko; Kiuchi, Takahiro; Fukui, Tsuguya

    2008-10-01

    Medical disputes between physicians and patients can occur in non-negligent circumstances and may even result in compensation. We reviewed medical dispute cases to investigate the impact of miscommunication, especially in non-negligent situations. Systematic review of medical dispute records was done to identify the presence of the adverse events, the type of medical error, preventability, the perception of miscommunication by patients and the amount of compensation. The study was performed in Kyoto, Japan. We analyzed 155 medical dispute cases. We compared (i) frequency of miscommunication cases between negligent and non-negligent cases, and (ii) proportions of positive compensation between non-miscommunication and miscommunication cases stratified according to the existence of negligence. Multivariate logistic analysis was conducted to assess the independent factors related to positive compensation. Approximately 40% of the medical disputes (59/155) did not involve medical error (i.e. non-negligent). In the non-negligent cases, 64.4% (38/59) involved miscommunication, whereas in dispute cases with errors, 21.9% (21/96) involved miscommunications. (P

  9. Prevalence of medication errors in primary health care at Bahrain Defence Force Hospital – prescription-based study

    PubMed Central

    Aljasmi, Fatema; Almalood, Fatema

    2018-01-01

    Background One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304

  10. Accurate Typing of Human Leukocyte Antigen Class I Genes by Oxford Nanopore Sequencing.

    PubMed

    Liu, Chang; Xiao, Fangzhou; Hoisington-Lopez, Jessica; Lang, Kathrin; Quenzel, Philipp; Duffy, Brian; Mitra, Robi David

    2018-04-03

    Oxford Nanopore Technologies' MinION has expanded the current DNA sequencing toolkit by delivering long read lengths and extreme portability. The MinION has the potential to enable expedited point-of-care human leukocyte antigen (HLA) typing, an assay routinely used to assess the immunologic compatibility between organ donors and recipients, but the platform's high error rate makes it challenging to type alleles with accuracy. We developed and validated accurate typing of HLA by Oxford nanopore (Athlon), a bioinformatic pipeline that i) maps nanopore reads to a database of known HLA alleles, ii) identifies candidate alleles with the highest read coverage at different resolution levels that are represented as branching nodes and leaves of a tree structure, iii) generates consensus sequences by remapping the reads to the candidate alleles, and iv) calls the final diploid genotype by blasting consensus sequences against the reference database. Using two independent data sets generated on the R9.4 flow cell chemistry, Athlon achieved a 100% accuracy in class I HLA typing at the two-field resolution. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
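
    The coverage-ranking logic behind steps ii) and iv) of the pipeline described above can be loosely illustrated in Python. The sketch below is not the Athlon implementation: the allele names, the field-level truncation, and the homozygosity threshold are illustrative assumptions only.

        # Illustrative sketch (not the Athlon code): rank candidate HLA alleles by
        # read coverage at a chosen field resolution, then call a diploid genotype.
        from collections import Counter

        def call_diploid(read_hits, field_level=2):
            """read_hits: dict mapping read_id -> best-matching allele name (e.g. 'A*01:01:01').
            Truncates allele names to the requested field resolution, ranks them by
            read coverage, and returns the two best-supported alleles."""
            coverage = Counter()
            for allele in read_hits.values():
                truncated = ":".join(allele.split(":")[:field_level])
                coverage[truncated] += 1
            ranked = coverage.most_common()
            if not ranked:
                return None
            # Homozygous call if the runner-up has negligible support (threshold is arbitrary here).
            if len(ranked) == 1 or ranked[1][1] < 0.1 * ranked[0][1]:
                return (ranked[0][0], ranked[0][0])
            return (ranked[0][0], ranked[1][0])

        # Example with toy read assignments:
        reads = {"r1": "A*01:01:01", "r2": "A*01:01:02", "r3": "A*02:01:01",
                 "r4": "A*01:01:01", "r5": "A*02:01:05"}
        print(call_diploid(reads))  # ('A*01:01', 'A*02:01')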

  11. A role for midline and intralaminar thalamus in the associative blocking of Pavlovian fear conditioning.

    PubMed

    Sengupta, Auntora; McNally, Gavan P

    2014-01-01

    Fear learning occurs in response to positive prediction error, when the expected outcome of a conditioning trial exceeds that predicted by the conditioned stimuli present. This role for error in Pavlovian association formation is best exemplified by the phenomenon of associative blocking, whereby prior fear conditioning of conditioned stimulus (CS) A is able to prevent learning to CSB when they are conditioned in compound. The midline and intralaminar thalamic nuclei (MIT) are well-placed to contribute to fear prediction error because they receive extensive projections from the midbrain periaqueductal gray-which has a key role in fear prediction error-and project extensively to prefrontal cortex and amygdala. Here we used an associative blocking design to study the role of MIT in fear learning. In Stage I rats were trained to fear CSA via pairings with shock. In Stage II rats received compound fear conditioning of CSAB paired with shock. On test, rats that received Stage I training expressed less fear to CSB relative to control rats that did not receive this training. Microinjection of bupivacaine into MIT prior to Stage II training had no effect on the expression of fear during Stage II and had no effect on fear learning in controls, but prevented associative blocking and so enabled fear learning to CSB. These results show an important role for MIT in predictive fear learning and are discussed with reference to previous findings implicating the midline and posterior intralaminar thalamus in fear learning and fear responding.

  12. A spline-based approach for computing spatial impulse responses.

    PubMed

    Ellis, Michael A; Guenther, Drake; Walker, William F

    2007-05-01

    Computer simulations are an essential tool for the design of phased-array ultrasonic imaging systems. FIELD II, which determines the two-way temporal response of a transducer at a point in space, is the current de facto standard for ultrasound simulation tools. However, the need often arises to obtain two-way spatial responses at a single point in time, a set of dimensions for which FIELD II is not well optimized. This paper describes an analytical approach for computing the two-way, far-field, spatial impulse response from rectangular transducer elements under arbitrary excitation. The described approach determines the response as the sum of polynomial functions, making computational implementation quite straightforward. The proposed algorithm, named DELFI, was implemented as a C routine under Matlab and results were compared to those obtained under similar conditions from the well-established FIELD II program. Under the specific conditions tested here, the proposed algorithm was approximately 142 times faster than FIELD II for computing spatial sensitivity functions with similar amounts of error. For temporal sensitivity functions with similar amounts of error, the proposed algorithm was about 1.7 times slower than FIELD II using rectangular elements and 19.2 times faster than FIELD II using triangular elements. DELFI is shown to be an attractive complement to FIELD II, especially when spatial responses are needed at a specific point in time.

  13. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
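
    As a loose illustration of permutation-based inference for a quantile regression coefficient (not the rank score T or double-permutation F procedures evaluated above), the following Python sketch fits a quantile regression with statsmodels and builds a null distribution by permuting the predictor; the simulated data and number of permutations are arbitrary choices.

        # Minimal sketch: permutation p-value for one predictor in a quantile
        # regression. Simplified illustration only; not the paper's procedures.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.regression.quantile_regression import QuantReg

        rng = np.random.default_rng(1)
        n = 100
        x = rng.uniform(0, 10, n)
        y = 1.0 + 0.5 * x + rng.normal(0, 1, n)   # homogeneous-error model

        def slope_at_tau(x, y, tau):
            X = sm.add_constant(x)
            return QuantReg(y, X).fit(q=tau).params[1]

        tau = 0.9
        observed = slope_at_tau(x, y, tau)

        # Permute x to break any x-y association and rebuild the null distribution.
        n_perm = 500
        null = np.array([slope_at_tau(rng.permutation(x), y, tau) for _ in range(n_perm)])
        p_value = (np.sum(np.abs(null) >= np.abs(observed)) + 1) / (n_perm + 1)
        print(f"slope at tau={tau}: {observed:.3f}, permutation p = {p_value:.3f}")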

  14. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  15. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; ...

    2015-05-11

    The study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  16. Detecting paroxysmal coughing from pertussis cases using voice recognition technology.

    PubMed

    Parker, Danny; Picone, Joseph; Harati, Amir; Lu, Shuang; Jenkyns, Marion H; Polgreen, Philip M

    2013-01-01

    Pertussis is highly contagious; thus, prompt identification of cases is essential to control outbreaks. Clinicians experienced with the disease can easily identify classic cases, where patients have bursts of rapid coughing followed by gasps, and a characteristic whooping sound. However, many clinicians have never seen a case, and thus may miss initial cases during an outbreak. The purpose of this project was to use voice-recognition software to distinguish pertussis coughs from croup and other coughs. We collected a series of recordings representing pertussis, croup and miscellaneous coughing by children. We manually categorized coughs as either pertussis or non-pertussis, and extracted features for each category. We used Mel-frequency cepstral coefficients (MFCC), a sampling rate of 16 kHz, a frame duration of 25 msec, and a frame rate of 10 msec. The coughs were filtered. Each cough was divided into 3 sections of proportion 3-4-3. The average of the 13 MFCCs for each section was computed and made into a 39-element feature vector used for the classification. We used the following machine learning algorithms: Neural Networks, K-Nearest Neighbor (KNN), and a 200-tree Random Forest (RF). Data were reserved for cross-validation of the KNN and RF. The Neural Network was trained 100 times, and the averaged results are presented. After categorization, we had 16 examples of non-pertussis coughs and 31 examples of pertussis coughs. Over 90% of all pertussis coughs were properly classified as pertussis. The error rates were: Type I errors of 7%, 12%, and 25% and Type II errors of 8%, 0%, and 0%, using the Neural Network, Random Forest, and KNN, respectively. Our results suggest that we can build a robust classifier to assist clinicians and the public to help identify pertussis cases in children presenting with typical symptoms.
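
    A rough Python sketch of the feature pipeline described above is given below. It is not the authors' code: it assumes librosa and scikit-learn as dependencies, and the file paths and labels are placeholders for a labelled cough corpus. The parameters (16 kHz, 25 ms frames, 10 ms hop, 13 MFCCs, 3-4-3 split, 200-tree forest) follow the abstract.

        # Illustrative sketch of the described pipeline: 13 MFCCs per frame, each
        # cough split into sections of proportion 3:4:3, section-wise MFCC means
        # concatenated into a 39-element vector, classified with a random forest.
        import numpy as np
        import librosa
        from sklearn.ensemble import RandomForestClassifier

        SR = 16000

        def cough_features(path):
            y, sr = librosa.load(path, sr=SR)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                        n_fft=int(0.025 * sr), hop_length=int(0.010 * sr))
            n = mfcc.shape[1]                     # assumes each clip yields several frames
            cuts = [0, int(0.3 * n), int(0.7 * n), n]
            sections = [mfcc[:, cuts[i]:cuts[i + 1]].mean(axis=1) for i in range(3)]
            return np.concatenate(sections)       # 39-element feature vector

        # 'paths' and 'labels' (1 = pertussis, 0 = other) are placeholders.
        def train(paths, labels):
            X = np.vstack([cough_features(p) for p in paths])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            return clf.fit(X, np.asarray(labels))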

  17. Does the Cognitive Reflection Test actually capture heuristic versus analytic reasoning styles in older adults?

    PubMed

    Hertzog, Christopher; Smith, R Marit; Ariel, Robert

    2018-01-01

    Background/Study Context: This study evaluated adult age differences in the original three-item Cognitive Reflection Test (CRT; Frederick, 2005, The Journal of Economic Perspectives, 19, 25-42) and an expanded seven-item version of that test (Toplak et al., 2013, Thinking and Reasoning, 20, 147-168). The CRT is a numerical problem-solving test thought to capture a disposition towards either rapid, intuition-based problem solving (Type I reasoning) or a more thoughtful, analytical problem-solving approach (Type II reasoning). Test items are designed to induce heuristically guided errors that can be avoided if using an appropriate numerical representation of the test problems. We evaluated differences between young adults and old adults in CRT performance and correlates of CRT performance. Older adults (ages 60 to 80) were paid volunteers who participated in experiments assessing age differences in self-regulated learning. Young adults (ages 17 to 35) were students participating for pay as part of a project assessing measures of critical thinking skills or as a young comparison group in the self-regulated learning study. There were age differences in the number of CRT correct responses in two independent samples. Results with the original three-item CRT found older adults to have a greater relative proportion of errors based on providing the intuitive lure. However, younger adults actually had a greater proportion of intuitive errors on the long version of the CRT, relative to older adults. Item analysis indicated a much lower internal consistency of CRT items for older adults. These outcomes do not offer full support for the argument that older adults are higher in the use of a "Type I" cognitive style. The evidence was also consistent with an alternative hypothesis that age differences were due to lower levels of numeracy in the older samples. Alternative process-oriented evaluations of how older adults solve CRT items will probably be needed to determine conditions under which older adults manifest an increase in the Type I dispositional tendency to opt for superficial, heuristically guided problem representations in numerical problem-solving tasks.

  18. Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.

    PubMed

    Hampson, R E; Deadwyler, S A

    1996-11-26

    Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.

  19. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and foundation for the development of medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  20. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

    This article describes the results of qualitative descriptive research revealing the locations, types, and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed errors ranging from misunderstanding the concepts and principles of geometry itself to misapplying them in problem solving. The error types consist of concept errors, principle errors, and operational errors. Reflection with four subjects revealed the causes of these errors: 1) student learning motivation is very low, 2) in their high school learning experience, geometry was treated as unimportant, 3) students have very little experience using their own reasoning to solve problems, and 4) students' reasoning ability is still very low.

  1. Proportion of collagen type II in the extracellular matrix promotes the differentiation of human adipose-derived mesenchymal stem cells into nucleus pulposus cells.

    PubMed

    Tao, Yiqing; Zhou, Xiaopeng; Liu, Dongyu; Li, Hao; Liang, Chengzhen; Li, Fangcai; Chen, Qixin

    2016-01-01

    During the degeneration process, the catabolism of collagen type II and anabolism of collagen type I in nucleus pulposus (NP) may influence the bioactivity of transplanted cells. Human adipose-derived mesenchymal stem cells (hADMSCs) were cultured as micromasses or in a series of hydrogels containing graded proportions of a mix of collagen types I and II. Cell proliferation and cytotoxicity were detected using CCK-8 and LDH assays, respectively. The expression of differentiation-related genes and proteins, including SOX9, aggrecan, collagen type I, and collagen type II, was examined using RT-qPCR and Western blotting. Novel phenotypic genes were also detected by RT-qPCR and Western blotting. Alcian blue and dimethylmethylene blue assays were used to investigate sulfate proteoglycan expression, and PI3K/AKT, MAPK/ERK, and Smad signaling pathways were examined by Western blotting. The results showed that collagen hydrogels have good biocompatibility, and cell proliferation increased after collagen type II treatment. Expressions of SOX9, aggrecan, and collagen type II were increased in a collagen type II-dependent manner. Sulfate proteoglycan synthesis increased in proportion to collagen type II concentration. hADMSCs highly expressed the NP cell marker KRT19 only in collagen type II culture. Additionally, phosphorylated Smad3, which is associated with phosphorylated ERK, was increased after collagen type II stimulation. The concentration and type of collagen affect hADMSC differentiation into NP cells. Collagen type II significantly enhances hADMSC differentiation into NP cells and promotes extracellular matrix synthesis. Therefore, anabolism of collagen type I and catabolism of type II may attenuate the differentiation and biosynthesis of transplanted stem cells. © 2016 International Union of Biochemistry and Molecular Biology.

  2. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  3. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
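
    To make the design of such a simulation concrete, the Python sketch below runs a toy Monte Carlo for a continuous-by-dichotomous interaction in an OLS model and tabulates empirical power at several Type 1 error rates. The sample size and effect sizes are illustrative choices, not the values used in the study.

        # Minimal sketch of this kind of simulation: power to detect a
        # continuous-by-dichotomous interaction in OLS at several alpha levels.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        def one_rep(n=200, beta_int=0.15):
            x = rng.normal(size=n)                  # continuous predictor
            g = rng.integers(0, 2, size=n)          # dichotomous predictor
            y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.normal(size=n)
            X = sm.add_constant(np.column_stack([x, g, x * g]))
            return sm.OLS(y, X).fit().pvalues[3]    # p-value of the interaction term

        pvals = np.array([one_rep() for _ in range(2000)])
        for alpha in (0.05, 0.10, 0.20):
            print(f"alpha={alpha:.2f}: empirical power = {np.mean(pvals < alpha):.3f}")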

  4. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…

  5. TYPE Ia SUPERNOVA COLORS AND EJECTA VELOCITIES: HIERARCHICAL BAYESIAN REGRESSION WITH NON-GAUSSIAN DISTRIBUTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu

    2014-12-20

    We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.

  6. Understanding Shock Dynamics in the Inner Heliosphere with Modeling and Type II Radio Data: the 2010-04-03 Event

    NASA Technical Reports Server (NTRS)

    Xie, Hong Na; Odstrcil, Dusan; Mays, L.; Cyr, O. C. St.; Gopalswamy, N.; Cremades, H.

    2012-01-01

    The 2010 April 03 solar event was studied using observations from STEREO SECCHI, SOHO LASCO, and Wind kilometric Type II data (kmTII) combined with WSA-Cone-ENLIL model simulations performed at the Community Coordinated Modeling Center (CCMC). In particular, we identified the origin of the coronal mass ejection (CME) using STEREO EUVI and SOHO EIT images. A flux-rope model was fit to the SECCHI A and B, and LASCO images to determine the CME's direction, size, and actual speed. J-maps from STEREO COR2/HI-1/HI-2 and simulations from CCMC were used to study the formation and evolution of the shock in the inner heliosphere. In addition, we also studied the time-distance profile of the shock propagation from kmTII radio burst observations. The J-maps together with in-situ data from the Wind spacecraft provided an opportunity to validate the simulation results and the kmTII prediction. Here we report on a comparison of two methods of predicting interplanetary shock arrival time: the ENLIL model and the kmTII method; and investigate whether or not using the ENLIL model density improves the kmTII prediction. We found that the ENLIL model predicted the kinematics of shock evolution well. The shock arrival times (SAT) and linear-fit shock velocities in the ENLIL model agreed well with those measurements in the J-maps along both the CME leading edge and the Sun-Earth line. The ENLIL model also reproduced most of the large-scale structures of the shock propagation and gave the SAT prediction at Earth with an error of 17 hours. The kmTII method predicted the SAT at Earth with an error of 15 hours when using n0 = 4.16 cm⁻³, the ENLIL model plasma density near Earth; but it improved to 2 hours when using n0 = 6.64 cm⁻³, the model density near the CME leading edge at 1 AU.

  7. 46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...

  8. 46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...

  9. 46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...

  10. 46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...

  11. 46 CFR 531.8 - Amendment, correction, cancellation, and electronic transmission errors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., cancellation, and electronic transmission errors. (a) Amendment. (1) NSAs may be amended by mutual agreement of... § 531.5 and Appendix A to this part. (i) Where feasible, NSAs should be amended by amending only the affected specific term(s) or subterms. (ii) Each time any part of an NSA is amended, the filer shall assign...

  12. Validation, Edits, and Application Processing Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    The impact of quality assurance procedures on the correct award of Basic Educational Opportunity Grants (BEOGs) for 1979-1980 was assessed, and a model for detecting error-prone applications early in processing was developed. The Bureau of Student Financial Aid introduced new comments into the edit system in 1979 and expanded the pre-established…

  13. Modic changes in lumbar spine: prevalence and distribution patterns of end plate oedema and end plate sclerosis.

    PubMed

    Xu, Lei; Chu, Bin; Feng, Yang; Xu, Feng; Zou, Yue-Fen

    2016-01-01

    The purpose of this study is to evaluate the distribution of end plate oedema in different types of Modic change especially in mixed type and to analyze the presence of end plate sclerosis in various types of Modic change. 276 patients with low back pain were scanned with 1.5-T MRI. Three radiologists assessed the MR images by T1 weighted, T2 weighted and fat-saturation T2 weighted sequences and classified them according to the Modic changes. Pure oedematous end plate signal changes were classified as Modic Type I; pure fatty end plate changes were classified as Modic Type II; and pure sclerotic end plate changes as Modic Type III. A mixed feature of both Types I and II with predominant oedematous signal change is classified as Modic I-II, and a mixture of Types I and II with predominant fatty change is classified as Modic II-I. Thus, the mixed types can further be subdivided into seven subtypes: Types I-II, Types II-I, Types I-III, Types III-I, Types II-III, Types III-II and Types I-III. During the same period, 52 of 276 patients who underwent CT and MRI were retrospectively reviewed to determine end plate sclerosis. (1) End plate oedema: of the 2760 end plates (276 patients) examined, 302 end plates showed Modic changes, of which 82 end plates showed mixed Modic changes. The mixed Modic changes contain 92.7% of oedematous changes. The mixed types especially Types I-II and Types II-I made up the majority of end plate oedematous changes. (2) End plate sclerosis: 52 of 276 patients were examined by both MRI and CT. Of the 520 end plates, 93 end plates showed Modic changes, of which 34 end plates have shown sclerotic changes in CT images. 11.8% of 34 end plates have shown Modic Type I, 20.6% of 34 end plates have shown Modic Type II, 2.9% of 34 end plates have shown Modic Type III and 64.7% of 34 end plates have shown mixed Modic type. End plate oedema makes up the majority of mixed types especially Types I-II and Types II-I. The end plate sclerosis on CT images may not just mean Modic Type III but does exist in all types of Modic changes, especially in mixed Modic types, and may reflect vertebral body mineralization rather than change in the bone marrow. End plate oedema and end plate sclerosis are present in a large proportion of mixed types.

  14. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Conclusion Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions. PMID:22409837

  15. Evaluation of drug administration errors in a teaching hospital.

    PubMed

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

    Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions.

  16. Quantifying the burden of opioid medication errors in adult oncology and palliative care settings: A systematic review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L

    2016-06-01

    Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.

  17. Iatrogenic Errors during Root Canal Instrumentation Performed by Dental Students

    PubMed Central

    Hendi, Seyedeh Sareh; Karkehabadi, Hamed; Eskandarloo, Amir

    2018-01-01

    Introduction: The present study set out to investigate training quality and its association with the quality of root canal therapy performed by fifth-year dentistry students. Methods and Materials: A total of 432 records of endodontic treatment performed by fifth-year dentistry students qualified for further investigation. Radiographs were assessed by two independent endodontists. Apical transportation, apical perforation, gouging, ledge formation, and the quality of temporary restoration were the error types investigated in the present study. Results: The prevalence of apical transportation, ledge formation, and apical perforation errors was significantly higher in molars than in other types of teeth. The most prevalent type of error was apical transportation, which was significantly more frequent in mandibular teeth. There were no significant differences among teeth for the other types of errors. Conclusion: The quality of training provided to dentistry students should be improved and the endodontic curriculum should be modified. PMID:29692848

  18. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work/each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  19. Genetics Home Reference: distal hereditary motor neuropathy, type II

    MedlinePlus

    Description: Distal hereditary motor neuropathy, type II is a progressive disorder that affects ...

  20. Measures of rowing performance.

    PubMed

    Smith, T Brett; Hopkins, Will G

    2012-04-01

    Accurate measures of performance are important for assessing competitive athletes in practical and research settings. We present here a review of rowing performance measures, focusing on the errors in these measures and the implications for testing rowers. The yardstick for assessing error in a performance measure is the random variation (typical or standard error of measurement) in an elite athlete's competitive performance from race to race: ∼1.0% for time in 2000 m rowing events. There has been little research interest in on-water time trials for assessing rowing performance, owing to logistic difficulties and environmental perturbations in performance time with such tests. Mobile ergometry via instrumented oars or rowlocks should reduce these problems, but the associated errors have not yet been reported. Measurement of boat speed to monitor on-water training performance is common; one device based on global positioning system (GPS) technology contributes negligible extra random error (0.2%) in speed measured over 2000 m, but extra error is substantial (1-10%) with other GPS devices or with an impeller, especially over shorter distances. The problems with on-water testing have led to widespread use of the Concept II rowing ergometer. The standard error of the estimate of on-water 2000 m time predicted by 2000 m ergometer performance was 2.6% and 7.2% in two studies, reflecting different effects of skill, body mass and environment in on-water versus ergometer performance. However, well trained rowers have a typical error in performance time of only ∼0.5% between repeated 2000 m time trials on this ergometer, so such trials are suitable for tracking changes in physiological performance and factors affecting it. Many researchers have used the 2000 m ergometer performance time as a criterion to identify other predictors of rowing performance. Standard errors of the estimate vary widely between studies even for the same predictor, but the lowest errors (~1-2%) have been observed for peak power output in an incremental test, some measures of lactate threshold and measures of 30-second all-out power. Some of these measures also have typical error between repeated tests suitably low for tracking changes. Combining measures via multiple linear regression needs further investigation. In summary, measurement of boat speed, especially with a good GPS device, has adequate precision for monitoring training performance, but adjustment for environmental effects needs to be investigated. Time trials on the Concept II ergometer provide accurate estimates of a rower's physiological ability to output power, and some submaximal and brief maximal ergometer performance measures can be used frequently to monitor changes in this ability. On-water performance measured via instrumented skiffs that determine individual power output may eventually surpass measures derived from the Concept II.
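
    The "typical error" quoted above is conventionally computed from repeated trials as the standard deviation of the difference scores divided by √2, often expressed as a percentage of the mean. The Python sketch below shows that calculation on made-up 2000 m ergometer times; the numbers are illustrative and are not from the review.

        # Typical error from a pair of repeated trials: TE = SD(trial2 - trial1) / sqrt(2),
        # expressed here as a percentage of the mean time. Times are made up.
        import numpy as np

        trial1 = np.array([372.1, 380.4, 365.9, 390.2, 377.8])
        trial2 = np.array([370.8, 382.0, 367.1, 388.9, 379.5])

        diff = trial2 - trial1
        typical_error = diff.std(ddof=1) / np.sqrt(2)
        cv_percent = 100 * typical_error / np.concatenate([trial1, trial2]).mean()
        print(f"typical error = {typical_error:.2f} s ({cv_percent:.2f}% of mean time)")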

  1. Differential impact of methylphenidate and atomoxetine on sustained attention in youth with attention-deficit/hyperactivity disorder.

    PubMed

    Bédard, Anne-Claude V; Stein, Mark A; Halperin, Jeffrey M; Krone, Beth; Rajwan, Estrella; Newcorn, Jeffrey H

    2015-01-01

    This study examined the effects of atomoxetine (ATX) and OROS methylphenidate (MPH) on laboratory measures of inhibitory control and attention in youth with attention-deficit/hyperactivity disorder (ADHD). It was hypothesized that performance would be improved by both treatments, but response profiles would differ because the medications work via different mechanisms. One hundred and two youth (77 male; mean age = 10.5 ± 2.7 years) with ADHD received ATX (1.4 ± 0.5 mg/kg) and MPH (52.4 ± 16.6 mg) in a randomized, double-blind, crossover design. Medication was titrated in 4-6-week blocks separated by a 2-week placebo washout. Inhibitory control and attention measures were obtained at baseline, following washout, and at the end of each treatment using Conners' Continuous Performance Test II (CPT-II), which provided age-adjusted T-scores for reaction time (RT), reaction time variability (RTSD), and errors. Repeated-measures analyses of variance were performed, with Time (premedication, postmedication) and Treatment type (ATX, MPH) entered as within-subject factors. Data from the two treatment blocks were checked for order effects and combined if order effects were not present. Clinicaltrials.gov: NCT00183391. Main effects for Time on RT (p = .03), RTSD (p = .001), and omission errors (p = .01) were significant. A significant Drug × Time interaction indicated that MPH improved RT, RTSD, and omission errors more than ATX (p < .05). Changes in performance with treatment did not correlate with changes in ADHD symptoms. MPH has greater effects than ATX on CPT measures of sustained attention in youth with ADHD. However, the dissociation of cognitive and behavioral change with treatment indicates that CPT measures cannot be considered proxies for symptomatic improvement. Further research on the dissociation of cognitive and behavioral endpoints for ADHD is indicated. © 2014 The Authors. Journal of Child Psychology and Psychiatry. © 2014 Association for Child and Adolescent Mental Health.

  2. Boron Abundances in A and B-type Stars

    NASA Technical Reports Server (NTRS)

    Lambert, David L.

    1997-01-01

    Boron abundances in A- and B-type stars may be a successful way to track evolutionary effects in these hot stars. The light elements - Li, Be, and B - are tracers of exposure to temperatures more moderate than those in which the H-burning CN-cycle operates. Thus, any exposure of surface stellar layers to deeper layers will affect these light element abundances. Li and Be are used in this role in investigations of evolutionary processes in cool stars, but are not observable in hotter stars. An investigation of boron, however, is possible through the B II 1362 A resonance line. We have gathered high resolution spectra from the IUE database of A- and B-type stars near 10 solar masses for which nitrogen abundances have been determined. The B II 1362 A line is blended throughout the temperature range of this program, requiring spectrum syntheses to recover the boron abundances. For no star could we synthesize the 1362 A region using the meteoritic/solar boron abundance of log ε(B) = 2.88; a lower boron abundance was necessary, which may reflect evolutionary effects (e.g., mass loss or mixing near the main-sequence), the natal composition of the star forming regions, or a systematic error in the analyses (e.g., non-LTE effects). Regardless of the initial boron abundance, and despite the possibility of non-LTE effects, it seems clear that boron is severely depleted in some stars. It may be that the nitrogen and boron abundances are anticorrelated, as would be expected from mixing between the H-burning and outer stellar layers. If, as we suspect, a residue of boron is present in the A-type supergiants, we may exclude a scenario in which mixing occurs continuously between the surface and the deep layers operating the CN-cycle. Further exploitation of the B II 1362 A line as an indicator of the evolutionary status of A- and B-type stars will require a larger stellar sample to be observed with higher signal-to-noise, as attainable with the Hubble Space Telescope.

  3. MP estimation applied to platykurtic sets of geodetic observations

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Zbigniew

    2017-06-01

    MP estimation is a method for estimating location parameters when the probabilistic models of the observations differ from the normal distribution in kurtosis or asymmetry. The system of Pearson distributions is the probabilistic basis for the method. So far, the method has been applied and analyzed mostly for leptokurtic or mesokurtic distributions (Pearson distributions of types IV or VII), which predominate in practical cases. Analyses of geodetic and astronomical observations show that we may also deal with data sets that have moderate asymmetry or a small negative excess kurtosis. Asymmetry might result from the influence of many small systematic errors that were not eliminated during preprocessing of the data. The excess kurtosis can be related to a higher or lower frequency (relative to the Hagen hypothesis) of elementary errors close to zero. Considering this, the paper focuses on estimation using the platykurtic Pearson distributions of types I or II. The paper presents the solution of the corresponding optimization problem and its basic properties. Although platykurtic distributions are rare in practice, it is interesting to examine what results MP estimation provides for such observation distributions. The numerical tests presented in the paper are rather limited; however, they allow us to draw some general conclusions.

  4. The efficacy of dexamethasone on reduction in the reoperation rate of chronic subdural hematoma – the DRESH study: straightforward study protocol for a randomized controlled trial

    PubMed Central

    2014-01-01

    Background Chronic subdural hematoma (cSDH) is a common neurosurgical disease. It is often considered to be a rather benign entity. In spite of well-established surgical procedures, cSDH is complicated by a recurrence rate of up to 30%. Although glucocorticoids have been used for the treatment of cSDH since 1962, their role remains controversial for lack of convincing data. On the basis of the established inflammatory cycle in cSDH, dexamethasone may be an ideal substance for a short-lasting, concomitant treatment protocol. Objective: to test the efficacy of dexamethasone in reducing the reoperation rate of cSDH. Methods/Design The study is designed as a double-blind randomized placebo-controlled trial. 820 patients aged 25 years or older who are operated on for cSDH are included after obtaining informed consent. They are randomized to administration of dexamethasone (16-16-12-12-8-4 mg/d) or placebo (maltodextrin) during the first 48 hours after surgery. The type I error is 5% and the type II error is 20%. The primary endpoint is reoperation within 12 weeks postoperatively. Discussion This study tests whether dexamethasone administered over 6 days is a safe and potent agent in relapse prevention for evacuated cSDH. Trial registration EudraCT 201100354442 PMID:24393328
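    For orientation, the sketch below shows the standard two-proportion sample-size calculation that lies behind a statement like "type I error 5%, type II error 20%". The assumed reoperation rates (15% placebo vs. 10% dexamethasone) are hypothetical placeholders, not the DRESH protocol's actual planning assumptions.

        # Normal-approximation sample size per group for comparing two proportions.
        # Rates p1, p2 are illustrative; alpha and power match the abstract.
        from scipy.stats import norm

        def n_per_group(p1, p2, alpha=0.05, power=0.80):
            z_a = norm.ppf(1 - alpha / 2)      # two-sided type I error
            z_b = norm.ppf(power)              # 1 - type II error
            p_bar = (p1 + p2) / 2
            num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                   + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
            return num / (p1 - p2) ** 2

        print(round(n_per_group(0.15, 0.10)))  # patients per arm under these assumptions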

  5. Floquet Weyl semimetals in light-irradiated type-II and hybrid line-node semimetals

    NASA Astrophysics Data System (ADS)

    Chen, Rui; Zhou, Bin; Xu, Dong-Hui

    2018-04-01

    Type-II Weyl semimetals have recently attracted intensive research interest because they host Lorentz-violating Weyl fermions as quasiparticles. The discovery of type-II Weyl semimetals evokes the study of type-II line-node semimetals (LNSMs) whose linear dispersion is strongly tilted near the nodal ring. We present here a study on the circularly polarized light-induced Floquet states in type-II LNSMs, as well as those in hybrid LNSMs that have a partially overtilted linear dispersion in the vicinity of the nodal ring. We illustrate that two distinct types of Floquet Weyl semimetal (WSM) states can be induced in periodically driven type-II and hybrid LNSMs, and the type of Floquet WSMs can be tuned by the direction and intensity of the incident light. We construct phase diagrams of light-irradiated type-II and hybrid LNSMs which are quite distinct from those of light-irradiated type-I LNSMs. Moreover, we show that photoinduced Floquet type-I and type-II WSMs can be characterized by the emergence of different anomalous Hall conductivities.

  6. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
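    The segmented-regression model used in interrupted time-series analyses of this kind estimates a baseline trend, an immediate level change at implementation, and a post-implementation slope change. The sketch below illustrates that model on simulated monthly error rates; the coefficients and noise level are placeholders, not the study's estimates.

        # Illustrative segmented (interrupted time-series) regression:
        #   rate_t = b0 + b1*time + b2*post + b3*time_since_intervention + e_t
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n_pre, n_post = 30, 28
        time = np.arange(n_pre + n_post)
        post = (time >= n_pre).astype(float)
        time_since = np.where(post == 1, time - n_pre + 1, 0.0)

        # Simulated prevented errors per 1000 doses: stable baseline near 16.7,
        # an immediate drop at implementation, then a further downward slope.
        rate = 16.7 - 5.0 * post - 0.3 * time_since + rng.normal(0, 1.0, time.size)

        X = sm.add_constant(np.column_stack([time, post, time_since]))
        fit = sm.OLS(rate, X).fit()
        print(fit.params)  # [baseline level, baseline trend, level change, slope change]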

  7. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  8. Molecular determinants on the insect sodium channel for the specific action of type II pyrethroid insecticides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du Yuzhe; Nomura, Yoshiko; Luo Ningguang

    2009-01-15

    Pyrethroid insecticides are classified as type I or type II based on their distinct symptomology and effects on sodium channel gating. Structurally, type II pyrethroids possess an α-cyano group at the phenylbenzyl alcohol position, which is lacking in type I pyrethroids. Both type I and type II pyrethroids inhibit deactivation, consequently prolonging the opening of sodium channels. However, type II pyrethroids inhibit the deactivation of sodium channels to a greater extent than type I pyrethroids, inducing much slower decay of tail currents upon repolarization. The molecular basis of the type II-specific action, however, is not known. Here we report the identification of a residue, G1111, and two positively charged lysines immediately downstream of G1111 in the intracellular linker connecting domains II and III of the cockroach sodium channel that are specifically involved in the action of type II pyrethroids, but not in the action of type I pyrethroids. Deletion of G1111, a consequence of alternative splicing, reduced the sodium channel sensitivity to type II pyrethroids, but had no effect on channel sensitivity to type I pyrethroids. Interestingly, charge neutralization or charge reversal of the two positively charged lysines (Ks) downstream of G1111 had a similar effect. These results provide molecular insight into the type II-specific interaction of pyrethroids with the sodium channel.

  9. Interplanetary type II radio bursts and their association with CMEs and flares

    NASA Astrophysics Data System (ADS)

    Shanmugaraju, A.; Suresh, K.; Vasanth, V.; Selvarani, G.; Umapathy, S.

    2018-06-01

    We study the characteristics of the CMEs and their association with the end-frequency of interplanetary (IP)-type-II bursts by analyzing a set of 138 events (IP-type-II bursts-flares-CMEs) observed during the period 1997-2012. The present analysis consider only the type II bursts having starting frequency < 14 MHz to avoid the extension of coronal type IIs. The selected events are classified into three groups depending on the end-frequency of type IIs as follows, (A) Higher, (B) Intermediate and (C) Lower end-frequency. We compare characteristics of CMEs, flares and type II burst for the three selected groups of events and report some of the important differences. The observed height of CMEs is compared with the height of IP type IIs estimated using the electron density models. By applying a density multiplier (m) to this model, the density has been constrained both in the upper corona and in the interplanetary medium, respectively as m= 1 to 10 and m = 1 to 3. This study indicates that there is a correlation between the observed CME height and estimated type II height for groups B and C events whereas this correlation is absent in group A. In all the groups (A, B & C), the different heights of CMEs and type II reveal that the type IIs are not only observed at the nose but also at the flank of the CMEs.
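    The abstract does not name the electron-density model used, so purely as an illustration the sketch below converts a type II burst frequency into a shock height with the commonly applied Leblanc et al. (1998) model scaled by a density multiplier m, assuming emission at the fundamental plasma frequency. All numerical choices are assumptions for illustration.

        # Rough height estimate from a type II burst frequency via a density model.
        import numpy as np
        from scipy.optimize import brentq

        def n_e(r, m=1.0):
            """Electron density [cm^-3] at heliocentric distance r [solar radii]."""
            return m * (3.3e5 * r**-2 + 4.1e6 * r**-4 + 8.0e7 * r**-6)

        def f_plasma_MHz(r, m=1.0):
            """Fundamental plasma frequency [MHz] at distance r."""
            return 8.98e-3 * np.sqrt(n_e(r, m))   # 8.98 kHz * sqrt(n_e) -> MHz

        def height_for_frequency(f_MHz, m=1.0):
            """Distance [R_sun] at which the local plasma frequency equals f_MHz."""
            return brentq(lambda r: f_plasma_MHz(r, m) - f_MHz, 1.05, 215.0)

        for m in (1, 3, 10):
            print(m, round(height_for_frequency(1.0, m), 1), "R_sun at 1 MHz")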

  10. PacRIM II: A review of AirSAR operations and system performance

    NASA Technical Reports Server (NTRS)

    Moller, D.; Chu, A.; Lou, Y.; Miller, T.; O'Leary, E.

    2001-01-01

    In this paper we briefly review the AirSAR system, its expected performance, and quality of data obtained during that mission. We discuss the system hardware calibration methodologies, and present quantitative performance values of radar backscatter and interferometric height errors (random and systematic) from PACRIM II calibration data.

  11. Online beam energy measurement of Beijing electron positron collider II linear accelerator

    NASA Astrophysics Data System (ADS)

    Wang, S.; Iqbal, M.; Liu, R.; Chi, Y.

    2016-02-01

    This paper describes online beam energy measurement of Beijing Electron Positron Collider upgraded version II linear accelerator (linac) adequately. It presents the calculation formula, gives the error analysis in detail, discusses the realization in practice, and makes some verification. The method mentioned here measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation, and is much better than the one using the single BPM. The error analysis indicates that this online measurement has further potential usage such as a part of beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.

  12. Online beam energy measurement of Beijing electron positron collider II linear accelerator.

    PubMed

    Wang, S; Iqbal, M; Liu, R; Chi, Y

    2016-02-01

    This paper describes online beam energy measurement of Beijing Electron Positron Collider upgraded version II linear accelerator (linac) adequately. It presents the calculation formula, gives the error analysis in detail, discusses the realization in practice, and makes some verification. The method mentioned here measures the beam energy by acquiring the horizontal beam position with three beam position monitors (BPMs), which eliminates the effect of orbit fluctuation, and is much better than the one using the single BPM. The error analysis indicates that this online measurement has further potential usage such as a part of beam energy feedback system. The reliability of this method is also discussed and demonstrated in this paper.

  13. Solar Type II Radio Bursts and IP Type II Events

    NASA Technical Reports Server (NTRS)

    Cane, H. V.; Erickson, W. C.

    2005-01-01

    We have examined radio data from the WAVES experiment on the Wind spacecraft in conjunction with ground-based data in order to investigate the relationship between the shocks responsible for metric type II radio bursts and the shocks in front of coronal mass ejections (CMEs). The bow shocks of fast, large CMEs are strong interplanetary (IP) shocks, and the associated radio emissions often consist of single broad bands starting below approx. 4 MHz; such emissions were previously called IP type II events. In contrast, metric type II bursts are usually narrowbanded and display two harmonically related bands. In addition to displaying complete dynamic spectra for a number of events, we also analyze the 135 WAVES 1 - 14 MHz slow-drift time periods in 2001-2003. We find that most of the periods contain multiple phenomena, which we divide into three groups: metric type II extensions, IP type II events, and blobs and bands. About half of the WAVES listings include probable extensions of metric type II radio bursts, but in more than half of these events, there were also other slow-drift features. In the 3 yr study period, there were 31 IP type II events; these were associated with the very fastest CMEs. The most common form of activity in the WAVES events, blobs and bands in the frequency range between 1 and 8 MHz, fall below an envelope consistent with the early signatures of an IP type II event. However, most of this activity lasts only a few tens of minutes, whereas IP type II events last for many hours. In this study we find many examples in the radio data of two shock-like phenomena with different characteristics that occur simultaneously in the metric and decametric/hectometric bands, and no clear example of a metric type II burst that extends continuously down in frequency to become an IP type II event. The simplest interpretation is that metric type II bursts, unlike IP type II events, are not caused by shocks driven in front of CMEs.

  14. Directly patching high-level exchange-correlation potential based on fully determined optimized effective potentials

    NASA Astrophysics Data System (ADS)

    Huang, Chen; Chi, Yu-Chieh

    2017-12-01

    The key element in Kohn-Sham (KS) density functional theory is the exchange-correlation (XC) potential. We recently proposed the exchange-correlation potential patching (XCPP) method with the aim of directly constructing high-level XC potential in a large system by patching the locally computed, high-level XC potentials throughout the system. In this work, we investigate the patching of the exact exchange (EXX) and the random phase approximation (RPA) correlation potentials. A major challenge of XCPP is that a cluster's XC potential, obtained by solving the optimized effective potential equation, is only determined up to an unknown constant. Without fully determining the clusters' XC potentials, the patched system's XC potential is "uneven" in the real space and may cause non-physical results. Here, we developed a simple method to determine this unknown constant. The performance of XCPP-RPA is investigated on three one-dimensional systems: H20, H10Li8, and the stretching of the H19-H bond. We investigated two definitions of EXX: (i) the definition based on the adiabatic connection and fluctuation dissipation theorem (ACFDT) and (ii) the Hartree-Fock (HF) definition. With ACFDT-type EXX, effective error cancellations were observed between the patched EXX and the patched RPA correlation potentials. Such error cancellations were absent for the HF-type EXX, which was attributed to the fact that for systems with fractional occupation numbers, the integral of the HF-type EXX hole is not -1. The KS spectra and band gaps from XCPP agree reasonably well with the benchmarks as we make the clusters large.

  15. Type I and II Endometrial Cancers: Have They Different Risk Factors?

    PubMed Central

    Setiawan, Veronica Wendy; Yang, Hannah P.; Pike, Malcolm C.; McCann, Susan E.; Yu, Herbert; Xiang, Yong-Bing; Wolk, Alicja; Wentzensen, Nicolas; Weiss, Noel S.; Webb, Penelope M.; van den Brandt, Piet A.; van de Vijver, Koen; Thompson, Pamela J.; Strom, Brian L.; Spurdle, Amanda B.; Soslow, Robert A.; Shu, Xiao-ou; Schairer, Catherine; Sacerdote, Carlotta; Rohan, Thomas E.; Robien, Kim; Risch, Harvey A.; Ricceri, Fulvio; Rebbeck, Timothy R.; Rastogi, Radhai; Prescott, Jennifer; Polidoro, Silvia; Park, Yikyung; Olson, Sara H.; Moysich, Kirsten B.; Miller, Anthony B.; McCullough, Marjorie L.; Matsuno, Rayna K.; Magliocco, Anthony M.; Lurie, Galina; Lu, Lingeng; Lissowska, Jolanta; Liang, Xiaolin; Lacey, James V.; Kolonel, Laurence N.; Henderson, Brian E.; Hankinson, Susan E.; Håkansson, Niclas; Goodman, Marc T.; Gaudet, Mia M.; Garcia-Closas, Montserrat; Friedenreich, Christine M.; Freudenheim, Jo L.; Doherty, Jennifer; De Vivo, Immaculata; Courneya, Kerry S.; Cook, Linda S.; Chen, Chu; Cerhan, James R.; Cai, Hui; Brinton, Louise A.; Bernstein, Leslie; Anderson, Kristin E.; Anton-Culver, Hoda; Schouten, Leo J.; Horn-Ross, Pamela L.

    2013-01-01

    Purpose Endometrial cancers have long been divided into estrogen-dependent type I and the less common clinically aggressive estrogen-independent type II. Little is known about risk factors for type II tumors because most studies lack sufficient cases to study these much less common tumors separately. We examined whether so-called classical endometrial cancer risk factors also influence the risk of type II tumors. Patients and Methods Individual-level data from 10 cohort and 14 case-control studies from the Epidemiology of Endometrial Cancer Consortium were pooled. A total of 14,069 endometrial cancer cases and 35,312 controls were included. We classified endometrioid (n = 7,246), adenocarcinoma not otherwise specified (n = 4,830), and adenocarcinoma with squamous differentiation (n = 777) as type I tumors and serous (n = 508) and mixed cell (n = 346) as type II tumors. Results Parity, oral contraceptive use, cigarette smoking, age at menarche, and diabetes were associated with type I and type II tumors to similar extents. Body mass index, however, had a greater effect on type I tumors than on type II tumors: odds ratio (OR) per 2 kg/m2 increase was 1.20 (95% CI, 1.19 to 1.21) for type I and 1.12 (95% CI, 1.09 to 1.14) for type II tumors (Pheterogeneity < .0001). Risk factor patterns for high-grade endometrioid tumors and type II tumors were similar. Conclusion The results of this pooled analysis suggest that the two endometrial cancer types share many common etiologic factors. The etiology of type II tumors may, therefore, not be completely estrogen independent, as previously believed. PMID:23733771

  16. Adaptive clinical trial designs for European marketing authorization: a survey of scientific advice letters from the European Medicines Agency.

    PubMed

    Elsäßer, Amelie; Regnstrom, Jan; Vetter, Thorsten; Koenig, Franz; Hemmings, Robert James; Greco, Martina; Papaluca-Amati, Marisa; Posch, Martin

    2014-10-02

    Since the first methodological publications on adaptive study design approaches in the 1990s, the application of these approaches in drug development has raised increasing interest among academia, industry and regulators. The European Medicines Agency (EMA) as well as the Food and Drug Administration (FDA) have published guidance documents addressing the potentials and limitations of adaptive designs in the regulatory context. Since there is limited experience in the implementation and interpretation of adaptive clinical trials, early interaction with regulators is recommended. The EMA offers such interactions through scientific advice and protocol assistance procedures. We performed a text search of scientific advice letters issued between 1 January 2007 and 8 May 2012 that contained relevant key terms. Letters containing questions related to adaptive clinical trials in phases II or III were selected for further analysis. From the selected letters, important characteristics of the proposed design and its context in the drug development program, as well as the responses of the Committee for Human Medicinal Products (CHMP)/Scientific Advice Working Party (SAWP), were extracted and categorized. For 41 more recent procedures (1 January 2009 to 8 May 2012), additional details of the trial design and the CHMP/SAWP responses were assessed. In addition, case studies are presented as examples. Over a range of 5½ years, 59 scientific advices were identified that address adaptive study designs in phase II and phase III clinical trials. Almost all were proposed as confirmatory phase III or phase II/III studies. The most frequently proposed adaptation was sample size reassessment, followed by dropping of treatment arms and population enrichment. While 12 (20%) of the 59 proposals for an adaptive clinical trial were not accepted, the great majority of proposals were accepted (15, 25%) or conditionally accepted (32, 54%). In the more recent 41 procedures, the most frequent concerns raised by CHMP/SAWP were insufficient justifications of the adaptation strategy, type I error rate control and bias. For the majority of proposed adaptive clinical trials, an overall positive opinion was given albeit with critical comments. Type I error rate control, bias and the justification of the design are common issues raised by the CHMP/SAWP.
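    One generic way that type I error rate control is preserved under sample-size reassessment, often discussed in this context, is a weighted inverse-normal combination of stage-wise p-values with pre-specified weights. The sketch below is background illustration only; it is not the EMA's guidance nor any applicant's specific method, and the weights and p-values are examples.

        # Weighted inverse-normal combination test for a two-stage adaptive design.
        # Weights must be fixed before stage 2; values here are illustrative.
        from scipy.stats import norm

        def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
            """Combine stage-wise one-sided p-values; w1 + w2 = 1."""
            z = (w1 ** 0.5) * norm.ppf(1 - p1) + (w2 ** 0.5) * norm.ppf(1 - p2)
            return 1 - norm.cdf(z)          # combined one-sided p-value

        # Rejection at one-sided alpha = 0.025 is valid even if the stage-2 sample
        # size was changed at the interim look, because the weights stay fixed.
        print(inverse_normal_combination(0.04, 0.03) < 0.025)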

  17. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  18. Fault detection and isolation in motion monitoring system.

    PubMed

    Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B

    2012-01-01

    Pervasive computing has become a very active research field. A watch that traces human movement can record motion boundaries and reveal social-life patterns from a person's localized visiting areas. Pervasive computing also supports patient monitoring: a daily monitoring system aids longitudinal studies of conditions such as Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, but only with a large number of deployed sensors. In this paper, we present faulty sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see details in Section V). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.

  19. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
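    For reference, the classical Benjamini-Hochberg (1995) linear step-up procedure that the article uses as a comparator can be written in a few lines; the sketch below is a minimal generic implementation, with example p-values that are not from the article.

        # Benjamini-Hochberg linear step-up procedure controlling the FDR at level q.
        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            """Return a boolean rejection mask controlling the FDR at level q."""
            p = np.asarray(pvals, dtype=float)
            m = p.size
            order = np.argsort(p)
            thresholds = q * np.arange(1, m + 1) / m
            below = p[order] <= thresholds
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= i*q/m
                reject[order[:k + 1]] = True       # reject all hypotheses up to rank k
            return reject

        print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))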

  20. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
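    As a small, self-contained illustration of the point about LiDAR-type errors (not the paper's adjustment formulas or data), the sketch below simulates heights observed with multiplicative noise, h_obs = h_true * (1 + e), contrasts them with additive noise of the same sigma, and shows how the per-cell error scales with height and feeds into a volume estimate. The grid, surface, and 2% noise level are arbitrary assumptions.

        # Multiplicative vs. additive error on an idealized "landslide" surface.
        import numpy as np

        rng = np.random.default_rng(42)
        x, y = np.meshgrid(np.linspace(-1.0, 1.0, 200), np.linspace(-1.0, 1.0, 200))
        cell_area = (2.0 / 199) ** 2
        h_true = 50.0 * np.exp(-(x**2 + y**2) / 0.2)     # idealized surface [m]

        sigma = 0.02                                     # 2% relative error
        h_mult = h_true * (1.0 + sigma * rng.standard_normal(h_true.shape))
        h_add = h_true + sigma * rng.standard_normal(h_true.shape)

        # Under the multiplicative model the per-cell error scales with the true
        # height, so high cells dominate the uncertainty of the volume estimate.
        tall, low = h_true > 25.0, h_true <= 25.0
        print("error std, tall cells (mult vs add):",
              (h_mult - h_true)[tall].std(), (h_add - h_true)[tall].std())
        print("error std, low cells  (mult vs add):",
              (h_mult - h_true)[low].std(), (h_add - h_true)[low].std())
        print("volume (true, mult, add):",
              [round(float(np.sum(h) * cell_area), 2) for h in (h_true, h_mult, h_add)])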

  1. Scan Line Difference Compression Algorithm Simulation Study.

    DTIC Science & Technology

    1985-08-01

    introduced during the signal transmission process. [Figure A-1: Overall data compression process - block diagram showing the SLDC encoder (image source, conditioning, error control encoder) and the SLDC decoder (error control decoder, reconstruction).] ... of noise, or an effective channel coding subsystem providing the necessary error control.

  2. Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…

  3. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavitt, Ania S.; Bylaska, Eric J.; Tratnyek, Paul G.

    As described in the main text, we classified our voltammograms into four types. For phenols, most compounds were type I or type II, except four phenols that were type III (4-nitrophenol, 4-cyanophenol, DNOC, and 4-hydroxyacetophenone) and two phenols that were type IV (4-aminophenol and dopamine). Almost all of the compounds gave the same type by SCV and SWV, except for 2,4-dinitrophenol (whose current went up and down and therefore could be considered a type II or III), 4-cyanophenol (which fell into type III for SCV, but whose current went up and down in SWV (type II or III)), and 4-hydroxyacetophenone (which was a type III in SCV, but a type II in SWV). The majority of the anilines were type I except for p-toluidine (type II) and 4-methyl-3-nitroaniline and 2-methoxy-5-nitroaniline (both were type I for SWV, but for SCV fell into type III and type II respectively).

  5. Oxidation potentials of phenols and anilines: correlation analysis of electrochemical and theoretical values

    DOE PAGES

    Pavitt, Ania S.; Bylaska, Eric J.; Tratnyek, Paul G.

    2017-02-10

    As described in the main text, we classified our voltammograms into four types. For phenols, most compounds were type I or type II, except four phenols that were type III (4-nitrophenol, 4-cyanophenol, DNOC, and 4-hydroxyacetophenone) and two phenols that were type IV (4-aminophenol and dopamine). Almost all of the compounds gave the same type by SCV and SWV, except for 2,4-dinitrophenol (whose current went up and down and therefore could be considered a type II or III), 4-cyanophenol (which fell into type III for SCV, but whose current went up and down in SWV (type II or III)), and 4-hydroxyacetophenone (which was a type III in SCV, but a type II in SWV). The majority of the anilines were type I except for p-toluidine (type II) and 4-methyl-3-nitroaniline and 2-methoxy-5-nitroaniline (both were type I for SWV, but for SCV fell into type III and type II respectively).

  6. Morphological analysis of red blood cells by polychromatic interference microscopy of thin films

    NASA Astrophysics Data System (ADS)

    Dyachenko, A. A.; Malinova, L. I.; Ryabukho, V. P.

    2016-11-01

    Red blood cell (RBC) distribution width (RDW) is a promising hematological parameter with broad applications in clinical practice; in various studies RDW has been shown to be associated with an increased risk of heart failure (HF) in the general population, and it predicts mortality and other major adverse events in HF patients. In this report a new method of RDW measurement is presented. It is based on interference color analysis of red blood cells in a blood smear and subsequent measurement of their optical thickness. Descriptive statistics of the RBC optical thickness distribution in a blood smear were used for RDW estimation in every studied sample. The proposed method is considered to avoid type II errors and to minimize the variability of measured RDW.

  7. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present.
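    The kind of simulation described above is easy to reproduce in outline: draw both groups from the same skewed "sum score" distribution, remove per-group outliers beyond |Z| = 2, run the t test, and tally rejections. The sketch below is a generic illustration with assumed parameters, not the authors' simulation code.

        # Type I error of the independent-samples t test after |Z| > 2 outlier removal,
        # under the null (both groups drawn from the same skewed distribution).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2014)

        def one_replication(n=50):
            a = rng.binomial(15, 0.2, n).astype(float)   # same population for both groups,
            b = rng.binomial(15, 0.2, n).astype(float)   # so every rejection is a Type I error
            a = a[np.abs(stats.zscore(a)) <= 2]          # common (problematic) outlier removal
            b = b[np.abs(stats.zscore(b)) <= 2]
            return stats.ttest_ind(a, b).pvalue < 0.05

        rate = np.mean([one_replication() for _ in range(5000)])
        print("empirical Type I error rate:", rate)      # typically above the nominal 5%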

  8. Past and current perspective on new therapeutic targets for Type-II diabetes.

    PubMed

    Patil, Pradip D; Mahajan, Umesh B; Patil, Kalpesh R; Chaudhari, Sandip; Patil, Chandragouda R; Agrawal, Yogeeta O; Ojha, Shreesh; Goyal, Sameer N

    2017-01-01

    Loss of pancreatic β-cell function is a hallmark of Type-II diabetes mellitus (DM). It is a chronic metabolic disorder that results from defects in both insulin secretion and insulin action. Recently, the United Kingdom Prospective Diabetes Study reported that Type-II DM is a progressive disorder. Although DM can be treated initially by monotherapy with an oral agent, it may eventually require multiple drugs. Additionally, insulin therapy is needed in many patients to achieve glycemic control. Pharmacological approaches are unsatisfactory in improving the consequences of insulin resistance. A single therapeutic approach to the treatment of Type-II DM is usually unsuccessful, and combination therapy is commonly adopted. Increased understanding of the biochemical, cellular and pathological alterations in Type-II DM has provided new insight into the management of Type-II DM. Knowledge of the underlying mechanisms of Type-II DM development is essential for the exploration of novel therapeutic targets. The present review provides insight into the therapeutic targets of Type-II DM and their role in the development of insulin resistance. An overview of important signaling pathways and mechanisms in Type-II DM is provided for a better understanding of disease pathology. This review includes case studies of drugs that have been withdrawn from the market. The experience gathered from previous studies and knowledge of Type-II DM pathways can guide anti-diabetic drug development toward the discovery of clinically viable drugs that are useful in Type-II DM.

  9. Hepatic glucose output in humans measured with labeled glucose to reduce negative errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, J.C.; Brown, G.; Matthews, D.R.

    Steele and others have suggested that minimizing changes in glucose specific activity when estimating hepatic glucose output (HGO) during glucose infusions could reduce non-steady-state errors. This approach was assessed in nondiabetic and type II diabetic subjects during constant low dose (27 μmol.kg ideal body wt (IBW)-1.min-1) glucose infusion followed by a 12 mmol/l hyperglycemic clamp. Eight subjects had paired tests with and without labeled infusions. Labeled infusion was used to compare HGO in 11 nondiabetic and 15 diabetic subjects. Whereas unlabeled infusions produced negative values for endogenous glucose output, labeled infusions largely eliminated this error and reduced the dependence of the Steele model on the pool fraction in the paired tests. By use of labeled infusions, 11 nondiabetic subjects suppressed HGO from 10.2 +/- 0.6 (SE) fasting to 0.8 +/- 0.9 μmol.kg IBW-1.min-1 after 90 min of glucose infusion and to -1.9 +/- 0.5 μmol.kg IBW-1.min-1 after 90 min of a 12 mmol/l glucose clamp, but 15 diabetic subjects suppressed only partially from 13.0 +/- 0.9 fasting to 5.7 +/- 1.2 at the end of the glucose infusion and 5.6 +/- 1.0 μmol.kg IBW-1.min-1 in the clamp (P = 0.02, 0.002, and less than 0.001, respectively).

  10. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation.

    PubMed

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T

    2013-05-14

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be "well designed"--in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian "size principle"; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel.

  11. Rational integration of noisy evidence and prior semantic expectations in sentence interpretation

    PubMed Central

    Gibson, Edward; Bergen, Leon; Piantadosi, Steven T.

    2013-01-01

    Sentence processing theories typically assume that the input to our language processing mechanisms is an error-free sequence of words. However, this assumption is an oversimplification because noise is present in typical language use (for instance, due to a noisy environment, producer errors, or perceiver errors). A complete theory of human sentence comprehension therefore needs to explain how humans understand language given imperfect input. Indeed, like many cognitive systems, language processing mechanisms may even be “well designed”–in this case for the task of recovering intended meaning from noisy utterances. In particular, comprehension mechanisms may be sensitive to the types of information that an idealized statistical comprehender would be sensitive to. Here, we evaluate four predictions about such a rational (Bayesian) noisy-channel language comprehender in a sentence comprehension task: (i) semantic cues should pull sentence interpretation towards plausible meanings, especially if the wording of the more plausible meaning is close to the observed utterance in terms of the number of edits; (ii) this process should asymmetrically treat insertions and deletions due to the Bayesian “size principle”; such nonliteral interpretation of sentences should (iii) increase with the perceived noise rate of the communicative situation and (iv) decrease if semantically anomalous meanings are more likely to be communicated. These predictions are borne out, strongly suggesting that human language relies on rational statistical inference over a noisy channel. PMID:23637344

  12. Evaluating Application Resilience with XRay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Sui; Bronevetsky, Greg; Li, Bin

    2015-05-07

    The rising count and shrinking feature size of transistors within modern computers are making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.

  13. On the source conditions for herringbone structure in type II solar radio bursts

    NASA Technical Reports Server (NTRS)

    Cane, H. V.; White, S. M.

    1989-01-01

    An investigation is made of the correlation of the occurrence of the herringbone phenomenon in type II solar radio bursts with various flare properties. It is shown that herringbone is strongly correlated with the intensity of the type II burst: whereas about 21 percent of all type II bursts show herringbone, about 60 percent of the most intense bursts contain herringbone. This fact can explain most of the correlations between herringbone and other properties such as intense type III bursts, type IV emission, and high type II starting frequencies. It is also shown that when this is taken into account, there is no need to postulate two classes of type II burst in order to explain why there appears to be a difference in herringbone occurrence between the set of type II bursts associated with the leading edges of coronal mass ejections, and those not so associated. It is argued that the data are consistent with the idea that all coronal type II bursts are due to blast waves from flares.

  14. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  15. [Treatment options for nystagmus].

    PubMed

    Tegetmeyer, H

    2015-02-01

    The goal of treatment for nystagmus is to reduce or to abolish the typical symptoms associated with nystagmus. These are (i) reduction of visual acuity (and amblyopia in infantile nystagmus), (ii) abnormal head posture (with possible secondary changes of the cervical spine) and (iii) oscillopsia (often connected with vertigo and disorders of gait and orientation). Treatment strategies include pharmacological treatment, surgical therapy and optical devices. Choice of treatment depends on the type of nystagmus and its characteristics. The following surgical procedures were successfully used as treatment of selected symptoms: (i) unilateral recess-resect surgery of the dominant eye in infantile esotropia with latent nystagmus for the relief of abnormal head posture, (ii) Kestenbaum operation of both eyes in infantile nystagmus syndrome with eccentric null zone and abnormal head posture, (iii) recess-resect surgery to produce artificial exophoria in infantile nystagmus syndrome. PHARMACOLOGICAL TREATMENT: Depending on the pathophysiology of different types of nystagmus, several drugs were effective in clinical application (off-label use): (i) gabapentin (non-selective GABAergic and anti-glutamatergic effect): up to 2400 mg/d in infantile nystagmus, acquired pendular nystagmus and oculopalatal tremor, (ii) memantine (anti-glutamatergic effect): dosage up to 40 mg/d in infantile nystagmus, also in acquired pendular nystagmus and oculopalatal tremor, (iii) baclofen (GABA-B-receptor agonist): 3 × 5-10 mg/d in periodic alternating nystagmus and in upbeat nystagmus, (iv) 4-aminopyridine (non-selective blocker of voltage-gated potassium channels): 3 × 5 mg/d or 1-2 × 10 mg fampridine in downbeat nystagmus and upbeat nystagmus, (v) acetazolamide (carbonic anhydrase inhibitor): in hereditary episodic ataxia type 2. OPTICAL DEVICES: (i) Contact lenses are used in infantile nystagmus in order to overcome negative effects of eye glasses in abnormal head posture, lateral gaze, and higher refractive errors, (ii) spectacle prisms are useful to induce an artificial exophoria (base-out prisms) or to shift an eccentric null zone (base in direction of head posture) of infantile nystagmus with abnormal head posture, (iii) low vision aids may be necessary and should be prescribed according to magnification requirements.

  16. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for the diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of the four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. Zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that Poisson model inflated type I error, while negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors, which were close to the nominal level and reasonable power. Reasonable control of type I error was associated with ANCOVA model. Rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with ZIP and ZINB models.
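    The core of the comparison described above can be illustrated with a small simulation: generate overdispersed count data with no true treatment effect, fit Poisson and negative binomial models, and tally how often each rejects. The sketch below is a generic illustration with assumed parameters, not the paper's bootstrap of real trial data.

        # Type I error of Poisson vs. negative binomial GLMs on overdispersed null data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n_per_arm, n_sims = 100, 500
        rej_pois = rej_nb = 0

        for _ in range(n_sims):
            # Overdispersed counts (mean 2), identical distribution in both arms.
            y = rng.negative_binomial(n=0.5, p=0.5 / (0.5 + 2.0), size=2 * n_per_arm)
            treat = np.repeat([0.0, 1.0], n_per_arm)
            X = sm.add_constant(treat)
            p_pois = sm.GLM(y, X, family=sm.families.Poisson()).fit().pvalues[1]
            p_nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=2.0)).fit().pvalues[1]
            rej_pois += p_pois < 0.05
            rej_nb += p_nb < 0.05

        print("Poisson type I error:", rej_pois / n_sims)      # typically inflated
        print("Neg. binomial type I error:", rej_nb / n_sims)  # typically near 0.05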

  17. PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.

    PubMed

    Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer

    2015-01-01

    Visual impairment due to refractive errors is one of the most common problems among school-age children and is the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal, it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types of refractive error. The study was conducted to find the frequency of different types of refractive error among madrassa students aged 5-20 years in Haripur. This cross-sectional study included 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors and the types of error were noted; after screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error, followed by hyperopia (28.4%) and astigmatism (19%). This study shows that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.

  18. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt.16, 055403 (2014)JOOPDB0150-536X10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
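    To make the slope-to-phase step concrete, the sketch below performs a generic least-squares zonal reconstruction of a wavefront from noisy x/y slopes on a small square grid using forward differences, up to an unobservable piston term. It is an illustration of the general estimation problem only, not the Southwell geometry or the Pathak-Boruah algorithm; grid size, noise level, and the test wavefront are assumptions.

        # Generic least-squares zonal wavefront reconstruction from slope measurements.
        import numpy as np

        rng = np.random.default_rng(0)
        N, d = 8, 1.0                           # grid size and subaperture spacing
        yy, xx = np.mgrid[0:N, 0:N] * d
        w_true = 0.05 * (xx**2 - yy**2)         # astigmatism-like test wavefront

        # Noisy slope "measurements" between neighbouring phase samples.
        sx = np.diff(w_true, axis=1) / d + 0.001 * rng.standard_normal((N, N - 1))
        sy = np.diff(w_true, axis=0) / d + 0.001 * rng.standard_normal((N - 1, N))

        def idx(i, j):
            return i * N + j

        # Equations (w[i,j+1]-w[i,j])/d = sx[i,j] and (w[i+1,j]-w[i,j])/d = sy[i,j].
        A, s = [], []
        for i in range(N):
            for j in range(N - 1):
                row = np.zeros(N * N); row[idx(i, j + 1)] = 1 / d; row[idx(i, j)] = -1 / d
                A.append(row); s.append(sx[i, j])
        for i in range(N - 1):
            for j in range(N):
                row = np.zeros(N * N); row[idx(i + 1, j)] = 1 / d; row[idx(i, j)] = -1 / d
                A.append(row); s.append(sy[i, j])

        w_hat = np.linalg.lstsq(np.array(A), np.array(s), rcond=None)[0].reshape(N, N)
        w_hat += w_true.mean() - w_hat.mean()   # fix the free piston term for comparison
        print("RMS reconstruction error:", float(np.sqrt(np.mean((w_hat - w_true) ** 2))))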

  19. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache includes an entity of headache associated with refractive errors (HARE), but indicates that its importance is widely overestimated. To compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P =.02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints and did so primarily by decreasing the frequency of headache episodes.

  20. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of reviewers rejecting the papers identifying this error. Reviewers who did not reject the papers found fewer errors and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of study. Short training packages have only a slight impact on improving error detection.

  1. Two-stage phase II oncology designs using short-term endpoints for early stopping.

    PubMed

    Kunz, Cornelia U; Wason, James Ms; Kieser, Meinhard

    2017-08-01

    Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with a binary endpoint. As for all designs with an interim analysis, its efficiency strongly depends on the relation between the recruitment rate and the follow-up time required to measure the patients' outcomes. Usually, recruitment is paused once the first-stage sample size is reached, until the outcomes of all first-stage patients are available. This can considerably lengthen the trial and thereby delay the drug development process. We propose a design where an intermediate endpoint is used in the interim analysis to decide whether or not the study is continued with a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size as well as trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
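
    The operating characteristics mentioned above (type I error rate, power, expected sample size) can be made concrete with a small calculation for the classical single-endpoint two-stage design. The sketch below is a generic Simon-type calculation for intuition only, not the intermediate-endpoint design proposed in the paper; the design parameters and response rates are illustrative assumptions.

```python
from scipy.stats import binom

def two_stage_oc(n1, r1, n, r, p):
    """Operating characteristics of a single-arm two-stage design with a
    binary endpoint: stop after stage 1 if responses <= r1; otherwise
    enrol to n patients and declare the treatment promising if responses > r."""
    n2 = n - n1
    pet = binom.cdf(r1, n1, p)                       # prob. of early termination
    p_reject = sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n2, p)
                   for x1 in range(r1 + 1, n1 + 1))  # prob. of declaring promising
    e_n = n1 + (1 - pet) * n2                        # expected sample size
    return p_reject, pet, e_n

# Illustrative parameters (not from the paper): p0 = 0.20 uninteresting,
# p1 = 0.40 promising, with n1 = 13, r1 = 3, n = 43, r = 12.
alpha, _, en0 = two_stage_oc(13, 3, 43, 12, p=0.20)   # type I error and E[N] under p0
power, _, _ = two_stage_oc(13, 3, 43, 12, p=0.40)     # power under p1
print(round(alpha, 3), round(power, 3), round(en0, 1))
```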

  2. A Matched Project Evaluation of Modern Programming Practices. Volume II. Scientific Report on the ASTROS Plan.

    DTIC Science & Technology

    1980-02-01

    formula for predicting the number of errors during system testing. The equation he presents is B = V/E_CRIT, where B is the number of errors ... expected, V is the volume, and E_CRIT is "the mean number of elementary discriminations between potential errors in programming" (p. 85). E_CRIT can also ... prediction of delivered bugs is: B = V/E_CRIT ... 2.3 McCabe's Complexity Metric. Thomas McCabe (1976) defined complexity in relation to
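
    For readers unfamiliar with the metric referenced in this excerpt, the sketch below computes Halstead's program volume and the corresponding delivered-bug estimate B = V/E_CRIT. The value E_CRIT = 3000 is the constant commonly attributed to Halstead; the excerpt itself does not state a value, so treat it and the operator/operand counts as assumptions.

```python
import math

def halstead_volume(distinct_operators, distinct_operands,
                    total_operators, total_operands):
    """Halstead volume V = N * log2(eta), where N is the program length
    and eta is the vocabulary size."""
    N = total_operators + total_operands
    eta = distinct_operators + distinct_operands
    return N * math.log2(eta)

def predicted_bugs(volume, e_crit=3000.0):
    """Delivered-bug estimate B = V / E_CRIT (E_CRIT = 3000 assumed)."""
    return volume / e_crit

V = halstead_volume(20, 35, 300, 250)   # illustrative counts
print(round(V, 1), round(predicted_bugs(V), 2))
```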

  3. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  4. Implementing New Non-Chromate Coatings Systems (Briefing Charts)

    DTIC Science & Technology

    2011-02-09

    Initiate Cr6+ authorization process for continued Cr6+ use using the form, Authorization to Use Hexavalent Chromium. YES / NO ... Approval of ... Aluminum and magnesium anodizing; hard chrome plating; Type II conversion coating on aluminum alloys under chromated primer; Type II conversion coating ... [Briefing-chart residue: percentage breakdown of hexavalent chromium elimination by coating type (Type II, Type III, Type IC), a 50%/50% fatigue-critical split between Type II and Type IC, and FRC-SE (JAX) fully integrated FRC.]

  5. Graduate Students' Administration and Scoring Errors on the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.

    2009-01-01

    The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…

  6. Solid rocket booster performance evaluation model. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    1974-01-01

    This users manual for the solid rocket booster performance evaluation model (SRB-II) contains descriptions of the model, the program options, the required program inputs, the program output format and the program error messages. SRB-II is written in FORTRAN and is operational on both the IBM 370/155 and the MSFC UNIVAC 1108 computers.

  7. Generalized interferometry - I: theory for interstation correlations

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; Stehly, Laurent; Ermert, Laura; Boehm, Christian

    2017-02-01

    We develop a general theory for interferometry by correlation that (i) properly accounts for heterogeneously distributed sources of continuous or transient nature, (ii) fully incorporates any type of linear and nonlinear processing, such as one-bit normalization, spectral whitening and phase-weighted stacking, (iii) operates for any type of medium, including 3-D elastic, heterogeneous and attenuating media, (iv) enables the exploitation of complete correlation waveforms, including seemingly unphysical arrivals, and (v) unifies the earthquake-based two-station method and ambient noise correlations. Our central theme is not to equate interferometry with Green function retrieval, and to extract information directly from processed interstation correlations, regardless of their relation to the Green function. We demonstrate that processing transforms the actual wavefield sources and actual wave propagation physics into effective sources and effective wave propagation. This transformation is uniquely determined by the processing applied to the observed data, and can be easily computed. The effective forward model, that links effective sources and propagation to synthetic interstation correlations, may not be perfect. A forward modelling error, induced by processing, describes the extent to which processed correlations can actually be interpreted as proper correlations, that is, as resulting from some effective source and some effective wave propagation. The magnitude of the forward modelling error is controlled by the processing scheme and the temporal variability of the sources. Applying adjoint techniques to the effective forward model, we derive finite-frequency Fréchet kernels for the sources of the wavefield and Earth structure, that should be inverted jointly. The structure kernels depend on the sources of the wavefield and the processing scheme applied to the raw data. Therefore, both must be taken into account correctly in order to make accurate inferences on Earth structure. Not making any restrictive assumptions on the nature of the wavefield sources, our theory can be applied to earthquake and ambient noise data, either separately or combined. This allows us (i) to locate earthquakes using interstation correlations and without knowledge of the origin time, (ii) to unify the earthquake-based two-station method and noise correlations without the need to exclude either of the two data types, and (iii) to eliminate the requirement to remove earthquake signals from noise recordings prior to the computation of correlation functions. In addition to the basic theory for acoustic wavefields, we present numerical examples for 2-D media, an extension to the most general viscoelastic case, and a method for the design of optimal processing schemes that eliminate the forward modelling error completely. This work is intended to provide a comprehensive theoretical foundation of full-waveform interferometry by correlation, and to suggest improvements to current passive monitoring methods.
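
    As a concrete illustration of one of the processing steps named above, the sketch below computes an interstation correlation after one-bit normalization of both traces. It is a minimal numerical example, not the full theory of effective sources and forward-modelling errors developed in the paper; the function names, lag convention, and toy data are assumptions.

```python
import numpy as np

def one_bit(trace):
    """One-bit normalization: keep only the sign of each sample."""
    return np.sign(trace)

def interstation_correlation(u_a, u_b, max_lag):
    """Cross-correlate two equally long station recordings after one-bit
    normalization, returning lags from -max_lag to +max_lag samples."""
    a, b = one_bit(np.asarray(u_a, float)), one_bit(np.asarray(u_b, float))
    full = np.correlate(a, b, mode="full")   # zero lag sits at index len(b) - 1
    mid = len(b) - 1
    return full[mid - max_lag: mid + max_lag + 1]

# Toy example: the same wavelet arrives 5 samples later at station B.
t = np.arange(200)
wavelet = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
u_a = wavelet + 0.1 * np.random.default_rng(0).normal(size=t.size)
u_b = np.roll(wavelet, 5) + 0.1 * np.random.default_rng(1).normal(size=t.size)
corr = interstation_correlation(u_a, u_b, max_lag=20)
print(np.argmax(corr) - 20)   # lag of the correlation peak (about -5 here)
```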

  8. Analyzing students’ errors on fractions in the number line

    NASA Astrophysics Data System (ADS)

    Widodo, S.; Ikhwanudin, T.

    2018-05-01

    The objective of this study was to identify the types of errors students make when they deal with fractions on the number line. This study used a qualitative, descriptive method and involved 31 sixth-grade students at a primary school in Purwakarta, Indonesia. The results show four types of student errors: unit confusion, tick-mark interpretation errors, partitioning and un-partitioning errors, and estimation errors. We recommend that teachers strengthen students' understanding of the unit when studying fractions, help students understand tick-mark interpretation, remind students of the importance of partitioning and un-partitioning strategies, and teach effective estimation strategies.

  9. Outcomes from ovarian cancer screening in the PLCO trial: Histologic heterogeneity impacts detection, overdiagnosis and survival.

    PubMed

    Temkin, Sarah M; Miller, Eric A; Samimi, Goli; Berg, Christine D; Pinsky, Paul; Minasian, Lori

    2017-12-01

    A mortality benefit from screening for ovarian cancer has never been demonstrated. The aim of this study was to evaluate the screening outcomes for different histologic subtypes of ovarian cancers. Women in the screening arm of the Prostate, Lung, Colorectal and Ovarian Screening Trial underwent CA-125 and transvaginal ultrasound annually for 3-5 years. We compared screening test characteristics (including overdiagnosis) and outcomes by tumour type (type II versus other) and study arm (screening versus usual care). Of 78,215 women randomised, 496 women were diagnosed with ovarian cancer. Of the tumours that were characterised (n = 413; 83%), 74% (n = 305) were type II versus 26% other (n = 108). Among screened patients, 70% of tumours were type II compared to 78% in usual care (p = 0.09). Within the screening arm, 29% of type II tumours were screen detected compared to 54% of the others (p < 0.01). The sensitivity of screening was 65% for type II tumours versus 86% for other types (p = 0.02). 15% of type II screen-detected tumours were stage I/II, compared to 81% of other tumours (p < 0.01). The overdiagnosis rate was lower for type II compared to other tumours (28.2% versus 72.2%; p < 0.01). Ovarian cancer-specific survival was worse for type II tumours compared to others (p < 0.01). Survival was similar for type II (p = 0.74) or other types (p = 0.32) regardless of study arm. Test characteristics of screening for ovarian cancer differed for type II tumours compared to other ovarian tumours. Type II tumours were less likely to be screen diagnosed, early stage at diagnosis or overdiagnosed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Errors analysis of problem solving using the Newman stage after applying cooperative learning of TTW type

    NASA Astrophysics Data System (ADS)

    Rr Chusnul, C.; Mardiyana, S., Dewi Retno

    2017-12-01

    Problem solving is the basis of mathematics learning. Problem solving teaches us to clarify an issue coherently in order to avoid misunderstanding information. Mistakes in problem solving may arise from misunderstanding the issue, choosing a wrong concept, or misapplying a concept. The problem-solving test was carried out after students were given treatment through cooperative learning of the TTW type. The purpose of this study was to describe students' problem-solving errors after learning with cooperative learning of the TTW type. Newman stages were used to identify the problem-solving errors. The research used a descriptive method to find out students' problem-solving errors. The subjects were 10th-grade students of a Vocational Senior High School (SMK). Tests and interviews were conducted for data collection. The results characterize, by Newman stage, the problem-solving errors students made after learning with cooperative learning of the TTW type.

  11. The Kinetic Mechanism for Cytochrome P450 Metabolism of Type II Binding Compounds: Evidence Supporting Direct Reduction

    PubMed Central

    Pearson, Joshua; Dahal, Upendra P.; Rock, Daniel; Peng, Chi-Chi; Schenk, James O.; Joswig-Jones, Carolyn; Jones, Jeffrey P.

    2011-01-01

    The metabolic stability of a drug is an important property that should be optimized during drug design and development. Nitrogen incorporation is hypothesized to increase the stability by coordination of nitrogen to the heme iron of cytochrome P450, a binding mode that is referred to as type II binding. However, we noticed that the type II binding compound 1 has less metabolic stability under subsaturating conditions than a closely related type I binding compound 3. Three kinetic models will be presented for type II binder metabolism: 1) dead-end type II binding, 2) a rapid equilibrium between type I and II binding modes before reduction, and 3) direct reduction of the type II coordinated heme. Data will be presented on reduction rates of iron, the off rates of substrate (using surface plasmon resonance) and the catalytic rate constants. These data argue against the dead-end and rapid-equilibrium models, leaving the direct reduction kinetic mechanism for metabolism of the type II binding compound 1. PMID:21530484

  12. From Cholesterogenesis to Steroidogenesis: Role of Riboflavin and Flavoenzymes in the Biosynthesis of Vitamin D

    PubMed Central

    Pinto, John T.; Cooper, Arthur J. L.

    2014-01-01

    Flavin-dependent monooxygenases and oxidoreductases are located at critical branch points in the biosynthesis and metabolism of cholesterol and vitamin D. These flavoproteins function as obligatory intermediates that accept 2 electrons from NAD(P)H with subsequent 1-electron transfers to a variety of cytochrome P450 (CYP) heme proteins within the mitochondrial matrix (type I) and the (microsomal) endoplasmic reticulum (type II). The mode of electron transfer in these systems differs slightly in the number and form of the flavin prosthetic moiety. In the type I mitochondrial system, FAD-adrenodoxin reductase interfaces with adrenodoxin before electron transfer to CYP heme proteins. In the microsomal type II system, a diflavin (FAD/FMN)-dependent cytochrome P450 oxidoreductase [NAD(P)H-cytochrome P450 reductase (CPR)] donates electrons to a multitude of heme oxygenases. Both flavoenzyme complexes exhibit a commonality of function with all CYP enzymes and are crucial for maintaining a balance of cholesterol and vitamin D metabolites. Deficits in riboflavin availability, imbalances in the intracellular ratio of FAD to FMN, and mutations that affect flavin binding domains and/or interactions with client proteins result in marked structural alterations within the skeletal and central nervous systems similar to those of disorders (inborn errors) in the biosynthetic pathways that lead to cholesterol, steroid hormones, and vitamin D and their metabolites. Studies of riboflavin deficiency during embryonic development demonstrate congenital malformations similar to those associated with genetic alterations of the flavoenzymes in these pathways. Overall, a deeper understanding of the role of riboflavin in these pathways may prove essential to targeted therapeutic designs aimed at cholesterol and vitamin D metabolism. PMID:24618756

  13. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports

    PubMed Central

    Rieger, Martina; Bart, Victoria K. E.

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports, 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further, in copy typing compared with free typing, attention to the template is required, leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID:28018256

  14. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports.

    PubMed

    Rieger, Martina; Bart, Victoria K E

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports, 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further, in copy typing compared with free typing, attention to the template is required, leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing.

  15. On the accuracy of stratospheric aerosol extinction derived from in situ size distribution measurements and surface area density derived from remote SAGE II and HALOE extinction measurements

    DOE PAGES

    Kovilakam, Mahesh; Deshler, Terry

    2015-08-26

    In situ stratospheric aerosol measurements, from University of Wyoming optical particle counters (OPCs), are compared with Stratospheric Aerosol Gas Experiment (SAGE) II (versions 6.2 and 7.0) and Halogen Occultation Experiment (HALOE) satellite measurements to investigate differences between SAGE II/HALOE-measured extinction and derived surface area and OPC-derived extinction and surface area. Coincident OPC and SAGE II measurements are compared for a volcanic (1991-1996) and nonvolcanic (1997-2005) period. OPC calculated extinctions agree with SAGE II measurements, within instrumental uncertainty, during the volcanic period, but have been a factor of 2 low during the nonvolcanic period. Three systematic errors associated with the OPC measurements, anisokineticity, inlet particle evaporation, and counting efficiency, were investigated. An overestimation of the OPC counting efficiency is found to be the major source of systematic error. With this correction OPC calculated extinction increases by 15-30% (30-50%) for the volcanic (nonvolcanic) measurements. These changes significantly improve the comparison with SAGE II and HALOE extinctions in the nonvolcanic cases but slightly degrade the agreement in the volcanic period. These corrections have impacts on OPC-derived surface area density, exacerbating the poor agreement between OPC and SAGE II (version 6.2) surface areas. Furthermore, this disparity is reconciled with SAGE II version 7.0 surface areas. For both the volcanic and nonvolcanic cases these changes in OPC counting efficiency and in the operational SAGE II surface area algorithm leave the derived surface areas from both platforms in significantly better agreement and within the ± 40% precision of the OPC moment calculations.

  16. On the accuracy of stratospheric aerosol extinction derived from in situ size distribution measurements and surface area density derived from remote SAGE II and HALOE extinction measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kovilakam, Mahesh; Deshler, Terry

    In situ stratospheric aerosol measurements, from University of Wyoming optical particle counters (OPCs), are compared with Stratospheric Aerosol Gas Experiment (SAGE) II (versions 6.2 and 7.0) and Halogen Occultation Experiment (HALOE) satellite measurements to investigate differences between SAGE II/HALOE-measured extinction and derived surface area and OPC-derived extinction and surface area. Coincident OPC and SAGE II measurements are compared for a volcanic (1991-1996) and nonvolcanic (1997-2005) period. OPC calculated extinctions agree with SAGE II measurements, within instrumental uncertainty, during the volcanic period, but have been a factor of 2 low during the nonvolcanic period. Three systematic errors associated with the OPC measurements, anisokineticity, inlet particle evaporation, and counting efficiency, were investigated. An overestimation of the OPC counting efficiency is found to be the major source of systematic error. With this correction OPC calculated extinction increases by 15-30% (30-50%) for the volcanic (nonvolcanic) measurements. These changes significantly improve the comparison with SAGE II and HALOE extinctions in the nonvolcanic cases but slightly degrade the agreement in the volcanic period. These corrections have impacts on OPC-derived surface area density, exacerbating the poor agreement between OPC and SAGE II (version 6.2) surface areas. Furthermore, this disparity is reconciled with SAGE II version 7.0 surface areas. For both the volcanic and nonvolcanic cases these changes in OPC counting efficiency and in the operational SAGE II surface area algorithm leave the derived surface areas from both platforms in significantly better agreement and within the ± 40% precision of the OPC moment calculations.

  17. Assessment: transcranial Doppler ultrasonography: report of the Therapeutics and Technology Assessment Subcommittee of the American Academy of Neurology.

    PubMed

    Sloan, M A; Alexandrov, A V; Tegeler, C H; Spencer, M P; Caplan, L R; Feldmann, E; Wechsler, L R; Newell, D W; Gomez, C R; Babikian, V L; Lefkowitz, D; Goldman, R S; Armon, C; Hsu, C Y; Goodin, D S

    2004-05-11

    To review the use of transcranial Doppler ultrasonography (TCD) and transcranial color-coded sonography (TCCS) for diagnosis. The authors searched the literature for evidence of 1) if TCD provides useful information in specific clinical settings; 2) if using this information improves clinical decision making, as reflected by improved patient outcomes; and 3) if TCD is preferable to other diagnostic tests in these clinical situations. TCD is of established value in the screening of children aged 2 to 16 years with sickle cell disease for stroke risk (Type A, Class I) and the detection and monitoring of angiographic vasospasm after spontaneous subarachnoid hemorrhage (Type A, Class I to II). TCD and TCCS provide important information and may have value for detection of intracranial steno-occlusive disease (Type B, Class II to III), vasomotor reactivity testing (Type B, Class II to III), detection of cerebral circulatory arrest/brain death (Type A, Class II), monitoring carotid endarterectomy (Type B, Class II to III), monitoring cerebral thrombolysis (Type B, Class II to III), and monitoring coronary artery bypass graft operations (Type B to C, Class II to III). Contrast-enhanced TCD/TCCS can also provide useful information in right-to-left cardiac/extracardiac shunts (Type A, Class II), intracranial occlusive disease (Type B, Class II to IV), and hemorrhagic cerebrovascular disease (Type B, Class II to IV), although other techniques may be preferable in these settings.

  18. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  19. Orbital Fibroblasts From Thyroid Eye Disease Patients Differ in Proliferative and Adipogenic Responses Depending on Disease Subtype

    PubMed Central

    Kuriyan, Ajay E.; Woeller, Collynn F.; O'Loughlin, Charles W.; Phipps, Richard P.; Feldon, Steven E.

    2013-01-01

    Purpose. Thyroid eye disease (TED) patients are classified as type I (predominantly fat compartment enlargement) or type II (predominantly extraocular muscle enlargement) based on orbital imaging. Orbital fibroblasts (OFs) can be driven to proliferate or differentiate into adipocytes in vitro. We tested the hypothesis that type I OFs undergo more adipogenesis than type II OFs, whereas type II OFs proliferate more than type I OFs. We also examined the effect of cyclooxygenase (COX) inhibitors on OF adipogenesis and proliferation. Methods. Type I, type II, and non-TED OFs were treated with transforming growth factor-beta (TGFβ) to induce proliferation and with 15-deoxy-Δ−12,14-prostaglandin J2 (15d-PGJ2) to induce adipogenesis. Proliferation was measured using the [3H]thymidine assay, and adipogenesis was measured using the AdipoRed assay, Oil Red O staining, and flow cytometry. The effect of COX inhibition on adipogenesis and proliferation was also studied. Results. Type II OFs incorporated 1.7-fold more [3H]thymidine than type I OFs (P < 0.05). Type I OFs accumulated 4.8-fold more lipid than type II OFs (P < 0.05) and 12.6-fold more lipid than non-TED OFs (P < 0.05). Oil Red O staining and flow cytometry also demonstrated increased adipogenesis in type I OFs compared to type II and non-TED OFs. Cyclooxygenase inhibition significantly decreased proliferation and adipogenesis in type II OFs, but not type I OFs. Conclusions. We have demonstrated that OFs from TED patients have heterogeneous responses to proproliferative and proadipogenic stimulators in vitro in a manner that corresponds to their different clinical manifestations. Furthermore, we demonstrated a differential effect of COX inhibitors on type I and type II OF proliferation and adipogenesis. PMID:24135759

  20. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
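
    The feature-extraction pipeline described in this abstract (block partition, type-II DCT, JPEG-style zigzag selection, concatenation) can be sketched directly. The block size and number of retained coefficients below are illustrative, and the ROI is assumed to be a 2-D grayscale array already normalized for rotation, position, and illumination.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(roi, block=8, n_coeffs=15):
    """Apply a type-II DCT to each block and keep the first n_coeffs
    low-to-medium-frequency coefficients, concatenated over all blocks."""
    h = (roi.shape[0] // block) * block
    w = (roi.shape[1] // block) * block
    zz = zigzag_indices(block)[:n_coeffs]
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(roi[r:r + block, c:c + block], type=2, norm="ortho")
            feats.extend(coeffs[i, j] for i, j in zz)
    return np.asarray(feats)

roi = np.random.default_rng(0).random((128, 128))   # stand-in for a palm ROI
print(block_dct_features(roi).shape)                 # 16 * 16 blocks * 15 = (3840,)
```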

  1. From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E

    NASA Astrophysics Data System (ADS)

    Berthier, Denis

    In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.

  2. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  3. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
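
    As a toy illustration of the ARQ principle surveyed here, the sketch below implements stop-and-wait retransmission with CRC-32 error detection. CRC-32 stands in for the linear block codes mentioned in the abstract, and the channel model, function names, and retry budget are assumptions made for the example.

```python
import random
import zlib

def noisy_channel(frame: bytes, p_err: float = 0.3) -> bytes:
    """Toy channel that flips one bit of the frame with probability p_err."""
    frame = bytearray(frame)
    if random.random() < p_err:
        frame[random.randrange(len(frame))] ^= 0x01
    return bytes(frame)

def send_with_arq(payload: bytes, channel, max_tries: int = 5):
    """Stop-and-wait ARQ: append a CRC-32, retransmit until the receiver's
    check passes (ACK) or the retry budget is exhausted."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for attempt in range(1, max_tries + 1):
        received = channel(frame)
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:   # error detection
            return data, attempt
        # detected error -> NAK -> sender retransmits the same frame
    raise RuntimeError("frame not delivered error-free within retry budget")

print(send_with_arq(b"telemetry block", noisy_channel))
```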

  4. 76 FR 50509 - Self-Regulatory Organizations; NASDAQ OMX PHLX LLC; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-15

    ... Change To Correct a Typographical Error in Exchange Rule 1080 August 9, 2011. Pursuant to Section 19(b)(1... Rule 1080 (Phlx XL and XL II) to correct a typographical error. The text of the proposed rule change is... in subsection (m)(iii)(D) of Rule 1080. On July 13, 2011, the Exchange filed an immediately effective...

  5. Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors

    NASA Technical Reports Server (NTRS)

    Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.

    1991-01-01

    The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.

  6. SARS-CoV replicates in primary human alveolar type II cell cultures but not in type I-like cells

    PubMed Central

    Mossel, Eric C.; Wang, Jieru; Jeffers, Scott; Edeen, Karen E.; Wang, Shuanglin; Cosgrove, Gregory P.; Funk, C. Joel; Manzer, Rizwan; Miura, Tanya A.; Pearson, Leonard D.; Holmes, Kathryn V.; Mason, Robert J.

    2008-01-01

    Severe acute respiratory syndrome (SARS) is a disease characterized by diffuse alveolar damage. We isolated alveolar type II cells and maintained them in a highly differentiated state. Type II cell cultures supported SARS-CoV replication as evidenced by RT-PCR detection of viral subgenomic RNA and an increase in virus titer. Virus titers were maximal by 24 hours and peaked at approximately 10(5) pfu/mL. Two cell types within the cultures were infected. One cell type was type II cells, which were positive for SP-A, SP-C, cytokeratin, a type II cell-specific monoclonal antibody, and Ep-CAM. The other cell type was composed of spindle-shaped cells that were positive for vimentin and collagen III and were likely fibroblasts. Viral replication was not detected in type I-like cells or macrophages. Hence, differentiated adult human alveolar type II cells were infectible, but alveolar type I-like cells and alveolar macrophages did not support productive infection. PMID:18022664

  7. Substrate water exchange in photosystem II depends on the peripheral proteins.

    PubMed

    Hillier, W; Hendry, G; Burnap, R L; Wydrzynski, T

    2001-12-14

    The (18)O exchange rates for the substrate water bound in the S(3) state were determined in different photosystem II sample types using time-resolved mass spectrometry. The samples included thylakoid membranes, salt-washed Triton X-100-prepared membrane fragments, and purified core complexes from spinach and cyanobacteria. For each sample type, two kinetically distinct isotopic exchange rates could be resolved, indicating that the biphasic exchange behavior for the substrate water is inherent to the O(2)-evolving catalytic site in the S(3) state. However, the fast phase of exchange became somewhat slower (by a factor of approximately 2) in NaCl-washed membrane fragments and core complexes from spinach in which the 16- and 23-kDa extrinsic proteins have been removed, compared with the corresponding rate for the intact samples. For CaCl(2)-washed membrane fragments in which the 33-kDa manganese stabilizing protein (MSP) has also been removed, the fast phase of exchange slowed down even further (by a factor of approximately 3). Interestingly, the slow phase of exchange was little affected in the samples from spinach. For core complexes prepared from Synechocystis PCC 6803 and Synechococcus elongatus, the fast and slow exchange rates were variously affected. Nevertheless, within the experimental error, nearly the same exchange rates were measured for thylakoid samples made from wild type and an MSP-lacking mutant of Synechocystis PCC 6803. This result could indicate that the MSP has a slightly different function in eukaryotic organisms compared with prokaryotic organisms. In all samples, however, the differences in the exchange rates are relatively small. Such small differences are unlikely to arise from major changes in the metal-ligand structure at the catalytic site. Rather, the observed differences may reflect subtle long range effects in which the exchange reaction coordinates become slightly altered. We discuss the results in terms of solvent penetration into photosystem II and the regional dielectric around the catalytic site.

  8. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea for all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper outlines the most commonly reported incident types and proposes a new, simple classification system for eMMS medication errors that can inform organisations and vendors about possible eMMS improvements. What are the implications for practitioners? The results of the present study highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training, and the reporting and monitoring of errors.

  9. How common are cognitive errors in cases presented at emergency medicine resident morbidity and mortality conferences?

    PubMed

    Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett

    2018-06-20

    Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.

  10. Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria

    ERIC Educational Resources Information Center

    Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji

    2013-01-01

    The study attempted to find out the types of errors committed, and the influence of gender on the types of errors committed, by senior secondary school physics students in Ilorin metropolis. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…

  11. Error Types and Error Positions in Neglect Dyslexia: Comparative Analyses in Neglect Patients and Healthy Controls

    ERIC Educational Resources Information Center

    Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca

    2012-01-01

    Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to…

  12. Temporal-difference prediction errors and Pavlovian fear conditioning: role of NMDA and opioid receptors.

    PubMed

    Cole, Sindy; McNally, Gavan P

    2007-10-01

    Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
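
    For readers unfamiliar with prediction-error learning, the sketch below shows the trial-level special case (a Rescorla-Wagner-style delta rule) producing the blocking effect exploited in Stages I and II. Full temporal-difference learning computes the error within trials over time steps, which is what the serial compound in Stage III manipulates; the learning rate and trial counts here are illustrative.

```python
alpha = 0.3                      # learning rate (illustrative)
V = {"CSA": 0.0, "CSB": 0.0}     # associative strengths

def trial(cues, outcome):
    prediction = sum(V[c] for c in cues)
    delta = outcome - prediction          # prediction error
    for c in cues:
        V[c] += alpha * delta
    return delta

for _ in range(50):                       # Stage I: CSA -> shock
    trial(["CSA"], 1.0)
for _ in range(50):                       # Stage II: CSA+CSB -> shock
    trial(["CSA", "CSB"], 1.0)            # error already small, so CSB learns little
print({k: round(v, 2) for k, v in V.items()})   # CSA near 1.0, CSB near 0.0: blocking
```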

  13. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
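
    The two-step logic of factor score regression can be sketched as follows: estimate factor scores from the indicators, then regress the outcome on those scores. The sketch uses scikit-learn's FactorAnalysis (whose transform returns posterior-mean, i.e. regression-method, scores) on simulated data; it illustrates plain regression FSR only, not the bias-avoiding or bias-correcting adjustments compared in the article, and all data and loadings are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
eta = rng.normal(size=(500, 1))                                   # latent predictor
X = eta @ np.array([[0.8, 0.7, 0.6]]) + rng.normal(scale=0.5, size=(500, 3))
y = (0.5 * eta).ravel() + rng.normal(scale=1.0, size=500)         # structural effect 0.5

# Step 1: estimate factor scores from the indicators.
scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X)
# Step 2: regress the outcome on the estimated scores.
# Note: the naive two-step estimate differs from the structural effect
# (and its sign is only identified up to the arbitrary sign of the factor).
print(LinearRegression().fit(scores, y).coef_)
```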

  14. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patients' care during hospitalization. Only a few studies have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. In a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information, including indication, type, volume and rate of fluid administration, was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on a literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume or rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in medical wards. Improving health-care workers' knowledge of and attention to these errors is essential for preventing medication errors related to fluid therapy.

  15. Alveolar type II cell-fibroblast interactions, synthesis and secretion of surfactant and type I collagen.

    PubMed

    Griffin, M; Bhandari, R; Hamilton, G; Chan, Y C; Powell, J T

    1993-06-01

    During alveolar development and alveolar repair close contacts are established between fibroblasts and lung epithelial cells through gaps in the basement membrane. Using co-culture systems we have investigated whether these close contacts influence synthesis and secretion of the principal surfactant apoprotein (SP-A) by cultured rat lung alveolar type II cells and the synthesis and secretion of type I collagen by fibroblasts. The alveolar type II cells remained cuboidal and grew in colonies on fibroblast feeder layers and on Matrigel-coated cell culture inserts but were progressively more flattened on fixed fibroblast monolayers and plastic. Alveolar type II cells cultured on plastic released almost all their SP-A into the medium by 4 days. Alveolar type II cells cultured on viable fibroblasts or Matrigel-coated inserts above fibroblasts accumulated SP-A in the medium at a constant rate for the first 4 days, and probably recycle SP-A by endocytosis. The amount of mRNA for SP-A was very low after 4 days of culture of alveolar type II cells on plastic, Matrigel-coated inserts or fixed fibroblast monolayers: relatively, the amount of mRNA for SP-A was increased 4-fold after culture of alveolar type II cells on viable fibroblasts. Co-culture of alveolar type II cells with confluent human dermal fibroblasts stimulated by 2- to 3-fold the secretion of collagen type I into the culture medium, even after the fibroblasts' growth had been arrested with mitomycin C. Collagen secretion, by fibroblasts, also was stimulated 2-fold by conditioned medium from alveolar type II cells cultured on Matrigel. The amount of mRNA for type I collagen increased only modestly when fibroblasts were cultured in this conditioned medium. This stimulation of type I collagen secretion diminished as the conditioned medium was diluted out, but at high dilutions further stimulation occurred, indicating that a factor that inhibited collagen secretion also was being diluted out. The conditioned medium contained low levels of IGF-1 and the stimulation of type I collagen secretion was abolished when the conditioned medium was pre-incubated with antibodies to insulin-like growth factor 1 (IGF-1). There are important reciprocal interactions between alveolar type II cells and fibroblasts in co-culture. Direct contacts between alveolar type II cells and fibroblasts appear to have a trophic effect on cultured alveolar type II cells, increasing the levels of mRNA for SP-A. Rat lung alveolar type II cells appear to release a factor (possibly IGF-1) that stimulates type I collagen secretion by fibroblasts.

  16. Excitonic transitions in highly efficient (GaIn)As/Ga(AsSb) type-II quantum-well structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gies, S.; Kruska, C.; Berger, C.

    2015-11-02

    The excitonic transitions of the type-II (GaIn)As/Ga(AsSb) gain medium of a “W”-laser structure are characterized experimentally by modulation spectroscopy and analyzed using microscopic quantum theory. On the basis of the very good agreement between the measured and calculated photoreflectivity, the type-I or type-II character of the observable excitonic transitions is identified. Whereas the energetically lowest three transitions exhibit type-II character, the subsequent energetically higher transitions possess type-I character with much stronger dipole moments. Despite the type-II character, the quantum-well structure exhibits a bright luminescence.

  17. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    PubMed

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates used were 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the negative binomial distribution fit the data best up to 85 DAE (Phase I), and from then on the Poisson distribution fit best (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is ∼39 samples for Phase I and 31 for Phase II. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
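
    The decision lines reported above can be turned directly into a sampling rule: after n sample units, compare the cumulative count of attacked squares with the phase-specific boundaries. Which boundary corresponds to "apply control" versus "no control needed" is an assumption of this illustration, since the abstract gives the equations but not the labels.

```python
def decision_lines(n, phase):
    """Lower (S0) and upper (S1) decision boundaries after n sample units."""
    if phase == 1:                                        # up to 85 DAE, negative binomial fit
        return -5.1743 + 0.5730 * n, 5.1743 + 0.5730 * n
    return -4.2479 + 0.5771 * n, 4.2479 + 0.5771 * n      # Phase II, Poisson fit

def decide(cum_attacked, n, phase):
    s0, s1 = decision_lines(n, phase)
    if cum_attacked <= s0:
        return "stop sampling: infestation below threshold (assumed: no control needed)"
    if cum_attacked >= s1:
        return "stop sampling: threshold exceeded (assumed: apply control)"
    return "continue sampling"

print(decide(cum_attacked=10, n=20, phase=1))   # between the lines -> keep sampling
print(decide(cum_attacked=18, n=20, phase=2))   # above S1 -> stop
```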

  18. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performance and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to misspecification of the cluster size model in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate its validity in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
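
    The within-cluster resampling procedure evaluated here has a simple algorithmic core: repeatedly draw one observation per cluster, fit an ordinary model to the resulting independent data, and average the estimates across resamples. The sketch below assumes a pandas DataFrame with a binary outcome and fits a logistic GLM; the column names, model family, and number of resamples are illustrative, and the proper within-cluster-resampling variance adjustment is not shown.

```python
import numpy as np
import statsmodels.api as sm

def wcr_estimate(df, cluster_col, y_col, x_cols, n_resamples=500, seed=0):
    """Within-cluster resampling: average coefficients over repeated
    one-observation-per-cluster subsamples."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_resamples):
        sub = df.groupby(cluster_col, group_keys=False).sample(
            n=1, random_state=int(rng.integers(2**31 - 1)))
        X = sm.add_constant(sub[x_cols])
        fit = sm.GLM(sub[y_col], X, family=sm.families.Binomial()).fit()
        coefs.append(fit.params.to_numpy())
    return np.mean(coefs, axis=0)   # point estimate; WCR variance not shown
```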

  19. Identifying disease polymorphisms from case-control genetic association data.

    PubMed

    Park, L

    2010-12-01

    In case-control association studies, it is typical to observe several associated polymorphisms in a gene region. Often the most significantly associated polymorphism is considered to be the disease polymorphism; however, it is not clear whether it is the disease polymorphism or there is more than one disease polymorphism in the gene region. Currently, there is no method that can handle these problems based on the linkage disequilibrium (LD) relationship between polymorphisms. To distinguish real disease polymorphisms from markers in LD, a method that can detect disease polymorphisms in a gene region has been developed. Relying on the LD between polymorphisms in controls, the proposed method utilizes model-based likelihood ratio tests to find disease polymorphisms. This method shows reliable Type I and Type II error rates when sample sizes are large enough, and works better with re-sequenced data. Applying this method to fine mapping using re-sequencing or dense genotyping data would provide important information regarding the genetic architecture of complex traits.

  20. Numeric and morphological verification of lumbosacral segments in 8280 consecutive patients.

    PubMed

    Paik, Nam Chull; Lim, Chun Soo; Jang, Ho Suk

    2013-05-01

    An analysis of imaging data. To investigate concurrent numeric and morphological variations of presacral vertebrae and to propose a modified designation for the lumbosacral transitional vertebra (LSTV). During the assessment of the lumbosacral vertebra, variations from typical anatomy (numeric, morphological, or both) may confuse the practitioner, potentially leading to significant clinical errors. Common practice, which involves counting cephalad from the presumed fifth lumbar vertebra, may result in inaccurate localization of lumbosacral levels. The study group was composed of 8280 consecutive patients who underwent both lumbar magnetic resonance imaging with cervicothoracic scanning and lumbar radiographical examinations. The presacral vertebral number was verified by counting caudally from C2, with cross-referencing of cervicothoracic and lumbar sagittal scans on a picture archiving and communication system workstation. After correlating the numbering on the magnetic resonance images with those on the radiographs, the lumbosacral junction was classified according to the Castellvi's method. Of the 8280 consecutive patients, 214 (2.6%) had 4 lumbar vertebrae (L4), 7384 (89.2%) had 5 lumbar vertebrae (L5), and 682 (8.2%) had 6 lumbar vertebrae (L6). Overall, 877 (10.6%) patients had LSTV of types II, III, or IV, including 439 (5.3%) with sacralized L5 vertebra and 438 (5.3%) with lumbarized S1 vertebra. The most common LSTV was L5-type vertebra with a unilateral type II transition, designated as L5IIa, in 222 (2.7%) patients. The second most common LSTV was L6-type vertebra with a bilateral type III transition in 174 (2.1%) patients that was designated as L6IIIb. Only 6945 (83.9%) of the population were modal type, with 5 lumbar vertebrae without transitional vertebra. All the 214 (2.6%) L4-type and 244 (2.9%) of the 682 L6-type patients presented with no transitional vertebra, looking like a modal L5-type patient. Spine physicians and radiologists should consider the possibility of both numeric and morphological variations when evaluating lumbosacral spine images.

  1. Whole-brain MRI phenotyping in dysplasia-related frontal lobe epilepsy.

    PubMed

    Hong, Seok-Jun; Bernhardt, Boris C; Schrader, Dewi S; Bernasconi, Neda; Bernasconi, Andrea

    2016-02-16

    To perform whole-brain morphometry in patients with frontal lobe epilepsy and evaluate the utility of group-level patterns for individualized diagnosis and prognosis. We compared MRI-based cortical thickness and folding complexity between 2 frontal lobe epilepsy cohorts with histologically verified focal cortical dysplasia (FCD) (13 type I; 28 type II) and 41 closely matched controls. Pattern learning algorithms evaluated the utility of group-level findings to predict histologic FCD subtype, the side of the seizure focus, and postsurgical seizure outcome in single individuals. Relative to controls, FCD type I displayed multilobar cortical thinning that was most marked in ipsilateral frontal cortices. Conversely, type II showed thickening in temporal and postcentral cortices. Cortical folding also diverged, with increased complexity in prefrontal cortices in type I and decreases in type II. Group-level findings successfully guided automated FCD subtype classification (type I: 100%; type II: 96%), seizure focus lateralization (type I: 92%; type II: 86%), and outcome prediction (type I: 92%; type II: 82%). FCD subtypes relate to diverse whole-brain structural phenotypes. While cortical thickening in type II may indicate delayed pruning, a thin cortex in type I likely results from combined effects of seizure excitotoxicity and the primary malformation. Group-level patterns have a high translational value in guiding individualized diagnostics. © 2016 American Academy of Neurology.

  2. Type II endoleak after endovascular abdominal aortic aneurysm repair: a conservative approach with selective intervention is safe and cost-effective.

    PubMed

    Steinmetz, Eric; Rubin, Brian G; Sanchez, Luis A; Choi, Eric T; Geraghty, Patrick J; Baty, Jack; Thompson, Robert W; Flye, M Wayne; Hovsepian, David M; Picus, Daniel; Sicard, Gregorio A

    2004-02-01

    The conservative versus therapeutic approach to type II endoleak after endovascular repair of abdominal aortic aneurysm (EVAR) has been controversial. The purpose of this study was to evaluate the safety and cost-effectiveness of the conservative approach of embolizing type II endoleak only when persistent for more than 6 months and associated with aneurysm sac growth of 5 mm or more. Data for 486 consecutive patients who underwent EVAR were analyzed for incidence and outcome of type II endoleaks. Spiral computed tomography (CT) scans were reviewed, and patient outcome was evaluated at either office visit or telephone contact. Patients with new or late-appearing type II endoleak were evaluated with spiral CT at 6-month intervals to evaluate both persistence of the endoleak and size of the aneurysm sac. Persistent (≥6 months) type II endoleak and aneurysm sac growth of 5 mm or greater were treated with either translumbar glue or coil embolization of the lumbar source, or transarterial coil embolization of the inferior mesenteric artery. Type II endoleaks were detected in 90 (18.5%) patients. With a mean follow-up of 21.7 ± 16 months, only 35 (7.2%) patients had type II endoleak that persisted for 6 months or longer. Aneurysm sac enlargement was noted in 5 patients, representing 1% of the total series. All 5 patients underwent successful translumbar sac embolization (n = 4) or transarterial inferior mesenteric artery embolization (n = 4) at a mean follow-up of 18.2 ± 8.0 months, with no recurrence or aneurysm sac growth. No patient with treated or untreated type II endoleak has had rupture of the aneurysm. The mean global cost for treatment of persistent type II endoleak associated with aneurysm sac growth was US $6695.50 (hospital cost plus physician reimbursement). Treatment in the 30 patients with persistent type II endoleak but no aneurysm sac growth would have represented an additional cost of US $200,000 or more. The presence or absence of a type II endoleak did not affect survival (78% vs 73%) at 48 months. Selective intervention to treat type II endoleak that persists for 6 months and is associated with aneurysm enlargement seems to be both safe and cost-effective. Longer follow-up will determine whether this conservative approach to management of type II endoleak is the standard of care.

  3. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
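
    As a rough illustration of the technique comparison described above, the sketch below runs Fisher's exact test on a 2x2 table of delivered fractions with and without a reported error for each technique. The split of fractions between techniques is hypothetical (the abstract reports only totals), so the counts are illustrative rather than the study's data.

```python
from scipy.stats import fisher_exact

# hypothetical 2x2 table: [errors, error-free fractions] for each technique
imrt = [19, 60_000 - 19]           # illustrative IMRT fractions
conv = [136, 181_546 - 136]        # illustrative 3D/conventional fractions

odds_ratio, p_value = fisher_exact([imrt, conv], alternative="two-sided")
print(f"IMRT error rate:            {imrt[0] / sum(imrt):.3%}")
print(f"3D/conventional error rate: {conv[0] / sum(conv):.3%}")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```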

  4. Biosorption of cobalt(II) with sunflower biomass from aqueous solutions in a fixed bed column and neural networks modelling.

    PubMed

    Oguz, Ensar; Ersoy, Muhammed

    2014-01-01

    The effects of inlet cobalt(II) concentration (20-60 ppm), feed flow rate (8-19 ml/min) and bed height (5-15 cm), initial solution pH (3-5) and particle size (0.25

  5. Multiple imputation of missing fMRI data in whole brain analysis

    PubMed Central

    Vaden, Kenneth I.; Gebregziabher, Mulugeta; Kuchinsky, Stefanie E.; Eckert, Mark A.

    2012-01-01

    Whole brain fMRI analyses rarely include the entire brain because of missing data that result from data acquisition limits and susceptibility artifact, in particular. This missing data problem is typically addressed by omitting voxels from analysis, which may exclude brain regions that are of theoretical interest and increase the potential for Type II error at cortical boundaries or Type I error when spatial thresholds are used to establish significance. Imputation could significantly expand statistical map coverage, increase power, and enhance interpretations of fMRI results. We examined multiple imputation for group level analyses of missing fMRI data using methods that leverage the spatial information in fMRI datasets for both real and simulated data. Available case analysis, neighbor replacement, and regression based imputation approaches were compared in a general linear model framework to determine the extent to which these methods quantitatively (effect size) and qualitatively (spatial coverage) increased the sensitivity of group analyses. In both real and simulated data analysis, multiple imputation provided 1) variance that was most similar to estimates for voxels with no missing data, 2) fewer false positive errors in comparison to mean replacement, and 3) fewer false negative errors in comparison to available case analysis. Compared to the standard analysis approach of omitting voxels with missing data, imputation methods increased brain coverage in this study by 35% (from 33,323 to 45,071 voxels). In addition, multiple imputation increased the size of significant clusters by 58% and number of significant clusters across statistical thresholds, compared to the standard voxel omission approach. While neighbor replacement produced similar results, we recommend multiple imputation because it uses an informed sampling distribution to deal with missing data across subjects that can include neighbor values and other predictors. Multiple imputation is anticipated to be particularly useful for 1) large fMRI data sets with inconsistent missing voxels across subjects and 2) addressing the problem of increased artifact at ultra-high field, which significantly limits the extent of whole brain coverage and interpretations of results. PMID:22500925
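
    A schematic of the impute-then-pool idea for voxel data, using scikit-learn's IterativeImputer with posterior sampling and Rubin-style pooling of a group mean; the array shapes, number of imputations, and use of other voxels as predictors are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
n_subjects, n_voxels = 30, 50
data = rng.normal(size=(n_subjects, n_voxels))       # subjects x voxels (e.g., beta values)
data[rng.random(data.shape) < 0.1] = np.nan           # ~10% missing voxels

M = 5                                                  # number of imputations
means, variances = [], []
for m in range(M):
    imputer = IterativeImputer(sample_posterior=True, random_state=m, max_iter=10)
    completed = imputer.fit_transform(data)            # other voxels act as predictors
    voxel = completed[:, 0]                             # group-level estimate for one voxel
    means.append(voxel.mean())
    variances.append(voxel.var(ddof=1) / n_subjects)    # squared SE of the group mean

# Rubin's rules: pooled estimate, within- and between-imputation variance
q_bar = np.mean(means)
W = np.mean(variances)
B = np.var(means, ddof=1)
T = W + (1 + 1 / M) * B
print(f"pooled mean = {q_bar:.3f}, pooled SE = {np.sqrt(T):.3f}")
```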

  6. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  7. Characteristics of interplanetary type II radio emission and the relationship to shock and plasma properties

    NASA Technical Reports Server (NTRS)

    Lengyel-Frey, D.; Stone, R. G.

    1989-01-01

    A large sample of type II events forms the basis of the present study of the radio-emission properties of interplanetary type II bursts. Type II spectra seem to be composed of fundamental and harmonic components of plasma emission, where the intensity of the fundamental component increases relative to the harmonic as the burst evolves with heliocentric distance; burst average flux density increases as a power of the associated shock's average velocity. Solar wind density structures may have a significant influence on type II bandwidths.

  8. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  9. An ecological analysis of food outlet density and prevalence of type II diabetes in South Carolina counties.

    PubMed

    AlHasan, Dana M; Eberth, Jan Marie

    2016-01-05

    Studies suggest that a built environment with high numbers of fast food restaurants and convenience stores and low numbers of super stores and grocery stores is related to obesity, type II diabetes mellitus, and other chronic diseases. Since few studies assess these relationships at the county level, we aim to examine fast food restaurant density, convenience store density, super store density, and grocery store density and the prevalence of type II diabetes among counties in South Carolina. Pearson's correlations between the four types of food outlet density (fast food restaurants, convenience stores, super stores, and grocery stores) and the prevalence of type II diabetes were computed. The relationship between each of these food outlet densities and the prevalence of type II diabetes was mapped, and an OLS regression analysis was completed adjusting for county-level rates of obesity, physical inactivity, density of recreation facilities, unemployment, households with no car and limited access to stores, education, and race. We showed a significant, negative relationship between fast food restaurant density and prevalence of type II diabetes, and a significant, positive relationship between convenience store density and prevalence of type II diabetes. In the adjusted analysis, food outlet density (of any type) was not associated with the prevalence of type II diabetes. This ecological analysis showed no associations between fast food restaurant, convenience store, super store, or grocery store densities and the prevalence of type II diabetes. Consideration of environmental, social, and cultural determinants, as well as individual behaviors, is needed in future research.
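
    The analysis pattern described above (unadjusted Pearson correlation followed by an adjusted OLS model) can be sketched as follows; the county data frame, variable names, and simulated values are hypothetical placeholders rather than the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

# hypothetical county-level data (46 counties), standing in for the real dataset
rng = np.random.default_rng(4)
n = 46
counties = pd.DataFrame({
    "fastfood_density": rng.gamma(2.0, 0.5, n),
    "convenience_density": rng.gamma(2.0, 0.5, n),
    "obesity_rate": rng.normal(32, 3, n),
    "inactivity_rate": rng.normal(27, 3, n),
    "unemployment_rate": rng.normal(7, 2, n),
})
counties["diabetes_prev"] = 5 + 0.2 * counties["obesity_rate"] + rng.normal(0, 1, n)

# unadjusted pairwise correlation
r, p = pearsonr(counties["fastfood_density"], counties["diabetes_prev"])
print(f"unadjusted Pearson r = {r:.2f} (p = {p:.3f})")

# adjusted OLS: associations often attenuate once county-level covariates enter
model = smf.ols(
    "diabetes_prev ~ fastfood_density + convenience_density"
    " + obesity_rate + inactivity_rate + unemployment_rate",
    data=counties).fit()
print(model.params)
```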

  10. Errors in accident data, its types, causes and methods of rectification-analysis of the literature.

    PubMed

    Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri

    2017-07-29

    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of such error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; assessment with regard to the magnitude for each type is provided; followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases the probability of having less error in accident data also increases. Average error in recording information related to the variables in the categories of location, victim's information, vehicle's information, and environment was 27%, 37%, 16% and 19% respectively. Among the causes identified for errors in accident data reporting, Policing System was found to be the most important. Overall 26 causes of errors in accident data were discussed out of which 12 were related to reporting and 14 were related to recording. "Capture-Recapture" was the most widely used method among the 11 different methods that can be used for the rectification of under-reporting. There were 12 studies pertinent to the rectification of accident location and almost all of them utilised a Geographical Information System (GIS) platform coupled with a matching algorithm to estimate the correct location. It is recommended that the policing system should be reformed and public awareness should be created to help reduce errors in accident data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Acoustic Type-II Weyl Nodes from Stacking Dimerized Chains

    NASA Astrophysics Data System (ADS)

    Yang, Zhaoju; Zhang, Baile

    2016-11-01

    Lorentz-violating type-II Weyl fermions, which were absent from Weyl's original quantum-field-theory prediction of what are now classified as type-I Weyl fermions, have recently been proposed in condensed matter systems. The semimetals hosting type-II Weyl fermions offer a rare platform for realizing many exotic physical phenomena that are different from type-I Weyl systems. Here we construct the acoustic version of a type-II Weyl Hamiltonian by stacking one-dimensional dimerized chains of acoustic resonators. This acoustic type-II Weyl system exhibits distinct features in a finite density of states and unique transport properties of Fermi-arc-like surface states. In a certain momentum space direction, the velocity of these surface states is determined by the tilting direction of the type-II Weyl nodes rather than the chirality dictated by the Chern number. Our study also provides an approach of constructing acoustic topological phases at different dimensions with the same building blocks.
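
    For context, a schematic two-band Weyl Hamiltonian with a tilt term (not the authors' specific acoustic lattice model) can be written in LaTeX as

        H(\mathbf{k}) = \hbar v_t k_z \,\sigma_0 + \hbar v \,(k_x \sigma_x + k_y \sigma_y + k_z \sigma_z),

    where \sigma_0 is the identity and \sigma_{x,y,z} are Pauli matrices. For |v_t| < |v| the node is of type I, with a point-like Fermi surface; for |v_t| > |v| the cone is tipped over along the tilt direction, electron and hole pockets touch at the node, and the density of states there is finite, which is the type-II case described above.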

  12. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  13. Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II

    PubMed Central

    Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.

    2009-01-01

    We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show a significant reduction in prediction error when IRS is used to predict concentrations of Raman scatterers with both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
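
    A small sketch of the multivariate calibration step mentioned above, comparing ordinary least squares with partial least squares regression on simulated spectra; the spectra, noise level, and number of PLS components are illustrative assumptions and do not reproduce the tissue-model experiments.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 80, 300
concentration = rng.uniform(0.1, 1.0, n_samples)            # analyte concentration
basis = rng.normal(size=n_wavelengths)                        # fixed "Raman" spectral shape
spectra = (np.outer(concentration, basis)
           + 0.05 * rng.normal(size=(n_samples, n_wavelengths)))  # noisy mixtures

X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - np.ravel(yhat)) ** 2)))

print("OLS prediction RMSE:", rmse(y_te, ols.predict(X_te)))
print("PLS prediction RMSE:", rmse(y_te, pls.predict(X_te)))
```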

  14. Mimicry by asx- and ST-turns of the four main types of β-turn in proteins

    PubMed Central

    Duddy, William J.; Nissink, J. Willem M.; Allen, Frank H.; Milner-White, E. James

    2004-01-01

    Hydrogen-bonded β-turns in proteins occur in four categories: type I (the most common), type II, type II’, and type I’. Asx-turns resemble β-turns, in that both have an NH···OC hydrogen bond forming a ring of 10 atoms. Serine and threonine side chains also commonly form hydrogen-bonded turns, here called ST-turns. Asx-turns and ST-turns can be categorized into four classes, based on side chain rotamers and the conformation of the central turn residue, which are geometrically equivalent to the four types of β-turns. We propose asx- and ST-turns be named using the type I, II, I’, and II’ β-turn nomenclature. Using this, the frequency of occurrence of both asx- and ST-turns is: type II’ > type I > type II > type I’, whereas for β-turns it is type I > type II > type I’ > type II’. Almost all type II asx-turns occur as a recently described three residue feature named an asx-nest. PMID:15459339

  15. European Scientific Notes, Volume 38, Number 9.

    DTIC Science & Technology

    1984-09-01

    ...dropped automatically from the mailing list. ESN Invites Letters to the Editor: ESN publishes selected letters related to developments and policy in... a selective summary can be extracted from the Idzikowski-Baddeley literature review; it appears in... examine trait anxiety or state-trait interactions... mutism and stupor are not seen in fliers as they are in ground soldiers. Reid 1945, WW II: navigation errors increased over the enemy coast...

  16. Judgment of Line Orientation Depends on Gender, Education, and Type of Error

    ERIC Educational Resources Information Center

    Caparelli-Daquer, Egas M.; Oliveira-Souza, Ricardo; Filho, Pedro F. Moreira

    2009-01-01

    Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of…

  17. Influence of Lexical Factors on Word-Finding Accuracy, Error Patterns, and Substitution Types

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; German, Diane J.; Jagielko, Jennifer R.

    2018-01-01

    This retrospective, exploratory investigation examined the types of target words that 66 children with/without word-finding difficulties (WFD) had difficulty naming, and the types of errors they made. Words were studied with reference to lexical factors (LFs) that might influence naming performance: word frequency, familiarity, length, phonotactic…

  18. Addressing potential local adaptation in species distribution models: implications for conservation under climate change

    USGS Publications Warehouse

    Hällfors, Maria Helena; Liao, Jishan; Dzurisin, Jason D. K.; Grundel, Ralph; Hyvärinen, Marko; Towle, Kevin; Wu, Grace C.; Hellmann, Jessica J.

    2016-01-01

    Species distribution models (SDMs) have been criticized for involving assumptions that ignore or categorize many ecologically relevant factors such as dispersal ability and biotic interactions. Another potential source of model error is the assumption that species are ecologically uniform in their climatic tolerances across their range. Typically, SDMs treat a species as a single entity, although populations of many species differ due to local adaptation or other genetic differentiation. Not taking local adaptation into account may lead to incorrect range predictions and therefore misplaced conservation efforts. A constraint, however, is that we often do not know the degree to which populations are locally adapted. Lacking experimental evidence, we can still evaluate niche differentiation within a species' range to promote better conservation decisions. We explore possible conservation implications of making type I or type II errors in this context. For each of two species, we construct three separate MaxEnt models, one treating the species as a single population and two treating disjunct populations separately. PCA analyses and response curves indicate different climate characteristics in the current environments of the populations. Model projections into future climates indicate minimal overlap between areas predicted to be climatically suitable by the whole-species versus population-based models. We present a workflow for addressing uncertainty surrounding local adaptation in SDM application and illustrate the value of conducting population-based models to compare with whole-species models. These comparisons might result in more cautious management actions when alternative range outcomes are considered.

  19. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    PubMed

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.
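
    As a concrete example of the a priori sample size calculation that the N-P approach requires, here is a sketch using statsmodels' power routines for a two-sample t-test; the effect size, alpha, and target power are arbitrary illustrative choices.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# planned two-sample t-test: standardized effect d = 0.5,
# two-sided alpha = 0.05, desired power = 0.8 (type II error rate = 0.2)
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")   # roughly 64

# achieved power for a fixed, smaller sample
power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
print(f"power with n = 30 per group: {power:.2f}")
```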

  20. Filtering Photogrammetric Point Clouds Using Standard LIDAR Filters Towards DTM Generation

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Gerke, M.; Vosselman, G.; Yang, M. Y.

    2018-05-01

    Digital Terrain Models (DTMs) can be generated from point clouds acquired by laser scanning or photogrammetric dense matching. During the last two decades, much effort has been devoted to developing robust filtering algorithms for airborne laser scanning (ALS) data. As the quality of point clouds from dense image matching (DIM) continues to improve, the research question that arises is whether standard Lidar filters can be used to filter photogrammetric point clouds as well. Experiments are implemented to filter two dense matching point clouds with different noise levels. Results show that the standard Lidar filter is robust to random noise. However, artefacts and blunders in the DIM points often appear due to low contrast or poor texture in the images. Filtering will be erroneous in these locations. Filtering the DIM points pre-processed by a ranking filter will bring a higher Type II error (i.e. non-ground points actually labelled as ground points) but a much lower Type I error (i.e. bare ground points labelled as non-ground points). Finally, the potential DTM accuracy that can be achieved by DIM points is evaluated. Two DIM point clouds derived by Pix4Dmapper and SURE are compared. On grassland, dense matching generates points higher than the true terrain surface, which will result in incorrectly elevated DTMs. The application of the ranking filter leads to a reduced bias in the DTM height, but a slightly increased noise level.
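
    Using the error definitions quoted above (Type I: bare-ground points labelled as non-ground; Type II: non-ground points labelled as ground), per-class error rates can be computed from reference and filtered labels as in the sketch below; the label arrays are placeholders.

```python
import numpy as np

# reference (true) and filter-predicted labels: True = ground, False = non-ground
truth = np.array([True, True, True, False, False, True, False, True])   # placeholder
pred  = np.array([True, False, True, False, True, True, False, True])   # placeholder

# Type I error: ground points rejected as non-ground
type_i = np.mean(~pred[truth])
# Type II error: non-ground points accepted as ground
type_ii = np.mean(pred[~truth])
total_error = np.mean(pred != truth)

print(f"Type I  error rate: {type_i:.2%}")
print(f"Type II error rate: {type_ii:.2%}")
print(f"Total   error rate: {total_error:.2%}")
```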

  1. Novel linkage disequilibrium clustering algorithm identifies new lupus genes on meta-analysis of GWAS datasets.

    PubMed

    Saeed, Mohammad

    2017-05-01

    Systemic lupus erythematosus (SLE) is a complex disorder. Genetic association studies of complex disorders suffer from the following three major issues: phenotypic heterogeneity, false positive (type I error), and false negative (type II error) results. Hence, genes with low to moderate effects are missed in standard analyses, especially after statistical corrections. OASIS is a novel linkage disequilibrium clustering algorithm that can potentially address false positives and negatives in genome-wide association studies (GWAS) of complex disorders such as SLE. OASIS was applied to two SLE dbGAP GWAS datasets (6077 subjects; ∼0.75 million single-nucleotide polymorphisms). OASIS identified three known SLE genes viz. IFIH1, TNIP1, and CD44, not previously reported using these GWAS datasets. In addition, 22 novel loci for SLE were identified and the 5 SLE genes previously reported using these datasets were verified. OASIS methodology was validated using single-variant replication and gene-based analysis with GATES. This led to the verification of 60% of OASIS loci. New SLE genes that OASIS identified and were further verified include TNFAIP6, DNAJB3, TTF1, GRIN2B, MON2, LATS2, SNX6, RBFOX1, NCOA3, and CHAF1B. This study presents the OASIS algorithm, software, and the meta-analyses of two publicly available SLE GWAS datasets along with the novel SLE genes. Hence, OASIS is a novel linkage disequilibrium clustering method that can be universally applied to existing GWAS datasets for the identification of new genes.

  2. Polymorphism of the serotonin transporter gene (5-HTTLPR) in major depressive disorder patients in Malaysia.

    PubMed

    Mohamed Saini, Suriati; Muhamad Radzi, Azizah; Abdul Rahman, Abdul Hamid

    2012-06-01

    The serotonin transporter promoter (5-HTTLPR) is a potential susceptibility locus in the pathogenesis of major depressive disorder. However, data from Malaysia is lacking. The present study aimed to determine the association between the homozygous short variant of the serotonin transporter promoter gene (5-HTTLPR) and major depressive disorder. This is a candidate gene case-control association study. The sample consists of 55 major depressive disorder probands and 66 controls. They were unrelated individuals of Malaysian descent. The Axis I diagnosis was determined using the Mini International Neuropsychiatric Interview (M.I.N.I.). The control group comprised healthy volunteers without personal psychiatric history and family history of mood disorders. Participants' blood was sent to the Institute for Medical Research for genotyping. The present study failed to detect an association between the 5-HTTLPR ss genotype and major depressive disorder (χ² = 3.67, d.f. = 1, P = 0.055, odds ratio 0.25, 95% confidence interval = 0.07-1.94). Sub-analysis revealed that the frequency of the l allele in healthy controls was higher (78.0%) than that reported in Caucasian and East Asian populations. However, in view of the small sample size, this study may be prone to type II error (and type I error). This preliminary study suggests that the homozygous short variant of the 5-HTTLPR did not appear to be a risk factor for increasing susceptibility to major depressive disorder. Copyright © 2012 Blackwell Publishing Asia Pty Ltd.
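
    A sketch of the genotype case-control comparison described above (chi-square test and odds ratio on a 2x2 table of ss versus non-ss genotypes); the counts are hypothetical, chosen only to show the mechanics, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: cases, controls; columns: ss genotype, non-ss genotype (hypothetical counts)
table = np.array([[10, 45],
                  [22, 44]])

chi2_stat, p, dof, expected = chi2_contingency(table, correction=False)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
log_or_se = np.sqrt((1.0 / table).sum())                 # Woolf standard error of ln(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * log_or_se)

print(f"chi2 = {chi2_stat:.2f}, df = {dof}, p = {p:.3f}")
print(f"OR = {odds_ratio:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}")
```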

  3. THE VALIDITY OF USING ROC SOFTWARE FOR ANALYSING VISUAL GRADING CHARACTERISTICS DATA: AN INVESTIGATION BASED ON THE NOVEL SOFTWARE VGC ANALYZER.

    PubMed

    Hansson, Jonny; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The purpose of the present work was to investigate the validity of using single-reader-adapted receiver operating characteristics (ROC) software for analysis of visual grading characteristics (VGC) data. VGC data from four published VGC studies on optimisation of X-ray examinations, previously analysed using ROCFIT, were reanalysed using a recently developed software dedicated to VGC analysis (VGC Analyzer), and the outcomes [the mean and 95% confidence interval (CI) of the area under the VGC curve (AUC_VGC) and the p-value] were compared. The studies included both paired and non-paired data and were reanalysed both for the fixed-reader and the random-reader situations. The results showed good agreement between the two software packages for the mean AUC_VGC. For non-paired data, wider CIs were obtained with VGC Analyzer than previously reported, whereas for paired data, the previously reported CIs were similar or even broader. Similar observations were made for the p-values. The results indicate that the use of single-reader-adapted ROC software such as ROCFIT for analysing non-paired VGC data may lead to an increased risk of committing Type I errors, especially in the random-reader situation. On the other hand, the use of ROC software for analysis of paired VGC data may lead to an increased risk of committing Type II errors, especially in the fixed-reader situation. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  5. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approaches of Newey (1987) and Terza (2016), as well as bootstrapping. In our simulations, the Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
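
    A minimal sketch of a linear two-stage residual inclusion estimate with a bootstrap standard error, one of the corrected-variance options discussed above; the simulated genotype instrument, exposure, and outcome are entirely illustrative and do not reproduce the paper's models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
g = rng.binomial(2, 0.3, n)                  # genotype instrument (0/1/2)
u = rng.normal(size=n)                       # unmeasured confounder
x = 0.4 * g + u + rng.normal(size=n)         # exposure
y = 0.25 * x + u + rng.normal(size=n)        # outcome; true causal effect = 0.25

def tsri_estimate(g, x, y):
    # stage 1: regress exposure on the instrument and keep residuals
    s1 = sm.OLS(x, sm.add_constant(g)).fit()
    res = x - s1.fittedvalues
    # stage 2: regress outcome on exposure plus the stage-1 residual
    s2 = sm.OLS(y, sm.add_constant(np.column_stack([x, res]))).fit()
    return s2.params[1]                      # coefficient on exposure

estimate = tsri_estimate(g, x, y)

# bootstrap both stages jointly to obtain a standard error
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(tsri_estimate(g[idx], x[idx], y[idx]))
se = np.std(boot, ddof=1)
print(f"TSRI estimate = {estimate:.3f}, bootstrap SE = {se:.3f}")
```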

  6. Analysis of focusing error signals by differential astigmatic method under off-center tracking in the land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko

    2015-04-01

    We theoretically calculate the behavior of the focusing error signal in the land-groove-type optical disk when the objective lens traverses on out of the radius of the optical disk. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the gain difference of the slope sensitivity of the focusing error signals from the land and the groove. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and advantageous conditions for suppressing the gain difference are investigated. The calculation method and results described in this paper will be reflected in the next generation land-groove-type optical disks.

  7. Risk assessment considerations with regard to the potential impacts of pesticides on endangered species.

    PubMed

    Brain, Richard A; Teed, R Scott; Bang, JiSu; Thorbek, Pernille; Perine, Jeff; Peranginangin, Natalia; Kim, Myoungwoo; Valenti, Ted; Chen, Wenlin; Breton, Roger L; Rodney, Sara I; Moore, Dwayne R J

    2015-01-01

    Simple, deterministic screening-level assessments that are highly conservative by design facilitate a rapid initial screening to determine whether a pesticide active ingredient has the potential to adversely affect threatened or endangered species. If a worst-case estimate of pesticide exposure is below a very conservative effects metric (e.g., the no observed effects concentration of the most sensitive tested surrogate species), then the potential risks are considered de minimis and unlikely to jeopardize the existence of a threatened or endangered species. Thus by design, such compounded layers of conservatism are intended to minimize potential Type II errors (failure to reject a false null hypothesis of de minimis risk), but correspondingly increase Type I errors (falsely rejecting a true null hypothesis of de minimis risk). Because of the conservatism inherent in screening-level risk assessments, higher-tier scientific information and analyses that provide additional environmental realism can be applied in cases where a potential risk has been identified. This information includes community-level effects data, environmental fate and exposure data, monitoring data, geospatial location and proximity data, species biology data, and probabilistic exposure and population models. Given that the definition of "risk" includes likelihood and magnitude of effect, higher-tier risk assessments should use probabilistic techniques that more accurately and realistically characterize risk. Moreover, where possible and appropriate, risk assessments should focus on effects at the population and community levels of organization rather than the more traditional focus on the organism level. This document provides a review of some types of higher-tier data and assessment refinements available to more accurately and realistically evaluate potential risks of pesticide use to threatened and endangered species. © 2014 SETAC.

  8. Signature-forecasting and early outbreak detection system

    PubMed Central

    Naumova, Elena N.; MacNeill, Ian B.

    2008-01-01

    SUMMARY Daily disease monitoring via a public health surveillance system provides valuable information on population risks. Efficient statistical tools for early detection of rapid changes in disease incidence are a must for modern surveillance. The need for statistical tools for early detection of outbreaks that are not based on historical information is apparent. A system is discussed for monitoring cases of infections with a view to early detection of outbreaks and to forecasting the extent of detected outbreaks. We propose a set of adaptive algorithms for early outbreak detection that does not rely on extensive historical recording. We also incorporate knowledge of infectious disease epidemiology into the forecasts. To demonstrate this system we use data from the largest water-borne outbreak of cryptosporidiosis, which occurred in Milwaukee in 1993. Historical data are smoothed using a loess-type smoother. Upon receipt of a new datum, the smoothing is updated and estimates are made of the first two derivatives of the smooth curve, and these are used for near-term forecasting. Recent data and the near-term forecasts are used to compute a color-coded warning index, which quantifies the level of concern. The algorithms for computing the warning index have been designed to balance Type I errors (false prediction of an epidemic) and Type II errors (failure to correctly predict an epidemic). If the warning index signals a sufficiently high probability of an epidemic, then a forecast of the possible size of the outbreak is made. This longer term forecast is made by fitting a ‘signature’ curve to the available data. The effectiveness of the forecast depends upon the extent to which the signature curve captures the shape of outbreaks of the infection under consideration. PMID:18716671
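
    A schematic of the smoothing-and-derivative idea described above, assuming daily case counts: LOWESS stands in for the loess-type smoother, derivatives come from finite differences, and the color-coded warning rules are arbitrary placeholders rather than the authors' calibrated algorithm.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
days = np.arange(120)
counts = rng.poisson(5, size=days.size).astype(float)
counts[90:] += np.linspace(0, 40, 30)            # simulated outbreak after day 90

# smooth the series; in practice the fit is updated as each new daily count arrives
smooth = lowess(counts, days, frac=0.15, return_sorted=False)

# estimate first and second derivatives of the smoothed curve
d1 = np.gradient(smooth, days)
d2 = np.gradient(d1, days)

# toy color-coded warning index based on slope and acceleration (placeholder rules)
def warning_index(slope, accel):
    if slope > 1.0 and accel > 0:
        return "red"
    if slope > 0.5:
        return "yellow"
    return "green"

print("warning level today:", warning_index(d1[-1], d2[-1]))
```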

  9. Hydrologic characterization of desert soils with varying degrees of pedogenesis: 2. Inverse modeling for effective properties

    USGS Publications Warehouse

    Mirus, B.B.; Perkins, K.S.; Nimmo, J.R.; Singha, K.

    2009-01-01

    To understand their relation to pedogenic development, soil hydraulic properties in the Mojave Desert were investigated for three deposit types: (i) recently deposited sediments in an active wash, (ii) a soil of early Holocene age, and (iii) a highly developed soil of late Pleistocene age. Effective parameter values were estimated for a simplified model based on Richards' equation using a flow simulator (VS2D), an inverse algorithm (UCODE-2005), and matric pressure and water content data from three ponded infiltration experiments. The inverse problem framework was designed to account for the effects of subsurface lateral spreading of infiltrated water. Although none of the inverse problems converged on a unique, best-fit parameter set, a minimum standard error of regression was reached for each deposit type. Parameter sets from the numerous inversions that reached the minimum error were used to develop probability distributions for each parameter and deposit type. Electrical resistance imaging obtained for two of the three infiltration experiments was used to independently test flow model performance. Simulations for the active wash and Holocene soil successfully depicted the lateral and vertical fluxes. Simulations of the more pedogenically developed Pleistocene soil did not adequately replicate the observed flow processes, which would require a more complex conceptual model to include smaller scale heterogeneities. The inverse-modeling results, however, indicate that with increasing age, the steep slope of the soil water retention curve shifts toward more negative matric pressures. Assigning effective soil hydraulic properties based on soil age provides a promising framework for future development of regional-scale models of soil moisture dynamics in arid environments for land-management applications. © Soil Science Society of America.

  10. Reduction in chemotherapy order errors with computerized physician order entry.

    PubMed

    Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J

    2014-01-01

    To measure the number and type of errors associated with chemotherapy order composition associated with three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders was reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets-those requiring significant rework for clarification-was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.

  11. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302
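
    A sketch of the theoretical power computation referred to above (the noncentral chi-square approach of MacCallum, Browne, and Sugawara, 1996); the sample size, degrees of freedom, and RMSEA values are illustrative.

```python
from scipy.stats import chi2, ncx2

def rmsea_power(n, df, eps0, eps_a, alpha=0.05):
    """Power for the RMSEA-based test of H0: epsilon <= eps0 against a true
    (alternative) RMSEA of eps_a, using noncentral chi-square distributions."""
    lam0 = (n - 1) * df * eps0 ** 2          # noncentrality under the null
    lam_a = (n - 1) * df * eps_a ** 2        # noncentrality under the alternative
    crit = ncx2.ppf(1 - alpha, df, lam0) if lam0 > 0 else chi2.ppf(1 - alpha, df)
    return ncx2.sf(crit, df, lam_a)          # probability of exceeding the critical value

# test of close fit (eps0 = 0.05) and test of exact fit (eps0 = 0)
print("close fit power:", round(rmsea_power(n=200, df=50, eps0=0.05, eps_a=0.08), 3))
print("exact fit power:", round(rmsea_power(n=200, df=50, eps0=0.00, eps_a=0.08), 3))
```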

  12. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    PubMed

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  13. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
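
    To make the "worst case" search concrete, the sketch below treats a simplified one-sample, single-comparison two-stage z-test: for every interim outcome it chooses the second-stage sample size from a grid so as to maximize the conditional type 1 error of the naive pooled test, then integrates over the interim distribution. The multi-arm, treatment-selection setting of the paper is more involved, and all numbers here are illustrative.

```python
import numpy as np
from scipy.stats import norm

alpha = 0.025                      # one-sided level of the naive final test
z_alpha = norm.ppf(1 - alpha)
n1 = 50                            # first-stage sample size
n2_grid = np.arange(1, 501)        # allowed second-stage sample sizes

def conditional_error(z1, n2):
    # P(reject H0 | interim z1, chosen n2) for the naive pooled z-statistic
    thresh = (z_alpha * np.sqrt(n1 + n2) - z1 * np.sqrt(n1)) / np.sqrt(n2)
    return norm.sf(thresh)

# worst-case rule: for every interim outcome pick the n2 maximizing the error
z1_grid = np.linspace(-6, 6, 2001)
worst = np.array([conditional_error(z1, n2_grid).max() for z1 in z1_grid])

# integrate the worst-case conditional error over the null distribution of z1
dz = z1_grid[1] - z1_grid[0]
inflated_alpha = float(np.sum(worst * norm.pdf(z1_grid)) * dz)
print(f"nominal alpha = {alpha:.3f}, worst-case actual alpha = {inflated_alpha:.3f}")
```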

  14. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

    To study the association of workarounds with medication administration errors using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. Primary outcome measure was the proportion of medication administrations with one or more medication administration errors. Secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.

  15. Localization of Usher syndrome type II to chromosome 1q.

    PubMed

    Kimberling, W J; Weston, M D; Möller, C; Davenport, S L; Shugart, Y Y; Priluck, I A; Martini, A; Milani, M; Smith, R J

    1990-06-01

    Usher syndrome is characterized by congenital hearing loss, progressive visual impairment due to retinitis pigmentosa, and variable vestibular problems. The two subtypes of Usher syndrome, types I and II, can be distinguished by the degree of hearing loss and by the presence or absence of vestibular dysfunction. Type I is characterized by a profound hearing loss and totally absent vestibular responses, while type II has a milder hearing loss and normal vestibular function. Fifty-five members of eight type II Usher syndrome families were typed for three DNA markers in the distal region of chromosome 1q: D1S65 (pEKH7.4), REN (pHRnES1.9), and D1S81 (pTHH33). Statistically significant linkage was observed for Usher syndrome type II with a maximum multipoint lod score of 6.37 at the position of the marker THH33, thus localizing the Usher type II (USH2) gene to 1q. Nine families with type I Usher syndrome failed to show linkage to the same three markers. The statistical test for heterogeneity of linkage between Usher syndrome types I and II was highly significant, thus demonstrating that they are due to mutations at different genetic loci.
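
    For reference, the lod score reported above is the base-10 logarithm of the likelihood ratio comparing linkage at recombination fraction \theta with free recombination,

        \mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 1/2)},

    with a maximum lod score above 3 conventionally taken as significant evidence of linkage; the multipoint value of 6.37 reported here corresponds to a likelihood ratio of roughly 10^{6.37} in favor of linkage at the position of THH33.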

  16. 75 FR 43153 - Procurement List Proposed Additions and Deletions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-23

    ...- LRG XX-LONG 8405-00-NIB-0442--Type II Blouse, Women's, Navy Work Uniform 32 X-SHORT 8405-00-NIB-0443--Type II Blouse, Women's, Navy Work Uniform 32 SHORT 8405-00-NIB-0444--Type II Blouse, Women's, Navy Work Uniform 35 X-SHORT 8405-00-NIB-0445--Type II Blouse, Women's, Navy Work Uniform 35 SHORT 8405-00...

  17. Refractive errors in patients with newly diagnosed diabetes mellitus.

    PubMed

    Yarbağ, Abdülhekim; Yazar, Hayrullah; Akdoğan, Mehmet; Pekgör, Ahmet; Kaleli, Suleyman

    2015-01-01

    Diabetes mellitus is a complex metabolic disorder that involves the small blood vessels, often causing widespread tissue damage, including changes in the eye's refractive error. In patients with newly diagnosed diabetes mellitus whose blood glucose levels are unstable, refraction measurements may be unreliable. We aimed to investigate refraction in patients who were recently diagnosed with diabetes and treated at our centre. This prospective study was performed from February 2013 to January 2014. Diabetes mellitus was diagnosed using laboratory biochemical tests and clinical examination. Venous fasting plasma glucose (FPG) levels were measured along with refractive errors, initially and again after four weeks, and the difference between the initial and final refractive measurements was evaluated. The patients were 100 males and 30 females newly diagnosed with type II DM; FPG and refraction were measured twice in all patients. The average initial values were an FPG level of 415 mg/dl and a refractive value of +2.5 D (dioptres); after four weeks, the averages were an FPG level of 203 mg/dl and a refractive value of +0.75 D. The difference between the initial and four-week FPG levels was statistically significant (p<0.05), the change in FPG was significantly related to the change in refraction (p<0.05), and the resolution of blurred vision (a success rate greater than 50%) was also statistically significant (p<0.05). No age or sex effects were detected (p>0.05). Refractive error is affected in patients with newly diagnosed diabetes mellitus; therefore, plasma glucose levels should be considered when prescribing glasses.

  18. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

    Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influence quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose imaging (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernel. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduce HU error values and may retain the highest CNR. PMID:26859336
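
    The two quantities compared across protocols, contrast-to-noise ratio and recalibrated HU error, are simple region-of-interest statistics. The sketch below shows one common way to compute them (definitions of CNR vary between studies); the ROI arrays and reference values are synthetic, not the phantom data.

        import numpy as np

        def cnr(roi_material, roi_background):
            """Contrast-to-noise ratio between a material insert and a background ROI."""
            contrast = abs(roi_material.mean() - roi_background.mean())
            noise = roi_background.std(ddof=1)
            return contrast / noise

        def hu_error(measured_hu, reference_hu):
            """Average absolute HU error of inserts after recalibration."""
            return float(np.mean(np.abs(np.asarray(measured_hu) - np.asarray(reference_hu))))

        rng = np.random.default_rng(1)
        insert = rng.normal(300, 40, 500)      # e.g. a bone-like insert, values in HU
        background = rng.normal(0, 40, 500)    # water-equivalent background
        print(f"CNR = {cnr(insert, background):.1f}")
        print(f"HU error = {hu_error([310, 285, 295], [300, 300, 300]):.1f}")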

  19. Effect of the menstrual cycle on voice quality.

    PubMed

    Silverman, E M; Zimmer, C H

    1978-01-01

    The question addressed was whether most young women with no vocal training exhibit premenstrual hoarseness. Spectral (acoustical) analyses of sustained productions of three vowels by 20 undergraduates at ovulation and at premenstruation were rated for degree of hoarseness. Statistical analysis of the data indicated that the typical subject was no more hoarse at premenstruation than at ovulation. To determine whether this finding represented a genuine characteristic of women's voices or a type II statistical error, a systematic replication was undertaken with another sample of 27 undergraduates. The finding replicated that of the original investigation, suggesting that premenstrual hoarseness is a rarely occurring condition among young women with no vocal training. The apparent differential effect of the menstrual cycle on trained as opposed to untrained voices deserves systematic investigation.

  20. Leveraging business intelligence to make better decisions: Part II.

    PubMed

    Reimers, Mona

    2014-01-01

    This article is the second in a series about business intelligence (BI) in a medical practice. The first article reviewed the evolution of data reporting within the industry and provided some examples of how BI concepts differ from the reports available in the menus of our software systems, or the dashboards and scorecards practices have implemented. This article will discuss how to begin a BI initiative for front-end medical practice staffers that will create tools they can use to reduce errors and increase efficiency throughout their workday. This type of BI rollout can allow practices to get started with very little financial investment, gain enthusiasm from end users, and achieve a quick return on investment. More examples of successful BI projects in medical practices are discussed to help illustrate BI concepts.

  1. Aliasing errors in measurements of beam position and ellipticity

    NASA Astrophysics Data System (ADS)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
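
    The systematic error described here arises because a discrete set of pickups samples only a few azimuthal harmonics of the wall-current distribution. The sketch below uses a simplified image-current model of a pencil beam in a circular pipe and the usual first-order difference-over-sum position estimate from four pickups; it is illustrative only, not the FELEX/DARHT simulation, and all dimensions are made up.

        import numpy as np

        def pickup_signals(x0, y0, pipe_radius, angles):
            """Wall-current density sampled by point pickups for a pencil beam at (x0, y0)."""
            r0 = np.hypot(x0, y0)
            phi0 = np.arctan2(y0, x0)
            return (pipe_radius**2 - r0**2) / (
                pipe_radius**2 + r0**2 - 2 * pipe_radius * r0 * np.cos(angles - phi0)
            )

        R = 50.0                                    # pipe radius, mm
        angles4 = np.deg2rad([0, 90, 180, 270])     # a conventional four-button BPM
        for x_true in (1.0, 10.0, 25.0):
            right, top, left, bottom = pickup_signals(x_true, 0.0, R, angles4)
            x_est = (R / 2) * (right - left) / (right + left)   # first-order estimate
            print(f"x_true = {x_true:5.1f} mm  ->  x_est = {x_est:6.2f} mm")

    In this toy model the estimate is accurate near the axis but increasingly biased for large displacements, which is the kind of systematic error that adding more detectors can reduce.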

  2. The effect of spectral filters on visual search in stroke patients.

    PubMed

    Beasley, Ian G; Davies, Leon N

    2013-01-01

    Visual search impairment can occur following stroke. The utility of optimal spectral filters on visual search in stroke patients has not been considered to date. The present study measured the effect of optimal spectral filters on visual search response time and accuracy, using a task requiring serial processing. A stroke and control cohort undertook the task three times: (i) using an optimally selected spectral filter; (ii) the subjects were randomly assigned to two groups with group 1 using an optimal filter for two weeks, whereas group 2 used a grey filter for two weeks; (iii) the groups were crossed over with group 1 using a grey filter for a further two weeks and group 2 given an optimal filter, before undertaking the task for the final time. Initial use of an optimal spectral filter improved visual search response time but not error scores in the stroke cohort. Prolonged use of neither an optimal nor a grey filter improved response time or reduced error scores. In fact, response times increased with the filter, regardless of its type, for stroke and control subjects; this outcome may be due to contrast reduction or a reflection of task design, given that significant practice effects were noted.

  3. Choline Deficiency Causes Colonic Type II Natural Killer T (NKT) Cell Loss and Alleviates Murine Colitis under Type I NKT Cell Deficiency

    PubMed Central

    Sagami, Shintaro; Ueno, Yoshitaka; Tanaka, Shinji; Fujita, Akira; Niitsu, Hiroaki; Hayashi, Ryohei; Hyogo, Hideyuki; Hinoi, Takao; Kitadai, Yasuhiko; Chayama, Kazuaki

    2017-01-01

    Serum levels of choline and its derivatives are lower in patients with inflammatory bowel disease (IBD) than in healthy individuals. However, the effect of choline deficiency on the severity of colitis has not been investigated. In the present study, we investigated the role of choline deficiency in dextran sulfate sodium (DSS)-induced colitis in mice. Methionine-choline-deficient (MCD) diet lowered the levels of type II natural killer T (NKT) cells in the colonic lamina propria, peritoneal cavity, and mesenteric lymph nodes, and increased the levels of type II NKT cells in the livers of wild-type B6 mice compared with that in mice fed a control (CTR) diet. The gene expression pattern of the chemokine receptor CXCR6, which promotes NKT cell accumulation, varied between colon and liver in a manner dependent on the changes in the type II NKT cell levels. To examine the role of type II NKT cells in colitis under choline-deficient conditions, we assessed the severity of DSS-induced colitis in type I NKT cell-deficient (Jα18-/-) or type I and type II NKT cell-deficient (CD1d-/-) mice fed the MCD or CTR diets. The MCD diet led to amelioration of inflammation, decreases in interferon (IFN)-γ and interleukin (IL)-4 secretion, and a decrease in the number of IFN-γ and IL-4-producing NKT cells in Jα18-/- mice but not in CD1d-/- mice. Finally, adoptive transfer of lymphocytes containing type II NKT cells exacerbated DSS-induced colitis in Jα18-/- mice fed the MCD diet. These results suggest that choline deficiency causes proinflammatory type II NKT cell loss and alleviates DSS-induced colitis. Thus, inflammation in DSS-induced colitis under choline deficiency is caused by type II NKT cell-dependent mechanisms, including decreased type II NKT cell and proinflammatory cytokine levels. PMID:28095507

  5. Method for transferring data from an unsecured computer to a secured computer

    DOEpatents

    Nilsen, Curt A.

    1997-01-01

    A method is described for transferring data from an unsecured computer to a secured computer. The method includes transmitting the data and then receiving the data. Next, the data is retransmitted and rereceived. Then, it is determined if errors were introduced when the data was transmitted by the unsecured computer or received by the secured computer. Similarly, it is determined if errors were introduced when the data was retransmitted by the unsecured computer or rereceived by the secured computer. A warning signal is emitted from a warning device coupled to the secured computer if (i) an error was introduced when the data was transmitted or received, and (ii) an error was introduced when the data was retransmitted or rereceived.
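
    The transmit-retransmit-compare idea can be sketched as a small receive-side check in which each copy carries its own error-detection code and a warning is raised only when both copies arrive corrupted. This is a hypothetical illustration, not the patented implementation; the choice of SHA-256 as the detection code is an assumption.

        import hashlib

        def with_checksum(data: bytes) -> bytes:
            """Sender side: append a SHA-256 digest so the receiver can detect corruption."""
            return data + hashlib.sha256(data).digest()

        def is_corrupted(frame: bytes) -> bool:
            """Receiver side: recompute the digest over the payload and compare."""
            payload, received_digest = frame[:-32], frame[-32:]
            return hashlib.sha256(payload).digest() != received_digest

        def receive_twice(frame1: bytes, frame2: bytes) -> bytes:
            """Warn only if both the transmission and the retransmission were corrupted."""
            bad1, bad2 = is_corrupted(frame1), is_corrupted(frame2)
            if bad1 and bad2:
                raise RuntimeError("WARNING: errors in both transmit and retransmit")
            good = frame2 if bad1 else frame1
            return good[:-32]

        frame = with_checksum(b"telemetry record 17")
        print(receive_twice(frame, frame))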

  6. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
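
    The contrast between the two analytic strategies can be made concrete with simulated unbalanced cluster data: the two-stage model analyses cluster means with an ordinary linear model, while the one-stage model fits a linear mixed model with a random cluster intercept. The sketch below is illustrative only; it uses statsmodels defaults rather than the Kenward-Roger or variance-weighted two-stage variants recommended in the paper, and all simulation parameters are made up.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        rows = []
        for arm in (0, 1):
            for cluster in range(8):                       # 8 clusters per arm
                n_i = rng.integers(5, 40)                  # unbalanced cluster sizes
                cluster_effect = rng.normal(0, 1.0)        # induces within-cluster correlation
                y = 0.0 * arm + cluster_effect + rng.normal(0, 1.0, n_i)   # null treatment effect
                rows += [{"y": v, "arm": arm, "cluster": f"{arm}-{cluster}"} for v in y]
        df = pd.DataFrame(rows)

        # two-stage: cluster means as the outcomes in an ordinary linear model
        means = df.groupby(["cluster", "arm"], as_index=False)["y"].mean()
        two_stage = smf.ols("y ~ arm", data=means).fit()

        # one-stage: linear mixed model with a random intercept per cluster
        one_stage = smf.mixedlm("y ~ arm", data=df, groups=df["cluster"]).fit()

        print("two-stage p-value:", round(two_stage.pvalues["arm"], 3))
        print("one-stage p-value:", round(one_stage.pvalues["arm"], 3))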

  7. Spatial heterogeneity of type I error for local cluster detection tests

    PubMed Central

    2014-01-01

    Background Just as power, type I error of cluster detection tests (CDTs) should be spatially assessed. Indeed, CDTs’ type I error and power have both a spatial component as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets has been performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects, and in particular, two baseline risks. The simulated datasets were analyzed using the Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDTs performance. PMID:24885343
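
    The participation rate used above can be computed by counting, over all null simulations, how often each spatial unit falls inside a wrongly detected cluster. A minimal sketch with hypothetical unit identifiers and simulation output:

        from collections import Counter

        def participation_rates(wrongly_detected_clusters, n_simulations, all_units):
            """Fraction of null simulations in which each spatial unit lies in a WDC.

            `wrongly_detected_clusters` holds one set of spatial-unit IDs per simulation
            that produced a significant (i.e. wrongly detected) cluster.
            """
            counts = Counter()
            for cluster in wrongly_detected_clusters:
                counts.update(cluster)
            return {unit: counts[unit] / n_simulations for unit in all_units}

        units = ["A", "B", "C", "D", "E"]          # e.g. central units A-C, edge units D-E
        wdcs = [{"A", "B"}, {"B", "C"}, {"A"}]     # clusters found in 3 of 100 null runs
        print(participation_rates(wdcs, 100, units))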

  8. [Refractive errors in patients with cerebral palsy].

    PubMed

    Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr

    2013-04-01

    Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases; the most common are refractive errors and strabismus. The aim of the paper was to assess the relationship between refractive errors and neurological pathologies in patients with selected types of CP. The analysis examined refractive errors in patients within two groups of CP, diplegia spastica and tetraparesis, with nervous system pathologies taken into account. The study demonstrated correlations between refractive errors and both the type of CP and the severity of CP as classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than with diplegia spastica. Myopia and astigmatism were more common in the diplegia spastica group, whereas hyperopia was more common in the tetraparesis group.

  9. The response to oestrogen deprivation of the cartilage collagen degradation marker, CTX-II, is unique compared with other markers of collagen turnover

    PubMed Central

    Bay-Jensen, Anne-Christine; Tabassi, Nadine CB; Sondergaard, Lene V; Andersen, Thomas L; Dagnaes-Hansen, Frederik; Garnero, Patrick; Kassem, Moustapha; Delaissé, Jean-Marie

    2009-01-01

    Introduction The urinary level of the type II collagen degradation marker CTX-II is increased in postmenopausal women and in ovariectomised rats, suggesting that oestrogen deprivation induces cartilage breakdown. Here we investigate whether this response to oestrogen is also true for other type II collagen turnover markers known to be affected in osteoarthritis, and whether it relates to its presence in specific areas of cartilage tissue. Methods The type II collagen degradation markers CTX-II and Helix-II were measured in the body fluids of premenopausal and postmenopausal women and in those of ovariectomised rats receiving oestrogen or not. Levels of PIIANP, a marker of type II collagen synthesis, were also measured in rats. Rat knee cartilage was analysed for immunoreactivity of CTX-II and PIIANP and for type II collagen expression. Results As expected, urinary levels of CTX-II are significantly increased in postmenopausal women and also in oestrogen-deprived rats, although only transiently. However, in neither case were these elevations paralleled by a significant increase of Helix-II levels and PIIANP levels did not change at any time. CTX-II immunoreactivity and collagen expression were detected in different cartilage areas. The upper zone is the area where CTX-II immunoreactivity and collagen expression best reflected the differences in urinary levels of CTX-II measured in response to oestrogen. However, correlations between urinary levels of CTX-II and tissue immunostainings in individual rats were not statistically significant. Conclusions We found only a small effect of oestrogen deprivation on cartilage. It was detected by CTX-II, but not by other type II collagen turnover markers typically affected in osteoarthritis. PMID:20527083

  10. Redesigning the type II' β-turn in green fluorescent protein to type I': implications for folding kinetics and stability.

    PubMed

    Madan, Bharat; Sokalingam, Sriram; Raghunathan, Govindan; Lee, Sun-Gu

    2014-10-01

    Both Type I' and Type II' β-turns have the same sense of the β-turn twist that is compatible with the β-sheet twist. They occur predominantly in two residue β-hairpins, but the occurrence of Type I' β-turns is two times higher than Type II' β-turns. This suggests that Type I' β-turns may be more stable than Type II' β-turns, and Type I' β-turn sequence and structure can be more favorable for protein folding than Type II' β-turns. Here, we redesigned the native Type II' β-turn in GFP to Type I' β-turn, and investigated its effect on protein folding and stability. The Type I' β-turns were designed based on the statistical analysis of residues in natural Type I' β-turns. The substitution of the native "GD" sequence of i+1 and i+2 residues with Type I' preferred "(N/D)G" sequence motif increased the folding rate by 50% and slightly improved the thermodynamic stability. Despite the enhancement of in vitro refolding kinetics and stability of the redesigned mutants, they showed poor soluble expression level compared to wild type. To overcome this problem, i and i + 3 residues of the designed Type I' β-turn were further engineered. The mutation of Thr to Lys at i + 3 could restore the in vivo soluble expression of the Type I' mutant. This study indicates that Type II' β-turns in natural β-hairpins can be further optimized by converting the sequence to Type I'. © 2014 Wiley Periodicals, Inc.

  11. Integrating photo-stimulable phosphor plates into dental and dental hygiene radiography curricula.

    PubMed

    Tax, Cara L; Robb, Christine L; Brillant, Martha G S; Doucette, Heather J

    2013-11-01

    It is not known whether the integration of photo-stimulable phosphor (PSP) plates into dental and dental hygiene curricula creates unique learning challenges for students. The purpose of this two-year study was to determine if dental hygiene students had more and/or different types of errors when using PSP plates compared to film and whether the PSP imaging plates had any particular characteristics that needed to be addressed in the learning process. Fifty-nine first-year dental hygiene students at one Canadian dental school were randomly assigned to two groups (PSP or film) before exposing their initial full mouth series on a teaching manikin using the parallel technique. The principal investigator determined the number and types of errors based on a specific set of performance criteria. The two groups (PSP vs. film) were compared for total number and type of errors made. Results of the study indicated the difference in the total number of errors made using PSP or film was not statistically significant; however, there was a difference in the types of errors made, with the PSP group having more horizontal errors than the film group. In addition, the study identified a number of unique characteristics of the PSP plates that required special consideration for teaching this technology.

  12. Nearly two decades using the check-type to prevent ABO incompatible transfusions: one institution's experience.

    PubMed

    Figueroa, Priscila I; Ziman, Alyssa; Wheeler, Christine; Gornbein, Jeffrey; Monson, Michael; Calhoun, Loni

    2006-09-01

    To detect miscollected (wrong blood in tube [WBIT]) samples, our institution requires a second independently drawn sample (check-type [CT]) on previously untyped, non-group O patients who are likely to require transfusion. During the 17-year period addressed by this report, 94 WBIT errors were detected: 57% by comparison with a historic blood type, 7% by the CT, and 35% by other means. The CT averted 5 potential ABO-incompatible transfusions. Our corrected WBIT error rate is 1 in 3,713 for verified samples tested between 2000 and 2003, the period for which actual number of CTs performed was available. The estimated rate of WBIT for the 17-year period is 1 in 2,262 samples. ABO-incompatible transfusions due to WBIT-type errors are avoided by comparison of current blood type results with a historic type, and the CT is an effective way to create a historic type.

  13. Type II universal spacetimes

    NASA Astrophysics Data System (ADS)

    Hervik, S.; Málek, T.; Pravda, V.; Pravdová, A.

    2015-12-01

    We study type II universal metrics of the Lorentzian signature. These metrics simultaneously solve vacuum field equations of all theories of gravitation with the Lagrangian being a polynomial curvature invariant constructed from the metric, the Riemann tensor and its covariant derivatives of an arbitrary order. We provide examples of type II universal metrics for all composite number dimensions. On the other hand, we have no examples for prime number dimensions, and we prove the non-existence of type II universal spacetimes in five dimensions. We also present type II vacuum solutions of selected classes of gravitational theories, such as Lovelock, quadratic and L(Riemann) gravities.

  14. Using warnings to reduce categorical false memories in younger and older adults.

    PubMed

    Carmichael, Anna M; Gutchess, Angela H

    2016-07-01

    Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.

  15. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
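
    The pressure Poisson equation is obtained by taking the divergence of the momentum equation, so errors in the PIV-derived velocity field enter through the source term and the boundary conditions. The sketch below shows, on a synthetic source field rather than PIV data, how a perturbation of the data term propagates into the solved field, using a simple Jacobi iteration with homogeneous Dirichlet boundaries.

        import numpy as np

        def solve_poisson_dirichlet(f, h, n_iter=5000):
            """Jacobi iteration for  laplacian(p) = f  with p = 0 on the boundary."""
            p = np.zeros_like(f)
            for _ in range(n_iter):
                p[1:-1, 1:-1] = 0.25 * (
                    p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
                    - h**2 * f[1:-1, 1:-1]
                )
            return p

        n, h = 64, 1.0 / 63
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        f_true = np.sin(np.pi * x) * np.sin(np.pi * y)         # synthetic "true" source term
        noise = 0.05 * np.random.default_rng(3).standard_normal(f_true.shape)

        p_clean = solve_poisson_dirichlet(f_true, h)
        p_noisy = solve_poisson_dirichlet(f_true + noise, h)   # error injected into the data term
        print("max |pressure error| :", float(np.abs(p_noisy - p_clean).max()))
        print("max |source error|   :", float(np.abs(noise).max()))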

  16. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809
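
    In the error-driven models referred to above, every cue present on a trial is updated by the same prediction error, the difference between the obtained outcome and the summed associative strength of all present cues; adding a second excitatory cue during extinction therefore enlarges the (negative) error. A minimal Rescorla-Wagner-style sketch with illustrative parameters (not the study's model fits):

        def rescorla_wagner_trial(strengths, present_cues, outcome, alpha=0.3):
            """Update each present cue by the shared prediction error (outcome - sum V)."""
            prediction = sum(strengths[c] for c in present_cues)
            error = outcome - prediction
            for c in present_cues:
                strengths[c] += alpha * error
            return error

        V = {"target": 0.8, "added_excitor": 0.8}

        # extinction of the target alone vs. in compound with another excitatory cue
        err_alone = rescorla_wagner_trial(dict(V), ["target"], outcome=0.0)
        err_compound = rescorla_wagner_trial(dict(V), ["target", "added_excitor"], outcome=0.0)
        print("prediction error, target alone :", err_alone)       # -0.8
        print("prediction error, in compound  :", err_compound)    # -1.6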

  17. Residential magnetic fields predicted from wiring configurations: II. Relationships To childhood leukemia.

    PubMed

    Thomas, D C; Bowman, J D; Jiang, L; Jiang, F; Peters, J M

    1999-10-01

    Case-control data on childhood leukemia in Los Angeles County were reanalyzed with residential magnetic fields predicted from the wiring configurations of nearby transmission and distribution lines. As described in a companion paper, the 24-h means of the magnetic field's magnitude in subjects' homes were predicted by a physically based regression model that had been fitted to 24-h measurements and wiring data. In addition, magnetic field exposures were adjusted for the most likely form of exposure assessment errors: classic errors for the 24-h measurements and Berkson errors for the predictions from wire configurations. Although the measured fields had no association with childhood leukemia (P for trend=.88), the risks were significant for predicted magnetic fields above 1.25 mG (odds ratio=2.00, 95% confidence interval=1.03-3.89), and a significant dose-response was seen (P for trend=.02). When exposures were determined by a combination of predictions and measurements that corrects for errors, the odds ratio (odds ratio=2.19, 95% confidence interval=1.12-4.31) and the trend (P for trend=.007) showed somewhat greater significance. These findings support the hypothesis that magnetic fields from electrical lines are causally related to childhood leukemia but that this association has been inconsistent among epidemiologic studies due to different types of exposure assessment error. In these data, the leukemia risks from a child's residential magnetic field exposure appear to be better assessed by wire configurations than by 24-h area measurements. However, the predicted fields only partially account for the effect of the Wertheimer-Leeper wire code in a multivariate analysis and do not completely explain why these wire codes have been so often associated with childhood leukemia. The most plausible explanation for our findings is that the causal factor is another magnetic field exposure metric correlated to both wire code and the field's time-averaged magnitude. Copyright 1999 Wiley-Liss, Inc.
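
    The distinction between classic and Berkson error drawn above has a well-known consequence for effect estimates: classic error (noise added to the true exposure) attenuates a regression slope, while Berkson error (the true exposure scatters around the assigned or predicted value) leaves the slope approximately unbiased. The sketch below demonstrates this with a continuous outcome and ordinary least squares; it is illustrative only, since the study used logistic models for leukemia risk.

        import numpy as np

        rng = np.random.default_rng(4)
        n, true_slope = 100_000, 0.5

        # classic error: we observe w = x + u and regress y on w
        x = rng.normal(0, 1, n)
        y = true_slope * x + rng.normal(0, 1, n)
        w = x + rng.normal(0, 1, n)
        slope_classic = np.polyfit(w, y, 1)[0]

        # Berkson error: exposure is assigned as z, and the truth scatters around it: x = z + u
        z = rng.normal(0, 1, n)
        x_b = z + rng.normal(0, 1, n)
        y_b = true_slope * x_b + rng.normal(0, 1, n)
        slope_berkson = np.polyfit(z, y_b, 1)[0]

        print(f"classic-error slope ~ {slope_classic:.2f} (attenuated from {true_slope})")
        print(f"Berkson-error slope ~ {slope_berkson:.2f} (approximately unbiased)")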

  18. Monomeric, porous type II collagen scaffolds promote chondrogenic differentiation of human bone marrow mesenchymal stem cells in vitro

    NASA Astrophysics Data System (ADS)

    Tamaddon, M.; Burrows, M.; Ferreira, S. A.; Dazzi, F.; Apperley, J. F.; Bradshaw, A.; Brand, D. D.; Czernuszka, J.; Gentleman, E.

    2017-03-01

    Osteoarthritis (OA) is a common cause of pain and disability and is often associated with the degeneration of articular cartilage. Lesions to the articular surface, which are thought to progress to OA, have the potential to be repaired using tissue engineering strategies; however, it remains challenging to instruct cell differentiation within a scaffold to produce tissue with appropriate structural, chemical and mechanical properties. We aimed to address this by driving progenitor cells to adopt a chondrogenic phenotype through the tailoring of scaffold composition and physical properties. Monomeric type-I and type-II collagen scaffolds, which avoid potential immunogenicity associated with fibrillar collagens, were fabricated with and without chondroitin sulfate (CS) and their ability to stimulate the chondrogenic differentiation of human bone marrow-derived mesenchymal stem cells was assessed. Immunohistochemical analyses showed that cells produced abundant collagen type-II on type-II scaffolds and collagen type-I on type-I scaffolds. Gene expression analyses indicated that the addition of CS - which was released from scaffolds quickly - significantly upregulated expression of type II collagen, compared to type-I and pure type-II scaffolds. We conclude that collagen type-II and CS can be used to promote a more chondrogenic phenotype in the absence of growth factors, potentially providing an eventual therapy to prevent OA.

  19. Evaluation of Bone Thickness and Density in the Lower Incisors' Region in Adults with Different Types of Skeletal Malocclusion using Cone-beam Computed Tomography.

    PubMed

    Al-Masri, Maram M N; Ajaj, Mowaffak A; Hajeer, Mohammad Y; Al-Eed, Muataz S

    2015-08-01

    To evaluate the bone thickness and density in the lower incisors' region in orthodontically untreated adults, and to examine any possible relationship between thickness and density in different skeletal patterns using cone-beam computed tomography (CBCT). The CBCT records of 48 patients were obtained from the archive of orthodontic department comprising three groups of malocclusion (class I, II and III) with 16 patients in each group. Using OnDemand 3D software, sagittal sections were made for each lower incisor. Thicknesses and densities were measured at three levels of the root (cervical, middle and apical regions) from the labial and lingual sides. Accuracy and reliability tests were undertaken to assess the intraobserver reliability and to detect systematic error. Pearson correlation coefficients were calculated and one-way analysis of variance (ANOVA) was employed to detect significant differences among the three groups of skeletal malocclusion. Apical buccal thickness (ABT) in the four incisors was higher in class II and I patients than in class III patients (p < 0.05). There were significant differences between buccal and lingual surfaces at the apical and middle regions only in class II and III patients. Statistical differences were found between class I and II patients for the cervical buccal density (CBD) and between class II and III patients for apical buccal density (ABD). Relationship between bone thickness and density values ranged from strong at the cervical regions to weak at the apical regions. Sagittal skeletal patterns affect apical bone thickness and density at buccal surfaces of the four lower incisors' roots. Alveolar bone thickness and density increased from the cervical to the apical regions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
